Hearing Aids (Or, Metaphors are Personal)

Thursday morning, I gave the opening keynote at an event about the future of commerce at the Rotman School of Management in Toronto. I shared four insights:

  • The AI instinct is to view a reasoning problem as a data problem
    • Marketing hype leads many to imagine that artificial intelligence (AI) works like the human brain. Words like “cognitive” lead us to assume that computers think like we think. In fact, succeeding with supervised learning, as I explain in this article and this previous post, involves a shift in perspective to reframe a reasoning task as a data collection task.
  • Advances in deep learning are enabling radical new recommender systems
    • My former colleague Hilary Mason always cited recommender systems as a classic example of a misunderstood capability. Data scientists often consider recommenders to be a solved problem, given the widespread use of collaborative filtering, where systems infer person B’s interests based on similarity with person A’s interests. This approach, however, is often limited by the “cold start” problem: you need person A and person B to do stuff before you can infer how they are similar. Deep learning is enabling us to shift from comparing past transactional history (structured data) to comparing affinities between people and products (person A loves leopard prints, like this ridiculous Kimpton-style robe!). This doesn’t erase the cold start problem wholesale, but it opens a wide range of possibilities because taste is so hard to quantify and describe: it’s much easier to point to something you like than to articulate why you like it.
  • AI capabilities are often features, not whole products
  • AI will dampen the moral benefits of commerce if we are not careful
    • Adam Smith is largely remembered for his theories on the division of labor and the invisible hand that guides capitalistic markets. But he also wrote a wonderful treatise on moral sentiments where he argued that commerce is a boon to civilization because it forces us to interact with strangers; when we interact with strangers, we can’t have temper tantrums like we do at home with our loved ones; and this gives us practice in regulating our emotions, which is a necessary condition of rational discourse and the compromise at the heart of teamwork and democracy. As with many of the other narcissistic inclinations of our age, the logical extreme of personalization and eCommerce is a world where we no longer need to interact with strangers, no longer need to practice the art of tempered self-interest to negotiate a bargain. Being elegantly bored at a dinner party can be a salutary boon to happiness. David Hume knew this, and died happy; Jean-Jacques Rousseau did not, and died miserable.
This post on Robo Bill Cunningham does a good job explaining how image recognition capabilities are opening new roads in commerce and fashion.
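To make the recommender point above concrete, here is a toy sketch, with invented data, of user-based collaborative filtering and the cold-start problem it runs into: a brand-new user has no history to compare, so the system has nothing to recommend.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are products;
# 1 means the user bought or liked the product. All data invented.
interactions = np.array([
    [1, 1, 0, 1],   # user A
    [1, 1, 0, 0],   # user B, similar to A
    [0, 0, 0, 0],   # user C: brand new -- the "cold start"
])

def cosine(u, v):
    """Cosine similarity, defined as 0 when either vector is all zeros."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / norm if norm else 0.0

def recommend(target, interactions):
    """Score unseen items by similarity-weighted votes of other users."""
    sims = np.array([
        cosine(interactions[target], interactions[other])
        for other in range(len(interactions))
    ])
    sims[target] = 0.0                       # ignore self-similarity
    scores = sims @ interactions             # weighted item votes
    scores[interactions[target] > 0] = 0.0   # don't re-recommend owned items
    return scores

print(recommend(1, interactions))  # user B: item 3 surfaces, via similar user A
print(recommend(2, interactions))  # user C: all zeros -- no history, no signal
```

The deep-learning shift described above replaces the rows of this matrix with learned embeddings of people and products, so a new user can be placed in the affinity space from taste signals rather than transaction history.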

An elderly couple approached me after the talk. I felt a curious sense of comfort and familiarity. When I give talks, I scan the audience for signs of comprehension and approval, my attention gravitating towards eyes that emit kindness and engagement. On Thursday, one of those loci of approval was an elderly gentleman seated in the center about ten rows deep. He and his Russian companion had to have been in their late seventies or early eighties. I did not fear their questions. I embraced them with the openness that only exists when there is no expectation of judgment.

She got right to the point, her accent lilting and Slavic. “I am old,” she said, “but I would like to understand this technology. What recommendations would you give to elderly people like myself, who grew up in a different age with different tools and different mores (she looked beautifully put together in her tweed suit), to learn about this new world?”

I told her I didn’t have a good answer. The irony is that, by asking about something I don’t normally think about, she utterly stumped me. But it didn’t hurt to admit my ignorance and need to reflect. By contrast, I’m often able to conjure some plausible response to those whose opinion I worry about most, who elicit my insecurities because my sense of self is wrapped up in their approval. The left-field questions are ultimately much more interesting.

The first thing that comes to mind if we think about how AI might impact the elderly is how new voice recognition capabilities are lowering the barrier to entry to engage with complex systems. Gerontechnology is a thing, and there are many examples of businesses working to build robots to keep the elderly company or administer remote care. My grandmother, never an early adopter, loves talking to Amazon Alexa.

But the elegant Russian woman was not interested in how the technology could help her; she wanted to understand how it works. Democratizing knowledge is harder than democratizing utility, but ultimately much more meaningful and impactful (as a U Chicago alum, I endorse a lifelong life of the mind).

Then something remarkable happened. Her gentleman friend interceded with an anecdote.

“This,” he started, referring to the hearing aid he’d removed from his ear, “is an example of artificial intelligence. You can hear from my accent that I hail from the other side of the Atlantic (his accent was upper-class British; he’d studied at Harvard). Last year, we took a trip back with the family and stayed in a quintessential British town with quintessential British pubs. I was elated by the prospect of returning to the locals of my youth, of unearthing the myriad memories lodged within childhood smells and sounds and tastes. But my first visit to a pub was intolerable! My hearing aid had become thoroughly Canadian, adapted to the acoustics of airy buildings where sound is free to move amidst tall ceilings. British pubs are confined and small! They trap the noise, and the din completely bombarded my hearing aid. But after a few days, it adjusted, as these devices are wont to do these days. And this adaptation, you see, shows how devices can be intelligent.”

Of course! A hearing aid is a wonderful example of an adaptive piece of technology, of something whose functionality changes automatically with context. His anecdote brilliantly showed how technologies are always more than the functionalities they provide, are rather opportunities to expose culture and anthropology: Toronto’s adolescence as a city indexed by its architecture, in contrast to the wizened wood of an old-world pub; the frustrating compromises of age and fragility, the nostalgic ideal clipped by the time the device required to recalibrate; the incredible detail of the personal as a theatrical device to illustrate the universal.
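The gentleman’s anecdote can be caricatured in a few lines of code. This is a loose sketch, not how any real hearing aid works: assume the device tracks ambient loudness with an exponential moving average and sets a compensating gain, so a sudden change of room leaves it briefly miscalibrated.

```python
def adapt(levels, alpha=0.1, target=1.0):
    """Yield a gain for each ambient level; gain falls as the noise estimate rises."""
    estimate = target
    for level in levels:
        estimate = (1 - alpha) * estimate + alpha * level  # EMA of ambient noise
        yield target / estimate                            # compensating gain

airy_hall = [1.0] * 20     # Toronto: steady, moderate loudness (invented numbers)
cramped_pub = [4.0] * 20   # British pub: trapped, bouncing noise

gains = list(adapt(airy_hall + cramped_pub))
# On entering the pub the estimate lags, so the gain is briefly far too
# high -- the "intolerable" first visit -- then settles as the device adapts.
print(round(gains[19], 2), round(gains[20], 2), round(gains[-1], 2))
```

The constant `alpha` sets how quickly the device forgets its old environment; the gentleman’s “few days” of adjustment corresponds to a very small `alpha`.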

What’s more, the history of hearing aids does a nice job illustrating the more general history of technology in this our digital age.

Partial deafness is not a modern phenomenon. As ever, the tools to overcome it have changed shape over time.

This 1967 British Pathé primer on the history of hearing aids is a total trip, featuring radical facial hair and accompanying elevator music. They pay special attention to using the environment to camouflage cumbersome hearing aid machinery.

One thing that stands out when you go down the rabbit hole of hearing aid history is the importance of design. Historical hearing aids were analogue, not digital. People used naturally occurring objects, like shells or horns, to make ear trumpets like the one pictured in the featured image above. Some, including 18th-century portrait painter Joshua Reynolds, did not mind exposing their physical limitations publicly. Reynolds was renowned for carrying an ear trumpet and even represented his partial deafness in self-portraits painted later in life.

Reynolds’ self-portrait as deaf (1775)

Others preferred to deflect attention from their disabilities, camouflaging their tools in the environment or even transforming them into signals of power. At the height of the Napoleonic Age, King John VI of Portugal commissioned an acoustic throne with two open lion mouths at the end of the arms. These lion mouths became his makeshift ears, design transforming weakness into a token of strength; visitors were required to kneel before the chair and speak directly into the animal heads.

King John VI’s acoustic throne, its lion head ears requiring submission

The advent of the telephone changed hearing aid technology significantly. Since the early 20th century, hearing aids have gone from electronic to transistor-based to digital. Following the exponential dynamics of Moore’s Law, their size has shrunk drastically: contemporary tyrants need not camouflage their weakness behind visual symbols of power. Only recently have they been able to dynamically adapt to their surroundings, as in the anecdote told by the British gentleman at my talk. Time will tell how they evolve in the near future. Awesome machine listening research in labs like those run by Juan Pablo Bello at NYU may unlock new capabilities where aids can register urban mood, communicating the semantics of a surrounding as opposed to merely modulating acoustics. Making sense of sound requires slightly different machine learning techniques than making sense of images, as Bello explores in this recent paper. In 50 years’ time, modern digital hearing aids may seem as eccentric as a throne with lion-mouth ears.

The world abounds in strangeness. The saddest state of affairs is one of utter familiarity, one where the world we knew yesterday remains the world we will know tomorrow. It is the trap of the filter bubble, the closing of the mind, the resilient force of inertia and sameness. I would never have included a hearing aid in my toolbox of metaphors to help others gain an intuition of how AI works or will be impactful. For I have never lived in the world the exact same way the British gentleman has lived in the world. Let us drink from the cup of the experiences we ourselves never have. Let us embrace the questions from left field. Let each week, let each day, open our perspectives one sliver larger than the day before. Let us keep alive the temperance of commerce and the sacred conditions of curiosity.

The featured image is of Madame de Meuron, a 20th-century Swiss aristocrat and eccentric. Meuron is like the fusion of Jean des Esseintes–the protagonist of Huysmans’s paradigmatic decadent novel, À Rebours, the poisonous book featured in Oscar Wilde’s The Picture of Dorian Gray–and Gertrude Stein or Peggy Guggenheim. She gives life to characters in Thomas Mann novels. She is a modern-day Quijote, her mores and habits out of sync with the tailwinds of modernity. Eccentricity, perhaps, is the symptom of history. She viewed her deafness as an asset, not a liability, for she could control the input from her surroundings: “So ghör i nume was i wott! – So I only hear what I want to hear!”

Transitioning from Academia to Business

The wittiest (and longest) tweet thread I saw this week was (((Curtis Perry)))‘s masterful narrative of the life of a graduate student as akin to the life of Job:

The first tweet chapter in Perry’s grad student life of Job. For the curious, Perry’s Twitter profile reads: Quinquegenarian lit prof and chronic feeder of Titivillus. Professional perseverator. Fellowship of the half-circle.

The timing of the tweet epic was propitious in my little subjective corner of the universe: Just two days before, I’d given a talk for Stanford’s Humanities Education Focal Group about my transition from being a disgruntled PhD in comparative literature to being an almost-functioning-normal-human-being executive at an artificial intelligence startup and a venture partner at a seed-stage VC firm.

Many of the students who attended the talk, ranging from undergrad seniors to sixth- or seventh-year PhDs, reached out afterwards to thank me and ask for additional advice. It was meaningful to give back to the community I came from and provide advice of a kind I sought but couldn’t find (or, more accurately, wasn’t prepared to listen to) when I struggled during the last two years of my PhD.

This post, therefore, is for the thousands of students studying humanities, fearing the gauntlet of the academic job market, and wondering what they might do to explore a different career path or increase their probability of success once they do. I offer only the anecdotes of one person’s successes and failures. Some things will be helpful for others; some will not. If nothing else, it serves as testimony that people need not be trapped in the annals of homogeneity. The world is a big and mighty place.

Important steps in my transition


As I narrated in a previous post, I hit rock bottom in my last year of graduate school. I remember sitting in Stanford’s Green Library at a pinnacle of anxiety, festering in a local minimum where I couldn’t write, couldn’t stick with the plan for my dissertation, couldn’t do much of anything besides play game after game of Sudoku to desperately pass the time. I left Stanford for a bit. I stopped trying. Encouraged by my worrying mother, I worked at a soup kitchen in Boston, pretending it was my job. I’d go in every day at 7:00 am and leave every afternoon at 3:00 pm. Working with my hands, working for others, gradually nurtured me back to stability.

It was during this mental breakdown that applications for a sixth-year dissertation fellowship were due. I forced myself to write a god-awful application in the guest bedroom at my parents’ Boston townhouse. It was indescribably hard. Paralyzed, I submitted an alienated abstract and dossier. A few months later, I received a letter informing me that the Humanities Center committee had rejected my application.

I remember the moment well. I was at Pluto’s salad joint on University Avenue in Palo Alto. By then, I had returned to Stanford and was working one day per week at Saint Martin’s Soup Kitchen in San Francisco, 15 hours per week at a location-based targeted advertising startup called Vantage Local (now Frequence), 5 hours per week tutoring Latin and Greek around the Valley, playing violin regularly, running, and reserving my morning hours to write. I had found balance, balance fit for my personality and needs. I had started working with a career counselor to consider alternative career paths, but had yet to commit to a move out of academia.

The letter gave me clarity. It was the tipping point I needed to say, that’s it; I’m done; I’m moving on. It did not feel like failure; it felt like relief. My mind started to plot next steps before I finished reading the rejection letter.


The timing couldn’t have been better. My friend Anaïs Saint-Jude had started Bibliotech, a forward-thinking initiative devoted to exploring the value graduate-level training in the humanities could provide to technology companies. I was fortunate enough to be one of the students who pitched their dissertation to conference attendees, including Silicon Valley heavyweights like Geoffrey Moore, Edgar Masri, Jeff Thermond, Bob Tinker, and Michael Korcuska, all of whom have since become mentors and friends. My intention to move into the private sector came off loud and clear at the event. Thanks to my internship at the advertising company, I had some exposure to the diction and mores of startups. The connections I made there were invaluable to my career. People opened doors that would have otherwise remained shut. All I needed was the first opportunity, and a few years to recalibrate my sense of self as I adapted to the reward system of the private sector.


I’ve mentored a few students who made similar transitions from academia into tech companies, and all have asked me how to defend their choice of pursuing a PhD instead of going directly into marketing, product, sales, whatever the role may be. Our culture embraces a bizarre essentialism, where we’re supposed to know what we want to be when we grow up from the ripe old age of 14, as opposed to finding ourselves in the self we come to inhabit through the serendipitous meanderings of trial and tribulation. (Ben Horowitz has a great commencement speech on the fallacy of following your passion.) The symptom of this essentialism in the transition from humanities to, say, marketing, is this strange assumption that we need to justify the PhD as part of a logical narrative, as some step in a master plan we intended from the beginning.

That just can’t be true. I can’t think of anyone who pursues a PhD in French literature because she feels it’s the most expedient move for a successful career in marketing. We pursue literature degrees because we love literature, we love the life of the mind, we are gluttons for the riches of history and culture. And then we realize that the professional realities aren’t quite what we expected. And, for some of us, acting for our own happiness means changing professions.

One thing I did well in my transition was to remain authentic. When I interviewed and people asked me about my dissertation, I got really great at giving them a 2-minute, crisp explanation of what I wrote about and why it was interesting. What they saw was an ability to communicate a complex topic in simple, compelling words. They saw the marks of a good communicator, which is crucial for enterprise marketing and sales. I never pretended I wanted to be a salesperson. I showed how I had excelled in every domain I’d played in, and could do the same in the next challenge and environment.

Selecting the right opportunity

Every company is different. Truly. Culture, stage, product, ethics, goals, size, role, so many factors contribute to shaping what an experience is like, what one learns in a role, and what future opportunities a present experience will afford.

When I left graduate school, I intentionally sought a mid-sized private company that had a culture that felt like a good fit for a fresh academic. It took some time, but I ended up working at a legaltech startup called Intapp. I wanted an environment where I’d benefit from a mentor (after all, I didn’t really have any business skills besides writing and teaching) and where I would have insight into strategic decisions made by executive management (as opposed to being far removed from executives at a large company like Google or Facebook). Intapp had the right level of nerdiness. I remember talking to the CTO about Confucius during my interviews. I plagued my mentor Dan Bressler with endless existential drivel as I went through the growing pains of becoming a business person. I felt embarrassed and pushy asking for a seat at the table for executive meetings, but made my way in on multiple occasions. Intapp sold business software to law firms. The what of the product was really not that interesting. But I learned that I loved the how, loved supporting the sales teams as a subject matter expert on HIPAA and professional responsibility, loved the complex dance of transforming myriad input from clients into a general product, loved writing on tight timelines and with feedback across the organization. I learned so incredibly much in my first role. It was a foundation for future success.

I am fortunate to be a statistical anomaly as a woman. Instead of applying for jobs where I satisfy skill requirements, I tend to seek opportunities with exponential growth potential. I come in knowing a little about the role I have to accomplish, and leave with a whole new set of skills. This creates a lot of cognitive dissonance and discomfort, but I wouldn’t have it any other way. My grey hairs may lead me to think otherwise soon, but I doubt it.


Last but certainly not least, I have always remained humble and never felt like a task was beneath me. I grew up working crappy jobs as a teenager: I was a janitor; a hostess; a busgirl; a sales representative at the Bombay Company in the mall in Salem, New Hampshire; a clerk at the Court Theater at University of Chicago; a babysitter; a lawnmower; an intern at a BlackBerry provisioning tech company, where I basically drove a big truck around and lugged stuff from place to place and babysat the CEO’s daughter. I see no work as beneath me, and view grunt work as the dues I pay for the amazing, amazing opportunities I have in my work (like giving talks to large audiences and meeting smart and inspiring people almost every day).

Having this humility helps enormously when you’re an entrepreneur. I didn’t mind starting as a marketing specialist, as I knew I could work hard and move up. I’ll yell at the computer in frustration when I have to upload email addresses to a GoToWebinar console or get the HTML to format correctly in a Mailchimp newsletter, but I’m working on showing greater composure as I grow into a leader. I always feel like I am going to be revealed as a fraud, as not good enough. This incessant self-criticism is a hallmark of my personality. It keeps me going.

Advice to current students

A rad Roman mosaic with the Greek dictum, Know Thyself

Finish your PhD

You’ll buy options for the future. No one cares what you studied or what your grades were. They do care that you have a doctorate, and it can open up all sorts of opportunities you don’t think about when you’re envisioning the transition. I’ve lectured at multiple universities and even taught a course at the University of Calgary Faculty of Law. This ability to work as an adjunct professor would have been much, much harder to procure if I were only ABD.

This logic may not hold for students in their first year, for whom four more years is a lot of opportunity cost. But it’s not that hard to finish if you lower your standards and just get shit done.

Pity the small-minded

Many professors and peers will frown upon a move to business for all sorts of reasons. Sometimes it’s progressive ideology. Sometimes it’s insecurity. Most of the time it’s just lack of imagination. Most humanists profess to be relativists; you’d think they could extend that relativism to the choice of a profession. Just know that the emotional pressure of feeling like a failure if you don’t pursue a research career dwindles almost immediately once your value compass clocks a different true north.

Accept it’s impossible to imagine the unknown

The hardest part of deciding to do something radically different is that you have no mental model of your future. If you follow the beaten path, you can look around to role model professors and know what your life will look like (with some variation depending on which school you end up in). But it’s impossible to know what a different decision will lead to. This riddles the decision with anxiety, requiring something like a blind leap of faith. A few years down the line, you come to appreciate the creative possibility of a blank future.


Get out and explore

There are so many free meetups and events taking place everywhere. Go to them. Learn something new. See what other people are doing. Ask questions. Do informational interviews. Talk to people who aren’t like yourself. Talk to me! Keep track of what you like and don’t like.


Learn to collaborate

One of the biggest changes in moving from academia to business is the how of work. Cultures vary, but businesses are generally radically collaborative places, while humanities work is generally isolated and individual. It’s worthwhile to co-author a paper with a fellow grad student or build skills running a workshop or meetup. These logistics, communication, and project management skills are handy later on (and are good for your resume).

Experiment with different writing styles

Graduate school prepares you to write 20-page papers, which are great preparation for peer-reviewed journals and, well, nothing else. It doesn’t prepare you to write a good book. It doesn’t prepare you to write a good blog post or newspaper article. Business communication needs to be terse and on point so people can act on it. Engineers need guidance and clarity, need a sense of continuity of purpose. Customers need you to understand their point of view. Audiences need stories or examples to anchor abstract ideas. Having the agility to fit form to purpose is an invaluable skill for business communications. It’s really hard. Few do it well. Those who do are prized.

Learn how to give a good talk

Reading a paper aloud to an audience is the worst. Just don’t do it. People like funny pictures.

Know thyself

There is no right path. We’re all different. Business was a great path for me, and I’ve molded my career to match my interests, skill, personality, and emotional sensitivities. You may thrive in a totally different setting. So keep track of what you like and dislike. Share this thinking with others you love and see if what they think of you is similar to what you think of you. Figuring this out is the trickiest and potentially most valuable exercise in life. And sometimes it’s a way to transform what feels like a harrowing experience into an opportunity to gain yet another inch of soul.

The featured image is from William Blake’s illustrated Book of Job, depicting the just man rebuked by his friends. Blake has masterful illustrations of the Bible, including this radical image from Genesis, where Eve’s wandering eye displays a proleptic fall from grace, her vision, her fantasy too large for the limits of what Adam could safely provide – a heroine of future feminists, despite her fall. 





Censorship and the Liberal Arts

A few months ago, I interviewed a researcher highly respected in his field to support marketing efforts at my company. Before conducting the interview, I was asked to send my questions for pre-approval by the PR team of the corporation with which the researcher is affiliated. Backed by the inimitable power of their brand, the PR scions struck crimson lines through nearly half my questions. They were just doing their job, carrying out policy to draw no public attention to questions of ethics, safety, privacy, security, fear. Power spoke. The sword showed that it is always mightier than the pen, fool ourselves though we may.

Pangs of injustice rose fast in my chest. And yet, I obeyed.

Was this censorship? Was I a coward?

Intellectual freedom is nuanced in the private sector because when we accept a job we sign a social contract. In exchange for a salary and a platform for personal development and growth, we give up full freedom of expression and absorb the values, goals, norms, and virtual personhood of the organization we join. The German philosopher Immanuel Kant explains the tradeoffs we make when constructing our professional identity in What is Enlightenment? (apologies for the long quotation, but it needed to be cited in full):

“This enlightenment requires nothing but freedom–and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue–drill!” The tax collector: “Do not argue–pay!” The pastor: “Do not argue–believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

On the other hand, the private use of reason may frequently be narrowly restricted without especially hindering the progress of enlightenment. By ‘public use of one’s reason’ I mean that use which a man, as scholar, makes of it before the reading public. I call ‘private use’ that use which a man makes of his reason in a civic post that has been entrusted to him. In some affairs affecting the interest of the community a certain [governmental] mechanism is necessary in which some members of the community remain passive. This creates an artificial unanimity which will serve the fulfillment of public objectives, or at least keep these objectives from being destroyed. Here arguing is not permitted: one must obey. Insofar as a part of this machine considers himself at the same time a member of a universal community–a world society of citizens–(let us say that he thinks of himself as a scholar rationally addressing his public through his writings) he may indeed argue, and the affairs with which he is associated in part as a passive member will not suffer. Thus it would be very unfortunate if an officer on duty and under orders from his superiors should want to criticize the appropriateness or utility of his orders. He must obey. But as a scholar he could not rightfully be prevented from taking notice of the mistakes in the military service and from submitting his views to his public for its judgment. The citizen cannot refuse to pay the taxes levied upon him; indeed, impertinent censure of such taxes could be punished as a scandal that might cause general disobedience. Nevertheless, this man does not violate the duties of a citizen if, as a scholar, he publicly expresses his objections to the impropriety or possible injustice of such levies. A pastor, too, is bound to preach to his congregation in accord with the doctrines of the church which he serves, for he was ordained on that condition. 
But as a scholar he has full freedom, indeed the obligation, to communicate to his public all his carefully examined and constructive thoughts concerning errors in that doctrine and his proposals concerning improvement of religious dogma and church institutions. This is nothing that could burden his conscience. For what he teaches in pursuance of his office as representative of the church, he represents as something which he is not free to teach as he sees it. He speaks as one who is employed to speak in the name and under the orders of another. He will say: “Our church teaches this or that; these are the proofs which it employs.” Thus he will benefit his congregation as much as possible by presenting doctrines to which he may not subscribe with full conviction. He can commit himself to teach them because it is not completely impossible that they may contain hidden truth. In any event, he has found nothing in the doctrines that contradicts the heart of religion. For if he believed that such contradictions existed he would not be able to administer his office with a clear conscience. He would have to resign it. Therefore the use which a scholar makes of his reason before the congregation that employs him is only a private use, for no matter how sizable, this is only a domestic audience. In view of this he, as preacher, is not free and ought not to be free, since he is carrying out the orders of others. On the other hand, as the scholar who speaks to his own public (the world) through his writings, the minister in the public use of his reason enjoys unlimited freedom to use his own reason and to speak for himself. That the spiritual guardians of the people should themselves be treated as minors is an absurdity which would result in perpetuating absurdities.”

Kant makes a tricky distinction between our public and private use of reason. What he calls “public use of reason” is what we normally consider to be private: the sacred space of personal opinion, not as unfettered stream of consciousness, but as the reflections and opinions that result from our sense of self as part of the species Homo sapiens (some criticize this humanistic focus and think we should expand the space of commonality to include animals, plants, robots, rocks, wind, oceans, and other types of beings). Beliefs that are fair because they apply to me just as they apply to you and everyone else. Kant deems this “public” because he espouses a particular take on reason that is tied up with our ability to project ourselves as part of a larger universal we call humanity: for Kant, our freedom lies not in doing whatever we want, not in behaving like a toddler who gets to cry on a whim or roam around without purpose or drift in opiate stupor, but rather in our willingly adhering to self-imposed rules that enable membership in a collectivity beyond the self. This is hard to grasp, and I’m sure Kant scholars would poke a million holes in my sloppy interpretation. But, at least for me, the point here is public reason relates to the actions of our mind when we consider ourselves as citizens of the world, which, precisely because it is so broad, permits fierce individuality.

By contrast, “private use of reason” relates to a sense of self within a smaller group, not all of humanity. So, when I join a company, by making that decision, I willingly embrace the norms, culture, and personhood of this company. Does this mean I create a fictional sub-self every time I start a new job or join some new club or association? And that this fictional self is governed by different rules than the real me that exercises public reason in the comfort of my own mind and conscience? I don’t think so. It would require a fictional sub-self if the real self were a static thing that persists over time. But there’s no such thing as the real self. It’s a user illusion (hat tip to Dan Dennett for the language). We come as dyads and triads, the connections between the neurons in our brains ever morphing to the circumstances we find ourselves in. Because we are mortal, because we don’t have infinite time to explore the permutations of possible selves that would emerge as we shapeshift from one collectivity to the next, it’s important that we select our affiliations carefully, especially if we accept the tradeoffs of “private use of reason.” We don’t have time to waste our willful obedience on groups whose purpose and values skew too far from what our public reason holds dear. And yet, the restriction of self-interest that results from being part of a team is quite meaningful. It is perhaps the most important reason why we must beware the lure of a world without work.

This long exploration of Kant’s distinction between public and private reason leads to the following conclusion: No, I argue, it was not an act of cowardice to obey the PR scions when they censored me. I was exercising my “private use of reason,” as it would not have been good for my company to pick a fight. In this post, by contrast, I exercise my “public use of reason” and make manifest the fact that, as a human being, I feel pangs of rage against any form of censorship, against any limitation of inquiry, curiosity, discourse, and expression.

But do I really mean any? Can I really mean any in this age of Trumpism, where the First Amendment serves as a rhetorical justification to traffic fake news, racism, or pseudo-scientific justifications to explain why women don’t occupy leadership roles at tech companies?* And, where and how do we draw the line between actions that aren’t right according to public reason but are right according to private reason and those that are simply not right, period? By making a distinction between general and professional ethics, do we not risk a slippery slope where following orders can permit atrocities, as Hannah Arendt explores in Eichmann in Jerusalem?

These are dicey questions.

There are others that are even more dicey and delicate. What happens if the “private use of reason” is exercised not within a corporation or office, affiliations we choose to make (should we be fortunate enough to choose…), but in a collectivity defined by traits like age, race, gender, sexuality, religion, or class (where elective choice is almost always absent except when it absolutely is present (e.g., a decision to be transgender))? These categories are charged with social meaning that breaks Kant’s logic. Naive capitalists say we can earn our class through hard work. Gender and race are not discrete categories but continuous variables on a spectrum defined by local contexts and norms: In some circles, gender is pure expression of mind over body, a malleable sense of self in a dance with the impressions and reactions of others; in others, the rules of engagement are fixed to the point of submission and violence. Identity politics don’t follow the logic of the social contract. A willed trade-off doesn’t make sense here. What act of freedom could result from subsuming individual preference for the greater good of a universal or local whole? (Open to being told why I’m totally off the mark, as these issues are far from my forte.)

What’s dangerous is when the experience of being part of a minority expresses itself as willed censorship, as a cloak to avoid the often difficult challenge of grappling with the paradoxical twists of private and public reason. When the difficult nuances of ethics reduce to the cocoon of exclusion, thwarting the potential of identifying common ground.

The censorship I accepted to enact the constraints of my freedom as a professional differs from the censorship contemporary progressives demand from professors and peers. I agree with the defenders of liberalism that the distinction between private and public reason should collapse at the university. That the university should be a place where young minds are challenged, where we flex the muscles of transforming a gut reaction into an articulated response. Where being exposed to ideas different from one’s own is an opportunity for growth. Where, as dean of students Jay Ellison wrote to the incoming class of 2020 at the University of Chicago, “we do not support so-called ‘trigger warnings,’ we do not cancel invited speakers because their topics might prove controversial,** and we do not condone the creation of intellectual ‘safe spaces’ where individuals can retreat from ideas and perspectives at odds with their own.” As an alumna of the University of Chicago, I felt immense pride at reading Bret Stephens’ recent New York Times op-ed about why Robert Zimmer is America’s best university president. Gaining practice in the art of argument and debate, in reading or hearing an idea and subjecting it to critical analysis, in appreciating why we’ve come to espouse some opinion given the set of circumstances afforded to us in our minute slice of experience in the world, in renting our positions until evidence convinces us to change our point of view, in deeply listening to others to understand why they think what they think so we can approach a counterargument from a place of common ground, all of these things are the foundations of being a successful professional. Being a good communicator is not a birthright. It is a skill we have to learn and exercise just like learning how to ride a bike or code or design a website.
Except that it is much harder, as it requires a Stoic’s acceptance that we cannot control the minds or emotions of others; We can only seek to influence them from a place of mutual respect.

Given the ungodly cost of a university education in the United States, and our society’s myopic focus on creating productive workers rather than skeptical citizens, it feels horribly elitist to advocate for the liberal arts in this century of STEM, robots, and drones. But my emotions won’t have it otherwise: They beat with the proud tears of truth and meaning upon reading articles like Marilynne Robinson’s What Are We Doing Here?, where she celebrates the humanities as our reverence for the beautiful, for the possible, for the depth we feel in seeing words like grandeur and the sadness that results when we imagine a world without the vastness of the Russian imagination or the elegance of the Chinese eye and hand.

But as the desire to live a meaningful life is not enough to fund the liberal arts, perhaps we should settle for a more pragmatic argument. Businesses are made of people, technologies are made by people, technologies are used by people. Every day, every person in every corporation faces ethical conundrums like the censorship example I outlined above. How can we approach these conundrums without tools or skills to break down the problem? How can we work to create the common ground required for effective communication if we’ve siphoned ourselves off into the cocoon of our subjective experience? Our universities should evolve, as the economic-social-political matrix is not what it once was. But they should not evolve at the expense of the liberal arts, which teach us how to be free.

*One of the stranger interviews James Damore conducted after his brief was leaked from Google was with the conservative radio host Stefan Molyneux, who suggested that conservatives and libertarians make better programmers because they are accustomed to dissecting the world in clear, black and white terms, as opposed to espousing the murky relativism of the liberals. It would be a sad world indeed if our minds were so inflexible that they lacked the ability to cleave a space to practice a technical skill.

**Sam Harris has discussed academic censorship and the tyranny of the progressives widely on the Waking Up podcast (and has met no lack of criticism for doing so), interviewing figures like Charles Murray, Nicholas Christakis, Mark Lilla, and others.

The featured image is from some edition of Areopagitica, a speech John Milton (yep, the author of Paradise Lost) gave to the British Parliament to protest censorship. In this speech, Milton argues that virtue is not innate but learned, that just as we have to exercise our self-restraint to achieve the virtue of temperance, so too should we be exposed to all sorts of ideas from all walks of life to train our minds in virtue, to give ourselves the opportunity to be free. I love that bronze hand.


The Sagrada Familia is a castle built by Australian termites.

The Sagrada Familia is not a castle built by Australian termites, and never will be. ’Tis utter blasphemy.

The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell, in near silence, near but for the methodical gnawing, not unlike that of a mouse nibbling rapaciously on parched pasta left uneaten all these years but preserved under the thick dust on the thin cardboard with the thin plastic window enabling her to view what remained after she’d cooked just one serving, with butter, for her son, there emerged and fell, with the sublime transience of Andy Goldsworthy, a neo-Gothic church of organic complexity on par with that imagined by Antoni Gaudí i Cornet, whose Sagrada Familia is scheduled for completion in 2026, a full century after the architect died in a tragic tram crash, distracted by the recent rapture of his prayer.

The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell a structure so eerily resemblant of the one Antoni Gaudí imagined before he died, neglected like a beggar in his shabby clothes, the doctors unaware they had the chance to save the mind that preempted the fluidity of contemporary parametric architectural design by some 80 odd years, a mind supple like that of Poincaré, singular yet part of a Zeitgeist bent on infusing time into space like sandalwood in oil, inseminating Euclid’s cold geometry with femininity and life, Einstein explaining why Mercury moves retrograde, Gaudí rendering the holy spirit palpable as movement in stone, fractals of repetition and difference giving life to inorganic matter, tension between time and space the nadir of spirituality, as Andrei Tarkovsky went on to explore in his films.

tarkovsky mirror
From Andrei Tarkovsky’s Mirror. As Tarkovsky wrote of his films in Sculpting in Time: “Just as a sculptor takes a lump of marble, and, inwardly conscious of the features of his finished piece, removes everything that is not a part of it — so the film-maker, from a ‘lump of time’ made up of an enormous, solid cluster of living facts, cuts off and discards whatever he does not need, leaving only what is to be an element of the finished film.”

The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell a structure so eerily resemblant of the one Antoni Gaudí imagined before he died, with the (seemingly crucial) difference that the termites built their temple without blueprints or plan, gnawing away the silence as a collectivity of single stochastic acts which, taken together over time, result in a creation that appears, to our meaning-making minds, to have been created by an intelligent designer, this termite Sagrada Familia a marvelous instance of what Dennett calls Darwin’s strange inversion of reasoning, an inversion that admits to the possibility that absolute ignorance can serve as master artificer, that IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT*, that structures might emerge from the local activity of multiple parts, amino acids folding into proteins, bees flying into swarms, bumper-to-bumper traffic suddenly flowing freely, these complex release valves seeming like magic to the linear perspective of our linear minds.

The Sagrada Familia is not a castle built by Australian termites, and yet, the eerie resemblance between the termite and the tourist Sagrada Familias serves as a wonderful example to anchor a very important cultural question as we move into an age of post-intelligent design, where the technologies we create exhibit competence without comprehension, diagnosing lungs as cancerous or declaring that individuals merit a mortgage or recommending that a young woman would be a good fit for a role on a software engineering team or getting better and better at Go by playing millions of games against itself in a schizophrenic twist resemblant of the pristine pathos of Stefan Zweig, one’s own mind an asylum of exiled excellence during the travesty of the second world war, why, we’ve come full circle and stand here at a crossroads, bidden by a force we ourselves created to accept the creative potential of Lucretius’ swerve, to kneel at the altar of randomness, to appreciate that computational power is not just about shuffling 1s and 0s with speed but shuffling them fast enough to enable a tiny swerve to result in wondrous capabilities, and to watch as, perhaps tragically, we apply a framework built for intelligent design onto a Darwinian architecture, clipping the wings of stochastic potential, working to wrangle our gnawing termites into a straitjacket of cause, while the systems beating Atari, by no act of strategic foresight but by the blunt speed of iteration, make a move so strange and so outside the realm of verisimilitude that, as Kasparov succumbing to Deep Blue, we mistake a bug for brilliance.

The Sagrada Familia is not a castle built by Australian termites, and yet, it seems plausible that Gaudí would have reveled in the eerie resemblance between a castle built by so many gnawing termites and the temple Josep Maria Bocabella i Verdaguer, a bookseller with a popular fundamentalist newspaper, “the kind that reminded everybody that their misery was punishment for their sins,”** commissioned him to build.

A portrait of Josep Maria Bocabella, who commissioned Gaudí to build the Sagrada Familia.

Or would he? Gaudí was deeply Catholic. He genuflected at the temple of nature, seeing divine inspiration in the hexagons of honeycombs, imagining the columns of the Sagrada Familia to lean, like buttresses, as symbols of the divine trinity of the father (the vertical axis), son (the horizontal axis), and holy spirit (the vertical meeting the horizontal in the crux of the diagonal). His creativity, therefore, always stemmed from something more than intelligent design, stood as an act of creative prayer to render homage to God the creator by creating an edifice that transformed, in fractals of repetition in difference, inert stone into movement and life.

The tops of the columns inside the Sagrada Familia have twice as many lines as their roots, the doubling generating a sense of movement and life.

The Sagrada Familia is not a castle built by Australian termites, and yet, the termite Sagrada Familia actually exists as a complete artifact, its essence revealed to the world rather than being stuck in unfinished potential. And yet, while we wait in joyful hope for its imminent completion, this unfinished, 144-year-long architectural project has already impacted so many other architects, from Frank Gehry to Zaha Hadid. This unfinished vision, this scaffold, has launched a thousand ships of beauty in so many other places, changing the skylines of Bilbao and Los Angeles and Hong Kong. Perhaps, then, the legacy of the Sagrada Familia is more like that of Jodorowsky’s Dune, an unfinished film that, even from its place of stunted potential, changed the history of cinema. Perhaps, then, the neglect the doctors showed to Gaudí, the bearded beggar distracted by his act of prayer, was one of those critical swerves in history. Perhaps, had Gaudí lived to finish his work, architects during the century wouldn’t have been as puzzled by the parametric requirements of his curves and the building wouldn’t have gained the puzzling aura it retains to this day. Perhaps, no matter how hard we try to celebrate and accept the immense potential of stochasticity, we will always be makers of meaning, finders of cause, interpreters needing narrative to live grounded in our world. And then again, perhaps not.

The Sagrada Familia is not a castle built by Australian termites. The termites don’t care either way. They’ll still construct their own Sagrada Familia.

The Sagrada Familia is a castle built by Australian termites. How wondrous. How essential must be these shapes and forms.

The Sagrada Familia is a castle built by Australian termites. It is also an unfinished neo-Gothic church in Barcelona, Spain. Please, terrorists, please don’t destroy this temple of unfinished potential, this monad brimming with the history of the world, each turn, each swerve a pivot down a different section of the encyclopedia, coming full circle in its web of knowledge, imagination, and grace.

The Sagrada Familia is a castle built by Australian termites. We’ll never know what Gaudí would have thought about the termite castle. All we have are the relics of his Poincaréan curves, and fish lamps to illuminate our future.

Frank Gehry’s fish lamps, which carry forth the spirit of Antoni Gaudí

*Dennett reads these words, penned in 1868 by Robert Beverley MacKenzie, with pedantic panache, commenting that the capital letters were in the original.

**Much in this post was inspired by Roman Mars’ awesome 99% Invisible podcast about the Sagrada Familia, which features the quotation about Bocabella’s newspaper.

The featured image comes from Daniel Dennett’s From Bacteria to Bach and Back. I had the immense pleasure of interviewing Dan on the In Context podcast, where we discuss many of the ideas that appear in this post, just in a much more cogent form. 


On Mentorship

On Tuesday, together with four fellow eloquent and inspiring women, I addressed an audience of a hundred and fifty (I think?) odd young women about becoming a woman leader in technology.

I recently passed a crucial threshold in my life. I am no longer primarily a seeker of mentors and role models, but primarily a mentor and role model for others. I will always have mentors. Forever. Wherever. In whatever guise they appear. I have a long way to go in my career, much to work on in my character. Three female mentors who currently inspire me are Maura Grossman (a kickass computer science professor at Waterloo who, as a former partner at Wachtell, effectively pioneered the use of machine learning to find relevant documents in lawsuits); Janet Bannister (a kickass venture capital partner at Real Ventures who has led multiple businesses and retains a kind, open energy); and Venerable Pannavati (a kickass Buddhist monk and former Christian pastor who infuses Metta Meditation with the slave spirit of Billie Holiday, man it’s incredible, and who practices a stance of radical compassion and forgiveness, to the point of transforming all victimhood–including rape–into grounded self-reliance).

I’m in my early thirties. I have no children, no little ones whose minds and emotions are shaped by my example. I hope someday I will. I live every day with the possibility that I may not. The point is, I’m not practiced in the art of living where every action matters, of living with the awareness that I’m impacting and affecting others, others looking to me for guidance, inspiration, example. And here, suddenly, I find myself in a position where others look up to me for inspiration every day. How should I act? How can I lead by example? How might I inspire? How must I fuel ambition, passion, curiosity, kindness?

What a marvelous gift. What a grave responsibility.

I ask myself, should I project strength, should I perform the traits we want all women to believe they can and should have, or should I expose vulnerability, expose all the suffering and doubts and questions and pain and anxiety I’ve dealt with–and continue to deal with, just tempered–on this meandering path to this current version of me?

There is an art to exposing vulnerability to inspire good. Acting from a place of insecurity or anxiety leads to nothing but chaos. I’ve done it a zillion times; it’s hurt a zillion and one. Having a little temper tantrum, gossiping, breaking cool in a way that poisons a mood, enforcing territory, displaying sham superiority, all this stuff sucks. Being aware of weaknesses and asking for help to compensate for them; relaying anecdotes or examples of lessons learned; apologizing; regretting; accepting a mess of a mind for the moment and trying one’s damnedest not to act on it out of awareness of the damage it may cause, all this stuff is great.

I believe in the healing power of identification and of embracing our humanity. Being a strong woman leader in tech need not only be about projecting strength and awesomeness. It can be about sharing what lies under the covers, sharing what hurt, sharing the doubts. Finding strength in the place of radical acceptance so we can all say, “Nevertheless, she persisted.”

This is me saying something at Tuesday’s event.

Many of the audience members reached out over LinkedIn after the event. Here is the message that touched me deepest.

It was great to meet you and hear you speak last night. Thanks for taking the time to share your experience. It is comforting to know that other women, especially ones as accomplished as those on the panel, have doubts about their capabilities too.

As sharing doubts can inspire comfort and even inspiration, I figured I’d share some more. As I sat meditating this morning, I was suddenly overcome by the sense that I had a truth worth sharing. Not a propositional truth, but an emotional truth. Perhaps we call that wisdom. Here’s the story.

I had a very hard time in the last two years of my PhD. So hard, in fact, that I decided to leave Stanford for a bit and spend time at home with my family in Boston. It was a dark time. My mind was rattled, lost, unshackled, unfettered, unable. My mother had recommended for a while that I start volunteering, that I use the brute and basic reality of doing work for others as a starter, as yeast for my daily bread, to reset my neurons and work my way back to stability. Finally, I acquiesced. It was a way to pass the time. Like housekeeping.

I started working every day at the Women’s Lunch Place, a women’s-only soup kitchen located in the basement of an old church at the corner of Boylston and Arlington streets in Boston. Homeless and practically homeless women came there as a sanctuary from the streets, as a safe space after a night staving off unwanted sexual advances at a shelter, as a space for community or a space to be left alone in peace. Some were social: they painted and laughed together. Some were introverted, watching from the shadows. Some were sober. Some were drunk. I treated the Women’s Lunch Place like my job, coming in every morning to start at 7:00 am. The guests didn’t know I needed the kitchen as much as they did.

Except for one. Her name was Anne. When I asked her where she was from, she told me she was from the world.

Anne was one of the quiet, solitary guests at the kitchen. I’d never noticed her, as she hung out in a corner to the left of the kitchen, a friend of the shadows. One afternoon towards the end of my shift she approached me, touching my shoulder. I was startled.

The first thing Anne did was to thank me. She told me she’d been watching me for the better part of a month and was impressed by my diligence and leadership skills. She watched me chop onions, noticing how I gradually honed my knife skills, transferring the motions to a more graceful wrist and turning the knife upside down to scrape the chopped pieces into the huge soup pots without dulling the blade. She watched how new volunteers naturally flocked to me for directions on what to do next, watched how I fell into a place of leadership without knowing it, just as my mother had done before me. She watched how I cared, how deeply I cared for the guests and how I executed my work with integrity. I think she may have known I needed this more than they did.

For then, out of the blue, without knowing anything about my history and my experiences beyond the actions she’d observed, she told me a story.

“Once upon a time,” started Anne from the World, “there was a medieval knight. Like all medieval knights, he was sent on a quest to pass through the forbidden castle and save the beautiful princess captured by the dragon. He set out, intrepid and brave. He arrived at the castle and found the central door all legends had instructed him to pass through to reach the dragon’s den, where lay captured the beautiful princess. He reached the door and went to turn the knob. It was locked. He pulled and pushed harder, without any luck. He tried and struggled for hours, for days, bloodying his hands, bruising his legs, wearing himself down to nothing. Eventually he gave up in despair, sunk with the awareness of his failure. He turned back for home, readying his emotions for shame. But after starting out, something inspired him to turn around and scan the castle one more time. His removed vantage point afforded a broader perspective of the castle, not just the local view of the door. And then he noticed something. The castle had more than just the central door, there were two others at the flanks. Crestfallen and doubting, he nevertheless mustered the courage to try another door, just in case. He approached, turned the knob, and the door opened, effortlessly.”

This wonderful gift I’ve been given to serve as a role model for other women did not come easily. It was not a clear path, not the stuff of trodden legends. It was a path filled with struggles and doubts, filled with moments of grueling uncertainty where I knew not what the future might hold, for the path I was tracing for myself was not one commonly traced before.

I’ve been fortunate to have had many people open doors for me, turning knobs on my behalf. My deepest wisdom to date is that we can’t know the future. All we can do is try our best, always, and trust that opportunities we’ve never considered will unfold. When I struggled hopelessly at the end of graduate school, I never imagined the life that has since unfolded. I was so scared of failing that I couldn’t embrace what it might mean to succeed. Finally, with the patient support of many friends and lovers, I gained the ability to step back and find a door that I could open with less effort and more joy.

Since I earned my PhD in 2012, I’ve spoken to many audiences about my experiences transitioning from literature to technology. I frequently start my talks with this story, with this gift from Anne from the World. God only knows why Anne knew it was the right story to tell. But she did. And her meme evolves, here as elsewhere. She is one of the most important mentors I’ve ever had, my Athena waiting in the shadows, a giver of wisdom and grace. I will forever be grateful I took the time to listen and look.

I can’t figure out where the featured image comes from, but it’s the most beautiful image of Telemachus, Odysseus’ son, on the web. The style looks like a fusion between Fragonard and Blake. I love the color palette and the forlorn look on the character’s face. A seemingly humble and unimportant man, Mentor was actually the goddess Athena, wisdom donning a surprising habit, showing up where we least expect it, if only we are open to attend. 

Degrees of Knowledge

That familiar discomfort of wanting to write but not feeling ready yet.*

(The default voice pops up in my brain: “Then don’t write! Be kind to yourself! Keep reading until you understand things fully enough to write something cogent and coherent, something worth reading.”

The second voice: “But you committed to doing this! To not write** is to fail.***”

The third voice: “Well gosh, I do find it a bit puerile to incorporate meta-thoughts on the process of writing so frequently in my posts, but laziness triumphs, and voilà there they come. Welcome back. Let’s turn it to our advantage one more time.”)

This time the courage to just do it came from the realization that “I don’t understand this yet” is interesting in itself. We all navigate the world with different degrees of knowledge about different topics. To follow Wilfrid Sellars, most of the time we inhabit the manifest image, “the framework in terms of which man came to be aware of himself as man-in-the-world,” or, more broadly, the framework in terms of which we ordinarily observe and explain our world. We need the manifest image to get by, to engage with one another and not to live in a state of utter paralysis, questioning our every thought or experience as if we were being tricked by the evil genius Descartes introduces at the outset of his Meditations (the evil genius toppled by the clear and distinct force of the cogito, the I am, which, per Dan Dennett, actually had the reverse effect of fooling us into believing our consciousness is something different from what it actually is). Sellars contrasts the manifest image with the scientific image: “the scientific image presents itself as a rival image. From its point of view the manifest image on which it rests is an ‘inadequate’ but pragmatically useful likeness of a reality which first finds its adequate (in principle) likeness in the scientific image.” So we all live in this not-quite reality, our ability to cooperate and coexist predicated pragmatically upon our shared not-quite-accurate truths. It’s a damn good thing the mess works so well, or we’d never get anything done.

Sellars has a lot to say about the relationship between the manifest and scientific images, how and where the two merge and diverge. In the rest of this post, I’m going to catalogue my gradual coming to not-yet-fully understanding the relationship between mathematical machine learning models and the hardware they run on. It’s spurring my curiosity, but I certainly don’t understand it yet. I would welcome readers’ input on what to read and to whom to talk to change my manifest image into one that’s slightly more scientific.

So, one common thing we hear these days (in particular given Nvidia’s now formidable marketing presence) is that graphics processing units (GPUs) and tensor processing units (TPUs) are a key hardware advance driving the current ubiquity in artificial intelligence (AI). I learned about GPUs for the first time about two years ago and wanted to understand why they made it so much faster to train deep neural networks, the algorithms behind many popular AI applications. I settled on an understanding that the linear algebra–operations we perform on vectors, strings of numbers oriented in a direction in an n-dimensional space–powering these applications is better executed on hardware of a parallel, matrix-like structure. That is to say, properties of the hardware were more like properties of the math: they performed so much more quickly than a linear central processing unit (CPU) because they didn’t have to squeeze a parallel computation into the straitjacket of a linear, gated flow of electrons. Tensors, objects that describe the relationships between vectors, as in Google’s hardware, are that much more closely aligned with the mathematical operations behind deep learning algorithms.
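To make this concrete, here is a minimal NumPy sketch (the layer sizes and random weights are purely illustrative, not from any real system) of why a neural network layer maps so naturally onto parallel hardware: the many dot products inside a layer are independent of one another, so they can be written as one matrix multiply and executed simultaneously.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 128))   # a batch of 64 inputs, 128 features each
W = rng.standard_normal((128, 32))   # weights for a layer of 32 output neurons
b = np.zeros(32)                     # biases

# Sequential view: compute each of the 64 * 32 dot products one at a time,
# the way a single CPU core would step through them.
out_loop = np.empty((64, 32))
for i in range(64):
    for j in range(32):
        out_loop[i, j] = x[i] @ W[:, j] + b[j]

# Parallel view: the same arithmetic expressed as a single matrix multiply,
# which a GPU or TPU can spread across thousands of cores at once.
out_mat = x @ W + b

# Both formulations compute identical numbers; only the execution model differs.
assert np.allclose(out_loop, out_mat)
```

The point of the sketch is that nothing about the math changes between the two versions; the matrix form simply exposes the independence that parallel hardware exploits.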

There are two levels of knowledge there:

  • Basic sales pitch: “remember, GPU = deep learning hardware; they make AI faster, and therefore make AI easier to use so more possible!”
  • Just above the basic sales pitch: “the mathematics behind deep learning is better represented by GPU or TPU hardware; that’s why they make AI faster, and therefore easier to use so more possible!”

At this first stage of knowledge, my mind reached a plateau where I assumed that the tensor structure was somehow intrinsically and essentially linked to the math in deep learning. My brain’s neurons and synapses had coalesced on some local minimum or maximum where the two concepts were linked and reinforced by talks I gave (which by design condense understanding into some quotable meme, in particular in the age of Twitter…and this requirement to condense certainly reinforces and reshapes how something is understood).

In time, I started to explore the strange world of quantum computing, starting afresh off the local plateau to try, again, to understand new claims that entangled qubits enable even faster execution of the math behind deep learning than the soddenly deterministic bits of C, G, and TPUs. As Ivan Deutsch explains in this article, the promise behind quantum computing is as follows:

In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve, can, in theory, be solved by a universal quantum computer in hours.
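The exponential bookkeeping can be seen with nothing but linear algebra. Here is a hand-rolled sketch (no quantum library assumed; only the gate matrices are standard, the rest is my own framing) of preparing an entangled two-qubit Bell state, whose full description already needs 2² = 4 complex amplitudes:

```python
import numpy as np

# Two classical bits hold one of four values; the two-qubit state below
# is a vector of four complex amplitudes, all carried along at once.
ket00 = np.array([1, 0, 0, 0], dtype=complex)   # |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard puts the first qubit in superposition; CNOT entangles the pair.
state = CNOT @ np.kron(H, I2) @ ket00           # (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2                      # [0.5, 0, 0, 0.5]
# Measurement yields 00 or 11 with equal probability, never 01 or 10:
# the qubits' fates are entangled. An n-qubit register needs 2**n
# amplitudes, which is where the exponential claims come from.
```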

What’s salient here is that the inherent probabilism of quantum computers makes them even more fundamentally aligned with the true mathematics we’re representing with machine learning algorithms. TPUs, then, seem to exhibit a structure that best captures the mathematical operations of the algorithms, but exhibit the fatal flaw of being deterministic by essence: they’re still trafficking in the binary digits of 1s and 0s, even if they’re allocated in a different way. Quantum computing seems to bring back an analog computing paradigm, where we use aspects of physical phenomena to model the problem we’d like to solve. Quantum, of course, exhibits this special fragility where, should the balance of the system be disrupted, the probabilistic potential reverts down to the boring old determinism of 1s and 0s: a cat observed will be either dead or alive, per the harsh law of the excluded middle haunting our manifest image.

What, then, is the status of being of the math? I feel a risk of falling into Platonism, of assuming that a statement like “3 is prime” refers to some abstract entity, the number 3, that then gets realized in a lesser form as it is embodied on a CPU, GPU, or cup of coffee. It feels more cogent to me to endorse mathematical fictionalism, where mathematical statements like “3 is prime” tell a different type of truth than truths we tell about objects and people we can touch and love in our manifest world.****

My conclusion, then, is that radical creativity in machine learning–in any technology–may arise from our being able to abstract the formal mathematics from their substrate, to conceptually open up a liminal space where properties of equations have yet to take form. This is likely a lesson for our own identities, the freeing from necessity, from assumption, that enables us to come into the self we never thought we’d be.

I have a long way to go to understand this fully, and I’ll never understand it fully enough to contribute to the future of hardware R&D. But the world needs communicators, translators who eventually accept that close enough can be a place for empathy, and growth.

*This holds not only for writing, but for many types of doing, including creating a product. Agile methodologies help overcome the paralysis of uncertainty, the discomfort of not being ready yet. You commit to doing something, see how it works, see how people respond, see what you can do better next time. We’re always navigating various degrees of uncertainty, as Rich Sutton discussed on the In Context podcast. Sutton’s formalization of doing the best you can with the information you have available today towards some long-term goal, but learning at each step rather than waiting for the long-term result, is called temporal-difference learning.
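For readers who want the formalization in miniature: below is a sketch of the tabular TD(0) update on a toy five-state random walk. The setup and every parameter are illustrative choices of mine, not drawn from Sutton's work.

```python
import random

random.seed(0)
ALPHA, GAMMA = 0.1, 1.0      # step size and discount, chosen arbitrarily

# States 0..4; episodes start in the middle and end at 0 (reward 0)
# or 4 (reward 1). True values of states 1, 2, 3 are 1/4, 2/4, 3/4.
V = {s: 0.5 for s in range(5)}
V[0] = V[4] = 0.0            # terminal states carry no future reward

for _ in range(5000):
    s = 2
    while s not in (0, 4):
        s_next = s + random.choice((-1, 1))
        r = 1.0 if s_next == 4 else 0.0
        # Learn at every step from the bootstrapped estimate of the
        # next state, rather than waiting for the episode's outcome.
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next
```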

**Split infinitive intentional.

***Who’s keeping score?

****That’s not to say we can’t love numbers, as Euler’s Identity inspires enormous joy in me, or that we can’t love fictional characters, or that we can’t love misrepresentations of real people that we fabricate in our imaginations. I’ve fallen obsessively in love with 3 or 4 imaginary men this year, creations of my imagination loosely inspired by the real people I thought I loved.

The image comes from this site, which analyzes themes in films by Darren Aronofsky. Maximilian Cohen, the protagonist of Pi, sees mathematical patterns all over the place, which eventually drives him to put a drill into his head. Aronofsky has a penchant for angst. Others, like Richard Feynman, find delight in exploring mathematical regularities in the world around us. Soap bubbles, for example, offer incredible complexity, if we’re curious enough to look.

The arabesques of a soap bubble


The Secret Miracle

….And God made him die during the course of a hundred years and then He revived him and said: “How long have you been here?” “A day, or part of a day,” he replied.  – The Koran, II 261

The embryo of this post has gestated between my prefrontal cortex and limbic system for one year and eight months. It’s time.*

There seem to be two opposite axes from which we typically consider and evaluate character. Character as traits, Eigenschaften (see Musil), the markers of personality, virtue, and vice.

One extreme is to say that character is formed and reinforced through our daily actions and habits.** We are the actions we tend towards, the self not noun but verb, a precipitate we shape using the mysterious organ philosophers have historically called free will. Thoughts rise up and compete for attention,*** drawing and calling us to identify as a me, a me reinforced as our wrists rotate ever more naturally to wash morning coffee cups, a me shocked into being by an acute feeling of disgust, coiling and recoiling from some exogenous stimulus that drives home the need for a barrier between self and other, a me we can imagine looking back on from an imagined future-perfect perch to ask, like Ivan Ilyich, if we have indeed lived a life worth living. Character as daily habit. Character, as my grandfather used to say, as our ability to decide if today will be a good or a bad day when we first put our feet on the ground in the morning (Naturally, despite all the negative feelings and challenges, he always chose to make today a good day).

The other extreme is to say that true character is revealed in the foxhole. That traits aren’t revealed until they are tested. That, given our innate social nature, it’s relatively easy to seem one way when we float on, with, and in the waves of relative goodness embodied in a local culture (a family, a team, a company, a neighborhood, a community, perhaps a nation, hopefully a world, imagine a universe!), but that some truer nature will be shamelessly revealed when the going gets tough. This notion of character is the stuff of war movies. We like the hero who irrationally goes back to save one sheep at the expense of the flock when the napalm shit hits the fan. It seems we need these moments and myths to keep the tissue of social bonds intact. They support us with tears nudged and nourished by the sentimental cadences of John Williams soundtracks.

How my grandfather died convinced me that these two extremes are one.

On the evening of January 14, 2016, David William Hume (Bill, although it’s awesome to be part of a family with multiple David Humes!) was taken to a hospital near Pittsburgh. He’d suffered from heart issues for more than ten years and on that day the blood simply stopped pumping into his legs. He was rushed behind the doors of the emergency operating room, while my aunts, uncles, and grandmother waited in the silence and agony one comes to know in the limbo state upon hearing that a loved one has just had a heart attack, has just been shot, has just had a stroke, has just had something happen where time dilates to a standstill and, phenomenologically, the principles of physics linking time and space are halted in the pinnacle of love, of love towards another, of all else in the world put on hold until we learn whether the loved one will survive. (It may be that this experience of love’s directionality, of love at any distance, of our sense of self entangled in the existence and well-being of another, is the clearest experiential metaphor available to build our intuitions of quantum entanglement.****) My grandfather survived the operation.
And the first thing he did was to call my grandmother and exclaim, with the glee and energy of a young boy, that he was alive, that he was delighted to be alive, and that he couldn’t have lived without her beside him, through 60 years of children crying and making pierogis and washing the floor and making sure my father didn’t squander his life at the hobby shop in Beaver Meadows, Pennsylvania and learning that Katie, me, here, writing, the first grandchild was born, my eyebrows already thick and black as they’ll remain my whole life until they start to grey and singing Sinatra off key and loving the Red Sox and being a role model of what it means to live a good life, what it means to be a patriarch for our family, yes he called her and said he did it, that he was so scared but that he survived and it was just the same as getting out of bed every morning and making a choice to be happy and have a good day.

She smiled, relieved.

A few minutes later, he died.

It’s like a swan song. His character distilled to its essence. I think about this moment often. It’s so perfectly representative of the man I knew and loved.

And when I first heard about my grandfather’s death, I couldn’t help but think of Borges’s masterful (but what by Borges is not masterful?) short story The Secret Miracle. Instead of explaining why, I bid you, reader, to find out for yourself.

 * Mark my words: in 50 years time, we will cherish the novels of Jessie Ferguson, perhaps the most talented novelist of our time. Jessie was in my cohort in the comparative literature department at Stanford. The depth of her intelligence, sensitivity, and imagination eclipsed us all. I stand in awe of her talents as Jinny to Rhoda in Virginia Woolf’s The Waves. At her wedding, she asked me to read aloud Paul Celan’s Corona. I could barely do it without crying, given how immensely beautiful this poem is. Tucked away in the Berkeley Hills, her wedding remains the most beautiful ceremony I’ve ever attended.

**My ex-boyfriends, those privileged few who’ve observed (with a mixture of loving acceptance and tepid horror) my sacrosanct morning routine, certainly know how deeply this resonates with me.

***Thomas Metzinger shares some wonderful thoughts about consciousness and self-consciousness in his interview with Sam Harris on the Waking Up podcast. My favorite part of this episode is Metzinger’s very cogent conclusion that, should an AI ever suffer like we humans do (which Joanna Bryson compellingly argues will not and should not occur), the most rational action it would then take would be to self-annihilate. Pace Bostrom and Musk, I find the idea that a truly intelligent being would choose non-existence over existence to be quite compelling, if only because I have first-hand experience with the acute need to allay acute suffering like anxiety immediately, whereas boredom, loneliness, and even sadness are emotional states within which I more comfortably abide.

****Many thanks to Yanbo Xue at D-Wave for first suggesting that metaphor. Jean-Luc Marion explores the subjective phenomenon of love in Le Phénomène Erotique; I don’t recall his mentioning quantum physics, although it’s been years since I read the book, but, based on conversations I had with him years ago at the University of Chicago, I predict this would be a parallel he’d be intrigued to explore.

My last dance with my grandfather, the late David William Hume. Snuff, as we lovingly called him, was never more at home than on the dance floor, even though he couldn’t sing and couldn’t dance. He used to do this cute knees-back-and-forth dance. He loved jazz standards, and would send me mix CDs he burned when I lived in Leipzig, Germany. In his 80s, he embarrassed the hell out of my grandmother, his wife of 60 years, by joining the local Dancing with the Stars chapter and taking Zumba lessons. He lived. He lived fully and with great integrity. 

AI Standing On the Shoulders of Giants

My dear friend and colleague Steve Irvine and I will represent our company at the ElevateToronto Festival this Wednesday (come say hi!). The organizers of a panel I’m on asked us to prepare comments about what makes an “AI-First Organization.”

There are many bad answers to this question. It’s not helpful for business leaders to know that AI systems can just-about reliably execute perception tasks like recognizing a puppy or kitty in a picture. Executives think that’s cute, but can’t for the life of them see how that would impact their business. Seeing these parallels requires synthetic thinking and expertise in AI, the ability to see how the properties of a business’ data set are structurally similar to those of the pixels in an image, which would merit the application of a similar mathematical model to solve two problems that instantiate themselves quite differently in particular contexts. Most often, therefore, being exposed to fun breakthroughs leads to frustration. Research stays divorced from commercial application.

Another bad answer is to mindlessly mobilize hype to convince businesses they should all be AI First. That’s silly.

On the one hand, as Bradford Cross convincingly argues, having “AI deliver core value” is a pillar of a great vertical AI startup. Here, AI is not an afterthought added like a domain suffix to secure funding from trendy VCs, but rather a necessary and sufficient condition of solving an end user problem. Often, this core competency is enhanced by other statistical features. For example, while the core capability of satellite analysis tools like Orbital Insight or food recognition tools like Bitesnap is image recognition*, the real value to customers arises with additional statistical insights across an image set (Has the number of cars in this Walmart parking lot increased year over year? To feel great on my new keto diet, what should I eat for dinner if I’ve already had two sausages for breakfast?).

On the other hand, most enterprises have been in business for a long time and have developed the Clayton Christensen armature of instilled practices and processes that make it too hard to flip a switch to just become AI First. (As Gottfried Leibniz said centuries before Darwin, natura non facit saltus – nature does not make jumps). One false assumption about enterprise AI is that large companies have lots of data and therefore offer ripe environments for AI applications. Most have lots of data indeed, but have not historically collected, stored, or processed their data with an eye towards AI. That creates a very different data environment than those found at Google or Facebook, requiring tedious work to lay the foundations to get started. The most important thing enterprises need to keep in mind is never to let perfection be the enemy of the good, knowing that no company has perfect data. Succeeding with AI takes a guerrilla mindset, a willingness to make do with close enough and the knack of breaking down the ideal application into little proofs of concept that can set the ball rolling down the path towards a future goal.

The swampy reality of working with enterprise data.

What large enterprises do have is history. They’ve been in business for a while. They’ve gotten really good at doing something, it’s just not always something a large market still wants or needs. And while it’s popular for executives to say that they are “a technology company that just so happens to be a financial services/healthcare/auditing/insurance company,” I’m not sure this attitude delivers the best results for AI. Instead, I think it’s more useful for each enterprise to own up to its identity as a Something-Else-First company, but to add a shift in perspective to go from a Just-Plain-Old-Something-Else-First Company to a Something-Else-First-With-An-AI-Twist company.

The shift in perspective relates to how an organization embodies its expertise and harnesses traces of past work.** AI enables a company to take stock of the past judgments, work product, and actions of employees – a vast archive of years of expertise in being Something-Else-First – and concatenate these past actions to either automate or inform a present action.

To be pithy, AI makes it easier for us to stand on the shoulders of giants.

An anecdote helps illustrate what this change in perspective might look like in practice. A good friend did his law degree ten years ago at Columbia. One final exam exercise was to read up on a case and write how a hypothetical judge would opine. Having procrastinated until the last minute, my friend didn’t have time to read and digest all the materials. What he did have was a study guide comprising answers former Columbia law students had given to the same exam question for the past 20 years. And this gave him a brilliant idea. As students all have to have high LSAT scores and transcripts to get into Columbia Law, he thought, we can assume that all past students have more or less the same capability of answering the question. So wouldn’t he do a better job predicting a judge’s opinion by finding the average answer from hundreds of similarly-qualified students rather than just reporting his own opinion? So as opposed to reading the primary materials, he shifted and did a statistical analysis of secondary materials, an analysis of the judgments that others in his position had given for a given task. When he handed in his assignment, the professor remarked on the brilliance of the technique, but couldn’t reward him with a good grade because it missed the essence of what he was tested for. It was a different style of work, a different style of jurisprudence.
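The trick is easy to sketch in a few lines; the answer labels and their distribution below are entirely invented for illustration:

```python
from collections import Counter

# Twenty years of answers from similarly qualified students to the same
# exam question (fabricated data for illustration).
past_answers = ["affirm"] * 14 + ["reverse"] * 5 + ["remand"] * 1

def consensus(answers):
    """Predict by reporting the most common past judgment, rather than
    producing one fresh opinion of one's own."""
    return Counter(answers).most_common(1)[0][0]

prediction = consensus(past_answers)   # "affirm"
```

This is the seed of the Something-Else-First pattern: treat the archive of prior judgments as the signal, not the primary materials.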

Something-Else-First AI organizations work similarly. Instead of training each individual employee to do the same task, perhaps in a way similar to those of the past, perhaps with some new nuance, organizations capture past judgments and actions across a wide base of former employees and use these judgments – these secondary sources – to inform current actions. With enough data to train an algorithm, the actions might be completely automated. Most often there’s not enough to achieve satisfactory accuracy in the predictions, and organizations instead present guesses to current employees, who can provide feedback to improve performance in the future.

This ability to recycle past judgments and actions is very powerful. Outside enterprise applications, AI’s ability to fast-forward how we stand on the shoulders of giants is shifting our direction as a species. Feedback loops like filtering algorithms on social media sites have the potential to keep us mired in an infantile past, with consequences that have been dangerous for democracy. We have to pay attention to that, as news and the exchange of information, all the way back to de Tocqueville, has always been key to democracy. Expanding self-reflexive awareness broadly across different domains of knowledge will undoubtedly change how disciplines evolve going forward. I remain hopeful, but believe we have some work to do to prepare the citizenry and workforce of the future.

*Image recognition algorithms do a great job showing why it’s dangerous for an AI company to bank its differentiation and strategy on an algorithmic capability as opposed to a unique ability to solve a business problem or amass a proprietary data set. Just two years ago, image recognition was a breakthrough capability just making its way to primetime commercial use. This June, Google released image recognition code for free via its TensorFlow API. That’s a very fast turnaround from capability to commodity, a transition of great interest to my former colleagues at Fast Forward Labs.

**See here for ethical implications of this backward-looking temporality.

The featured image comes from a twelfth-century manuscript by neo-platonist philosopher Bernard de Chartres. It illustrates this quotation: 

“We are like dwarfs on the shoulders of giants, so that we can see more than they, and things at a greater distance, not by virtue of any sharpness of sight on our part, or any physical distinction, but because we are carried high and raised up by their giant size.”

It’s since circulated from Newton to Nietzsche, each indicating indebtedness to prior thinkers as inspiration for present insights and breakthroughs. 

Analogue Repeaters

Imagine my disappointment (gosh, never know if it’s two s’s or two p’s (gosh, never know if I should use an apostrophe to designate plural letters, i.e., not one unit of letter s, two units of letter s! (gosh, never know if I should use italics to emphasize a word or idea in a sentence, as my mind’s ear echoes the judging-but-because-too-polite-to-outrightly-judge-nudging voice of a dissertation advisor of yore, reprimanding me for the immaturity of style, as the semantics (the meaning (gosh, why in god’s name do people use such fancy words, just to exclude the rest of us?, diction synonymous with power (gosh, David Foster Wallace’s essay AUTHORITY AND AMERICAN USAGE* is so bold, so brilliant, so relevant today, as we skirt the elephant prancing around the delicate Sèvres teacups in Trump’s ramshackle cabinet of curiosities (gosh, the INCREDIBLE (intentional) elegance of Charles Sanders Peirce’s prose, master of metaphysical metaphor, expert in epistemological eloquence, who writes sentences like That much-admired “ornament of logic” — the doctrine of clearness and distinctness — may be pretty enough, but it is high time to relegate to our cabinet of curiosities the antique bijou, and to wear about us something better adapted to modern uses and Thought is a thread of melody running through the succession of our sensations), a polka dot, maladroit elephant screaming at the top of her lungs that SOCIAL CLASS IS TABOO!, as we can’t mention it, we hide it under euphemisms like “income inequality” and our bad faith creates warts manifest as mean and hateful ideologies like white supremacy and terrorism as we ignore the root cause, cloaking our fears in political correctness and identity politics, it being too damn hard to change the system, too damn hard to imagine a different sociopolitical constellation, too damn different from what we’ve inherited, a system showing signs of wear and tear like my battered GI tract (gosh, it would be fucking wonderful if Western Medicine 
could get its fucking act together and stop poisoning us (me) with its antibiotics, its linear “science”, its specialities, its discrete anatomies that create nothing but carcasses and bulbous gout (no, fortunately, I don’t have gout!), for Christ’s sake why is it so hard to figure out what the hell we should eat to be healthy? Gluten, no gluten, Dairy, no dairy. No sugar (that one at least is clear). Legumes, no legumes. Onions, no onions. Meat, no meat. For fuck’s sake each microbiome is different, stop subjecting us (me) to your blunt diagnostics!))) (I think that’s the right number of close parentheses; does this mean I’d be a shitty programmer?) should carry enough weight without needing the crutches of form (gosh, Thomas Bernhard would be disappointed, as would so many crappy deconstructionists following the crumbs littering the pitiful trail created by the third-rate-metaphysical essays of Derrida and de Man)))) (again, I may have fucked up the number of close parentheses) upon clicking the URL for ERA Welcome** (ERA an acronym for Escarpment Repeater Association, an amateur radio club in Ontario presumably eponymous for the Niagara Escarpment) only to find that service was temporarily unavailable! (Yes, those are my bookmarks. I have multiple email inboxes because I have multiple jobs, each enabling different vectors of curiosity and expressing different sides of my personality. 
This post excavates the one side of me, a side unfettered by any professional obligations, unindexed by form, without requirement to keep those emails short and sweet, as it doesn’t matter if no one will read this or no one will respond, doesn’t matter if pure confusion thwarts action, a refuge (or, for fans of puns, a hamlet (personally most fond of Asta Nielsen’s 1920 interpretation)) from the day-to-day toil of pragmatic communication, where it’s so damn hard to muster the courage to cleave the continuous and create the necessary and sufficient form to catalyze “next steps” (gosh, how deeply Thomas Mann’s*** statement I wanted to write you a short letter but I didn’t have the time! resonates!))****

*(or, “POLITICS AND THE ENGLISH LANGUAGE” IS REDUNDANT) (parenthesis and capital letters original)

**I no longer remember how I found the ERA. It showed up during my search for all things related to Treasure Island, the subject of this post. It seemed quite fitting for a post about recursion, even if the members of the ERA use the word repeater much differently than I.

***One of those quotations (I was also taught that quote is a verb and quotation is a noun, and that I should display my erudition and never placate to common use) attributed to 5000 different people, just like that which doesn’t kill me makes me stronger, which people attribute to St. John the Baptist, Nietzsche, or Rose Kennedy, depending on taste, experience, and predilection (admittedly redundant, but I liked the tricolon).

****Footnote Four, the most famous footnote in American Constitutional Law, comes from the 1938 ruling US v. Carolene Products Co. It reads: “There may be narrower scope for operation of the presumption of constitutionality when legislation appears on its face to be within a specific prohibition of the Constitution, such as those of the first ten amendments, which are deemed equally specific when held to be embraced within the Fourteenth….
It is unnecessary to consider now whether legislation which restricts those political processes which can ordinarily be expected to bring about repeal of undesirable legislation, is to be subjected to more exacting judicial scrutiny under the general prohibitions of the Fourteenth Amendment than are most other types of legislation….
Nor need we inquire whether similar considerations enter into the review of statutes directed at particular religious… or nations… or racial minorities…: whether prejudice against discrete and insular minorities may be a special condition, which tends seriously to curtail the operation of those political processes ordinarily to be relied upon to protect minorities, and which may call for a correspondingly more searching judicial inquiry… (italics added by the author of the Wikipedia article from which I copied and pasted the quotation). Ruth Bader Ginsburg has apparently drawn upon it during the Roberts’ Court to push the Court to do a better job protecting minorities, who, as recent politics and hate acts have shown, still need protecting.
Had to put in this photo because it is just that awesome. Playing Hamlet, the beautiful Asta Nielsen rushes in to challenge Claudius, the new king. Nielsen uses her gender superbly to channel the great prince’s doubts.

Treasure Island is a nightmare for the field of location intelligence.* That’s because it is:

  • an Island
  • in a lake (namely, Lake Mindemoya)
  • on an island (namely, Manitoulin Island)
  • in a lake (namely, Lake Huron)

While said to be the world’s largest island in a lake on an island in a lake, Treasure Island is actually quite small: 1.4 kilometers long x 400 meters wide, housing only a few cottages and no permanent residents.** It has a wonderful history. William McPherson, former deputy chief of police for Toronto, purchased the island for $60 in 1883, only to sell it to Joe and Jean Hodgson in 1928. On July 13, 2015 around 11:30 am the Manitoulin Detachment of the Ontario Provincial Police (OPP) was notified of a series of break and enters that had occurred sometime on July 12, 2015 to one of the few buildings on Treasure Island; hooligans entered the garage area and caused damage to two golf carts, estimated in the thousands of dollars.

Folklore etiologies for the genesis of Treasure Island are equivocal. One tradition plays on the perennial frustrations between husband and wife:

According to local tradition, Treasure Island was originally named Mindemoya, because of the distinctive shape of the island: rising at one end to a long flat hill, with a steep drop to a short low area at the other end. According to legend, a great chieftain or demi-god who once lived in Sault Ste. Marie, Ontario had a wife who would not give him any peace. In frustration he eventually kicked her and sent her flying, to land on her hands and knees in Lake Mindemoya, leaving her back and rump above the water, which we see today as the island. The word “Mindemoya” supposedly means “Old Lady’s Bottom”. See dubious Wikipedia

The Anishinaabe tradition, by contrast, features a story about a rogue Odysseus-like trickster hero whose moral defies any heuristic logic (and is thereby much more interesting):

Treasure Island, or as it is also known, Mindemoya Island, can be seen from almost all vantage points around the lake. The shape of the island is of a person lying prostrate with hands outstretched in front. One Anishinaabe tale tells of Nanabush, the Trickster with magic powers, who was carrying his grandmother over his shoulder, and suddenly stumbling, caused her to fly through the air to the middle of the lake, landing on her hands and knees, where she has remained ever since. This is Mindemoya (Mndimooyenh), the legendary old woman of the lake. See The Manitoulin Expositor

A pictogram of Nanabozho, an alternative Romanized version of Nanabush’s name, which itself varies across Ojibwe dialects. Nanabozho is part Shiva, a spirit involved in the world’s creation, part Odysseus, a wily trickster hero who outsmarts bad guys and throws grandmothers into the middle of the lake.

In today’s data-driven world, where quantitative interpretations of phenomena have replaced classical, Ovidian etiologies (i.e., where grandmothers or testy wives metamorphose into islands within lakes within islands within lakes), Nanabozho’s guiles have been recast as topological oddities, recursive structures that break the consistency and unity required to pinpoint a location.

Indeed, what kind of data structure could possibly capture the recursive identity of Treasure Island? At one level of granularity, say measured with satellites that capture diameters of 50 kilometers, our location intelligence analyst (LIA) would say “at 45.762°N 82.209°W there is an island!” (this being Manitoulin Island, the Island around Lake Mindemoya, around Treasure Island). And our heroic LIA would be right, but right for the wrong referent! And that could cause all sorts of problems later on. So if she wanted to be more accurate, she could use smaller satellites that capture locations more precisely, or even a little drone, which could capture distances at, say, the 5 kilometer mark, at which point she would say, “at 45.762°N 82.209°W there is a lake!”, which would be wrong, but also right, just not right enough. And so on and so on, peeling away the layers of the topological onion, unpacking the nested babushkas of the inherited Russian Doll, the lips still crimson, the flowers a pattern indexing styles of yore, styles lost in the clean blankness of modernism. 
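One answer in toy form: a recursive structure rather than a flat coordinate. The sketch below is my own invention, not any real GIS schema; only the place names come from the story.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feature:
    name: str
    kind: str                          # "island" or "lake"
    contains: Optional["Feature"] = None

# The nesting, outermost feature last.
treasure = Feature("Treasure Island", "island")
mindemoya = Feature("Lake Mindemoya", "lake", contains=treasure)
manitoulin = Feature("Manitoulin Island", "island", contains=mindemoya)
huron = Feature("Lake Huron", "lake", contains=manitoulin)

def resolve(feature: Feature, depth: int) -> str:
    """What a sensor 'sees' at a given granularity: the same coordinates
    name a different referent at each layer of the onion."""
    while depth > 0 and feature.contains is not None:
        feature = feature.contains
        depth -= 1
    return feature.name
```

resolve(huron, 1) names Manitoulin Island; resolve(huron, 3) names Treasure Island; each answer is right for the wrong referent until the depth matches the question being asked.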

But isn’t this very recursion the key to consciousness? If we could solve the elusive identity of Treasure Island, might we not have found our topology for the mind’s emergence from matter, Nanabozho laughing heartily from his perch in the past, the old lady’s bottom the key to sentience all along, if we were only wise enough to look?

Why, yes and no.

I don’t know the scientific explanation behind the genesis of Treasure Island, as the internet focuses on the myths fit for tourists, perpetuated year after year in the oral tradition of volunteer guides, kindly ladies with kindly graying hair, ever ready to greet the city folk on holiday from the cottage. But it certainly seems plausible that Treasure Island evolved through some aleatory, stochastic whim of nature, the product of perfectly uncomprehending and incomprehensible forces that, through sheer force of repetition, through mindless trial and error, created a perfect recursive structure, Time outwitting Mind with paleolithic patience, repeating and repeating until chance and probability land on something that exhibits the mastery of Andy Goldsworthy’s invisible hand, only to blow away in the autumn winds, our secrets transient, momentary missives that disappear upon observation, our Cumaean Sibyl whispering her truth to Schrödinger’s dead cat.

Imagine creating art destined to disappear. Imagine not caring if it didn’t last, but focusing on the momentary beauty, on the trick of the mind, where intentionality appears as natural as aleatory design. (If this sounds cool, check out Rivers and Tides.)

Here’s the punchline: many of the wondrous feats of contemporary artificial intelligence arise from similar forces of competence without comprehension (indebted to Dennett). Machines did not learn to beat Atari or Go because they designed a strategy to win, envisioning the game and moves and pieces like we human thinkers do. They did a bunch of stochastic random shit a million trillion times, creating what looks like intelligent design in what feels like an evolutionary microsecond, powered by the speed and efficiency of modern computation. That is, AI is like evolution on steroids, evolution put on super-duper-mega-fast-forward thanks to the simulation environments of computation. But if we break things down, each individual step in training an AI is a mindless guess, a mutation, a slip in transcription that, when favored by guiding forces we call “objective functions” – tools to minimize error that are a bit like survival of the fittest – can lead to something that just so happens to work.
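The mindless-guess-plus-objective-function loop described above can be sketched in a few lines. This is not how Atari or Go agents are actually trained (those use deep reinforcement learning); it is the simplest possible caricature of the same principle, with an arbitrary target and step size chosen for illustration:

```python
import random

# "Competence without comprehension" in miniature: random mutations,
# kept only when an objective function (squared error) says they help.

def objective(x: float, target: float = 3.14159) -> float:
    return (x - target) ** 2  # the error to minimize -- the "guiding force"

random.seed(0)
x = 0.0
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)     # a mindless guess, a mutation
    if objective(candidate) < objective(x):  # "survival of the fittest"
        x = candidate

print(x)  # lands very close to 3.14159, with no notion of pi anywhere
```

Nothing in the loop represents the target, plans a strategy, or understands what it is approximating; it just guesses and keeps what works, yet the result looks for all the world like intelligent design.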

And it goes without saying that Nanabozho has the last laugh. Throwing grandma into the lake defies logic. It’s an act of absurdity fit for the French, a nihilism fit for Germans donning leather pants as the Dude sips white Russians (will always hate the fucking Eagles), fit for Ionesco’s rhinoceroses prancing on stage. And any attempt we make to impose meaning through reduction will falter under the weight of determinism, straw men too flimsy for the complexity of our non-linear world.

* A warm thank you to Arthur Berrill for helping me understand the topological art behind location intelligence, which, when done well, involves intricate data structures that transform spatial relationships into rows and columns, relate space and time, or take into account phenomenological aspects of people’s appreciation of the space around them (e.g., an 80-year-old widow experiences the buildings around her condo quite differently than a 25-year-old single gal). Arthur introduced me to Manitoulin Island, which inspired this post.

**I once swam to an island of similar size in the Pacific Ocean near Fiji. There was a palm tree and a few huts. I didn’t think there were people, and then some man started to scream at me to shoo me away. I got scared, and swam back to our boat. For a moment, I enjoyed the imagined awesomeness of being all alone on a small deserted island.

The featured image is of Frank Swannell surveying Takla Lake in British Columbia on behalf of the Grand Trunk Pacific Railway in 1912. To learn more about Swannell’s surveying efforts, read this article by Stephen Hume, a columnist for the Vancouver Sun who has written an entire series of vignettes associated with Canada’s 150th anniversary. Hume isn’t a last name one sees that often, so Google’s surfacing his articles second only to Wikipedia — which is simply not loading well for me recently — for the search term “Frank Swannell” must carry metaphysical significance.

When Writing Fails

This post is for writers.

I take that back.

This post shares my experience as a writer to empathize with anyone working to create something from nothing, to break down the density of an intuition into a communicable sequence of words and thoughts, to digitize, which Daniel Dennett eloquently defines as “obliging continuous phenomena to sort themselves out into discontinuous, all-or-nothing phenomena” (I’m reading and very much enjoying From Bacteria to Bach and Back: The Evolution of Minds), to perform an act of judgment that eliminates other possibilities, foreclosing other forms to create its own form, Shiva and Vishnu forever linked in cycles of destruction, creation, and stability. That is to say, this post shares my experience as a writer as metonymy for our human experience as finite beings living finite lives.

The Nataraja, Shiva in his form as the cosmic ecstatic dancer, inspires trusting calm in me.

Earlier this morning, I started a post entitled Competence without Comprehension. I’ll publish it eventually, hopefully next week. It will feature a critique of explainable artificial intelligence (AI), efforts in the computer science and policy communities to develop AI systems that make sense for human users. I have tons to say here. I think it’s ok for systems to be competent without being comprehensible (my language is inspired by Dan Dennett, who thinks consciousness is an illusion) because I think there are a lot of cognitive competencies we exhibit without comprehension (ranging from ways of transforming our habits, or even becoming believers in some religious system by going through the motions, as I wrote about in my dissertation, to training students in operations like addition and subtraction before they learn the theoretical underpinnings of abstract algebra – which many people never even learn!). I think the word why is a complex word that we use in different ways: Aristotle thought there were four types of causes and, again following Dennett, we can distinguish between why as “how come” (what input data created this output result?) and why as “what for” (what action will be taken from this output result?). Aristotle’s causal theory was largely toppled during the scientific revolution and then again by Sartre in Existentialism Is a Humanism (where he shows we humans exist in a very different way from paper knives, which are an outdated technology!), but I think there’s value in resurrecting his categories to think about machine learning pipelines and explainable AI. I think there are different ethical implications for using AI in different settings, and I think there’s something crucial about social norms – how we expect humans to behave towards other humans – that is driving widespread interest in this topic and that, when analyzed, can help us understand what may (or may not!) be unique about the technology in its use in society.

In short, my blog post was a mess. I was trying to do too much at once; there were multiple lines of synthetic thought that needed to be teased out to make sense to anyone, including myself. I will understand my position better once I devote the time and patience to exploring it, formalizing it, unpacking ideas that currently sit inchoate like bile. What I started today contains at least five different blog posts’ worth of material, on topics that many other people are thinking about, so it could have some impact in the social circles that are meaningful for me and my identity. This is crucial: I care about getting this one right, because I can imagine the potential readers, or at least the hoped-for readers. That said, upon writing this, I can also step back and remember that the approval I think I’m seeking rarely matters in the end. I always feel immense gratitude when anyone — a perfect stranger — reads my work, and the most gratitude when someone feels inspired to write or grow herself.

So I allowed myself to pivot from seeking approval to instilling inspiration. To manifesting the courage to publish whatever – whatever came out from the primordial sludge of my being, the stream of consciousness that is the dribble of expression, ideas without form, but ideas nonetheless, the raw me sitting here trying my best on a Sunday afternoon in August, imagining the negative response of anyone who would bother to read this, but also knowing the charity I hold within my own heart for consistency, habit, effort, exposure, courage to display what’s weakest and most vulnerable to the public eye.

I see my experience this morning as metonymy for our experience as finite beings living finite lives because of the anxiety of choice. Each word written conditions the space of possibility of what can, reasonably, come next (Skip-Thought vectors rely on this assumption to function). The best writing is not about everything but is about something, just as many of the happiest and most successful people become that way by accepting the focus required to create and achieve, focus that shuts doors — or at least Japanese screens — on unrealized selves. I find the burden of identity terrific. My being resists the violence of definition and prefers to flit from self to self in the affordance of friendships, histories, and contexts. It causes anxiety, confusion, false starts, but also a richness I’m loath to part with. It’s the give and take between creation and destruction, Shiva dancing joyfully in the heavens, his smile peering ironic around the corners of our hearts like the aura of the eclipse.

The featured image represents Tim Jenison’s recreation of Vermeer’s The Music Lesson. Tim’s Vermeer is a fantastic documentary about Jenison’s quest to confirm his theory of Vermeer’s optical painting technique, which worked somewhat similarly to a camera (refracting light to create a paint-by-number-like format for the artist). It’s a wonderful film that makes us question our assumptions about artistic genius and creativity. I firmly believe creativity stems from constraint, and that Romantic ideas of genius miss the mark in shaping cultural understandings of creativity. This morning, I lacked the constraints required to write.