Applying Humanities Skills at Work

Most writing defending the value of the humanities in a world increasingly dominated by STEM focuses on what humanists (should) know. If Mark Zuckerberg had read Mill and De Tocqueville, suggests John Naughton, he would have foreseen political misuse of his social media platform. If machine learning scientists were familiar with human rights law, we wouldn’t be so mired in confusion on how to conceptualize the bias and privacy pitfalls that sully statistical models. If greedy CEOs had read Dickens, they would cultivate empathy, the skill we all need to “put ourselves in someone else’s shoes and see the world through the eyes of those who are different from us.”

I agree with Paul Musgrave that “arguments that the humanities will save STEM from itself are untenably thin.” Reading Aristotle’s Nicomachean Ethics won’t actually make anyone ethical. Reading literature may cultivate empathy, but not nearly enough to face complex workplace emotions and politics without struggle. And given how expensive a university education has become, it’s hard to make the case for art for art’s sake when only the most elite have the luxury not to build marketable skills.

But what if training in the humanities actually does build skills valuable for a STEM economy? What if we’ve been making the wrong arguments, working too hard to make the case for what humanists know and not hard enough to make the case for how humanists think and behave? Perhaps the questions should be: what habits of mind do students cultivate in the humanities classroom, and are those habits of mind valuable in the workplace?

I wrote about the value of the humanities in the STEM economy in early 2017. Since that time, I’ve advanced in my career from being an “individual contributor” (my first role was as a marketing content specialist for a legal software company) to leading teams responsible for getting things done. As my responsibilities have grown, I’ve come to appreciate how valuable the ways of speaking, writing, reading, and relating to others I learned during my humanities PhD are to the workplace. As Geoffrey Moore, a mentor, once put it, it’s the verbs that transfer, not the nouns. I don’t apply my knowledge of Isaac Newton and Gottfried Leibniz at work. I do apply the various critical reading and epistemological skills I honed as a humanities student, which have helped me quickly grow into a leader who can weave the communication fabric required to enable teams to collaborate to do meaningful work.

This post describes a few of these practical skills, emphasizing how they were cultivated through deep work in the humanities and are therefore not easily replaced by analogous training in business administration or communication.

David Foster Wallace’s 2005 Kenyon College commencement speech shows how powerful arguments in favor of a liberal arts education can be when they need not justify material payoff.

Socratic Dialogue and Facilitating Team Discussion

Humanities courses are taught differently from math and science courses. Precisely because there is no one right answer, most humanities courses use the Socratic Method. The teacher poses questions to guide student dialogue around a specific topic, helping students question their assumptions and leave with a deeper understanding of a text than they had when they came to the seminar. It’s hard to do this well. Students get off topic or cite arguments or examples that aren’t common knowledge. Some hog the conversation and others are shy. Some teachers aren’t truly open to dialogue, and pretend to foster discussion when they’re really just leading students to accept their interpretation.

At Stanford, where I did my graduate degree, History professor Keith Baker stands out as the king of Socratic dialogue. Keith always reread required reading before class and opened discussion with one weighty question, dense with ways the discussion might unfold. Teaching D’Alembert and Diderot’s preface to the French Encyclopédie, for example, he started by asking us to explain the difference between an encyclopedia and a dictionary. The fact this feels like common sense is what made the question so potent, and, for the French Enlightenment authors, the distinction between the two revealed much about the purpose of their massive work. The question forced us to step outside our contemporary assumptions and pay attention to what the same words meant in a different historical context. Whenever the discussion got off track, Keith gracefully posed a new question to bring things back on point without offending a student, fostering a space of intellectual safety while maintaining rigor.

The habits of mind and dialogue trained in a Socratic seminar are directly applicable to product management, which largely consists in facilitating structured discussions between team members who see a problem differently. In my work leading machine learning product teams, I frequently facilitate discussions with scientists, software developers, and business subject matter experts. Each thinks differently about what should be done and how long it will take. Researchers are driven by novelty and discovery, by developing an algorithm that pushes the boundary of what has been possible but which may not work given constraints from data and the randomness in statistical distributions. Engineers want to find the right solution within a problem’s constraints. They need clarity; they can tolerate change, but they need enough stability to build. The business team represents the customer, focusing on success metrics and what will please or hook a user. The product manager sits between all these inputs and desires, and must take into account all the different points of view, making sure everyone is heard and respected, while getting the team to align on the next action.

Socratic methods are useful in this situation. People don’t want to be told what to do; they want to be part of a collective decision process where, as a team, they have each put forth and understood compromises and trade-offs, and collectively decided to go forward with a particular approach. A great product manager starts a discussion the same way Keith Baker would, by providing a structure to guide thinking and posing the critical question to help a group make a decision. The product manager pays attention to what everyone says, watches body language and emotional cues to capture team dynamics. She nudges the dialogue back on track when teams digress without alienating anyone and builds a morale of collective and autonomous decision making so the team can progress forward. She applies the habits of mind and dialogue practiced over years in a humanities classroom.

Philology and Writing Emails for Many Audiences

My first year in graduate school, I took a course called Epic and Empire, which traced the development of the Western European epic literary tradition from Homer’s Iliad to Milton’s Paradise Lost. The first thing we analyzed when starting a new text was how the opening lines compared and contrasted with those we’d read before. Indeed, epics start with a trope called the invocation of the muse, where the poet, like a journalist writing a lede, announces the subject of the poem with a humble boast that asks the gods to imbue him with knowledge and inspiration.

So Homer in the Iliad: 

Sing, Goddess, sing the rage of Achilles, son of Peleus—
that murderous anger which condemned Achaeans

And Vergil signaling that the Aeneid is Rome’s answer to the Iliad, but that an author as talented as Vergil need not depend on support from the gods:

Arms and the man I sing, who first made way,
predestined exile, from the Trojan shore
to Italy, the blest Lavinian strand.

And Ariosto, an Italian author, coyly signaling that it’s time women had their chance at being the heroines of epics (the Italian starts with Le Donne):

Of loves and ladies, knights and arms, I sing,
Of courtesies, and many a daring feat;
In the same strain of Roland will I tell
Things unattempted yet in prose or rhyme…

And finally Milton, who, writing amid the upheaval of the English Civil War and its aftermath, signals his aim to critique contemporary politics and society by deftly merging the Judaic and Greek literary traditions:

OF Mans First Disobedience, and the Fruit
Of that Forbidden Tree, whose mortal tast
Brought Death into the World, and all our woe,
With loss of Eden, till one greater Man
Restore us, and regain the blissful Seat,
Sing Heav’nly Muse…


Studying literature this way, one gains not just knowledge of historical texts, but also an eye for the techniques authors use to respond to those who came before them. Students learn how to tease out an extra layer of meaning above and beyond what’s written. The first layer of meaning in the first lines of Paradise Lost is simply what the words refer to: this is a story about Adam and Eve eating the forbidden fruit. But the philologist sees much more: Milton decides to hold the direct invocation of the muse until line six so he can foreground a succinct encapsulation of the being in time of all Christians, waiting from the time of the fall until the coming of Christ; does that mean he wanted to signal that first and foremost this is a Christian story, with the Greek tradition, signaled by the reference to Homer, only arriving five lines later?

Reading between the lines like this is valuable for executive communications, in particular in the age of email, where something written for one audience is so easily forwarded to others without our intending or knowing it. Business communications don’t just articulate propositions about states of affairs; they present facts and findings to persuade someone to do something (to commit resources to a project, to spend money on something, to hire or fire someone, to alter the way they work, or simply to recognize that everything is on track and no worry is required at this time). Every communication requires sensitivity to the reader’s presumed state of mind and knowledge, reconstructing what we think they know or could know to ensure the framing of the new communication lands. Each communication should build on the last, not with a stylized invocation of the muse like Homer’s, but by presenting what’s said next as a step in a narrative in time. And at any one moment in time, different people in different roles interpret communications differently, based on their particular point of view, but more importantly their particular sensitivities, ambitions, and potential to be threatened or impacted by something you say. Executives have to think about this in advance, writing as if every message might be shared far beyond the intended recipient, each reader bringing his or her own point of view and stakes in the situation. Philology training in classes like Epic and Empire is good preparation for the multi-vocal aspects of written communication.

Making Sense of Another’s World: The Practice of Analytical Empathy

In 2013, I gave a talk about why my graduate work in intellectual history formed skills I would later need to become a great product marketer. As in this post, my argument in that talk focused not on what I knew about the past, but how I thought about the past: as an intellectual historian focused on René Descartes’ impact on 17th-century French culture, I sought to reconstruct what Descartes thought he was thinking, not whether Descartes’ arguments were right or wrong and should continue to be relevant today or relegated to the dustbin of history (as a philosopher would approach Descartes).

Doing this well entails getting outside the inheritance of 400 years of interpretation that shapes how we read something like Descartes’ famous Cogito, ergo sum, I think, therefore I am. Most philosophers grow accustomed to seeing Descartes show up as a strawman for all sorts of arguments, and consider his substance dualism (i.e., that mind and body are two entirely different kinds of substance) junk in the wake of improved understanding about the still mysterious emergence of mind from matter. They solidify an impression of what they think he’s saying as seen from the perspective of the work philosophy and cognitive science seek to do today. As an intellectual historian, I sought to suspend all temptation to read contemporary assumptions into Descartes, and to do what I could to reconstruct what he was thinking when he wrote the famous Cogito. I read about his Jesuit education and read Ignatius of Loyola’s Spiritual Exercises to better understand the genre of early-modern meditations, I read the texts by Aristotle he was responding to, I read seminal math texts from the Greeks through the 16th-century Italians to understand the state of mathematics at the time he wrote the Géométrie, and I read not only his Meditations, but also all the surrounding responses and correspondence to better understand the work he was trying to accomplish in his short and theatrical philosophical prose. And after doing all this work, I concluded that we’ve misunderstood Descartes, and that his famous Cogito isn’t a proposition about the primacy of mind over body, but rather a meditative mantra philosophers should use to train their minds to think “clear and distinct” thoughts, the axiomatic backbones for the method he wanted to propose to ground the new science. And Descartes was aware we are all prone to fall back into old habits, that we had to practice mantras every day to train the mind to take on new habits to complete a program of self-transformation. I didn’t care if he was right or wrong; I cared about persuading readers of my dissertation that this was the work Descartes thought the Cogito was doing.

A talk I gave in 2012 about why my training in intellectual history helped me become a good product marketer, a role that requires analytical empathy.

This skill, the skill of suspending one’s own assumptions about what others think, of not approaching another’s worldview to evaluate whether it’s right or wrong, but of working to make sense of how another lives and feels in the world, is critical for product management, product marketing, and sales. Product has migrated from being an analytical discipline focused on triaging what feature to build next to maximize market share to being an ethnographic discipline focused on what Christian Madsbjerg calls analytical empathy (Sensemaking), a “process of understanding supported by theory, frameworks, and an engagement with the humanities.” This kind of empathy isn’t just noticing that something might be off with another person and searching to feel what that other person likely feels. It’s the hard work of coming to see the world the way another sees it, patiently mapping the workflows and functional significance and emotions and daily habits of a person who encounters a product or service. When trying to decide what feature to build next in a software product, an excellent product manager doesn’t structure interviews with users by posing questions about the utility of different features. They focus on what the users do, seek to become them, just for one day, watch what they touch, where they move cursors on screens, watch how the muscles around their eyes tighten when they get frustrated with a button that’s not working or when they receive a stern email from a superior. They work to suspend their assumptions about what the user wants or needs and to be open to experiencing a whole different point of view. Similarly, an excellent salesperson comes to know what makes their buyers tick, what personal ambitions they have above and beyond their professional duties. They build business cases that reconstruct the buyers’ world and convincingly show how that world would differ after the introduction of the seller’s product or service. They don’t showcase bells and whistles; they explain the functional significance of bells and whistles within the world of the buyer. They make it make sense through analytical empathy.

Business school, at least as its curriculum exists today, isn’t the place to practice analytical empathy. And humanities courses that are diluted to hone supposedly transferable skills aren’t either. The humanities, practiced with rigor and fueled by the native curiosity of a student seeking deeply to understand an author they care about, are an avenue to build the hermeneutic skills that make product organizations thrive.

Narrative Detail in Feedback and Coaching

It’s table stakes that narrative helps get early funding and sales at a startup, in particular for founders who lack product specificity and have nothing to sell but an idea (and their charisma, network, and reputation). But the pitch deck genre is so formulaic that humanities training may be a hindrance, not an asset, when it comes to creating them. Indeed, anyone versed in narratology (the study of narrative structure) can easily see how rigid the pitch deck genre is, and anyone with creative impulses will struggle to play by the rules.

I first understood this while attending a TechStars FinTech startup showcase in 2016. A group of founders came on stage one by one and gave pithy pitches to showcase what they were working on. By pitch three, it was clear that every founder was coached to use the exact same narrative recipe: describe the problem the company will address; imagine a future state changed with the product; sketch the business model and scope out the total addressable market; marshal biographical details to prove why the business has the right team; differentiate from competitors; close with an ask to prospective investors. By pitch nine, I had trouble concentrating on the content beyond the form. It reminded me of Vladimir Propp’s Morphology of the Folktale.

20th-century Russian literary theorist Vladimir Propp analyzed the formal structure common to folktales. My sensitivity to literary form makes me cynical about the structure of startup pitch decks and leaves me yearning for something creative and new.

This doesn’t mean that storytelling isn’t part of technology startup lore. It is. But the expectations of how those stories are told are often so constrained that rigorous humanities training isn’t that helpful (and it’s downright soul-destroying to feel forced to adopt the senseless jargon of most tech marketing). In my experience, narrative has been more poignant and powerful in a different area of my organizational life: coaching fellow employees through difficult interpersonal situations or life decisions.

A first example is the act of giving feedback to a colleague. There are many different takes on the art of making feedback constructive and impactful, but the style that resonates most with me is to still all impulses towards abstraction (“Sally is such a control freak!”) and focus on the details of a particular action in a particular moment (“In yesterday’s standup, Sally interrupted Joe when he was reviewing his daily priorities to say he should do his task differently than planned.”). As I described in a previous post, what sticks with me most from my freshman year Art History 101 seminar was learning how to overcome the impulse towards interpretation and focus on observing plain details. When viewing a Rembrandt painting, everyone defaulted to symbolic interpretation. And it took work to train our vision and language to articulate that we saw white ruffled shirts and different levels of frizziness in curly hair and tatters on the edges of red tablecloths and light emanating from one side of the painting. It’s this level of detailed perception that is required to provide constructive feedback, feedback specific enough to enable someone to isolate a behavior, recognize it if it comes up again, and intentionally change it. When stripped of the overtones of judgment (“control freak!”) and focused on the impact a behavior has had on others (“after you said that, Joe was withdrawn throughout the rest of the meeting”), feedback is a gift. Now, no training in art history or literature prepares one to brave the emotional awkwardness of providing negative feedback to a colleague. I think that only comes through practice. But the mindset of getting underneath abstraction to focus on the details is certainly a habit of mind cultivated in humanities courses.

A second example is in relating something from one’s own experience to help another better understand their own situation. Not a day goes by where a colleague doesn’t feel frustrated at having to do a task they feel is beneath them, anxious about the disarray of a situation they’ve inherited, confused about whether to stay in a role or take a new job offer, resentful towards a colleague for something they’ve said or done, etc… When someone reaches out to me for advice, I still impulses to tell them what to do and instead scan my past for a meaningful analogue, either in my own experience or someone else’s, and tell a story. And here narrative helps. Not to craft a fiction that manipulates the other to reach the outcome I want him or her to reach, but to provide the right framing and the right amount of detail to make the story resonate, to provide the other with something they can turn back to as they reflect. Wisdom that transcends the moment but can only be transmitted in full through anecdote rather than aphorism.

Leadership Communication

I’ll close with an example of an executive speech act that humanities education does not help prepare for. A constructive and motivating company-wide speech is, at least in my experience, the hardest task executives face. Giving an excellent public speech to 2000 people is a cakewalk in contrast to giving a great speech about a company matter to 100 colleagues. The difficulty lies in the kind of work a company speech does.

The work of a public speech is to teach something to an audience. A speaker wants to be relevant, wants to know what their audience knows and doesn’t know, reads and doesn’t read, to adapt content to their expectations and degree of understanding. Wants to vary the pace and pitch in the same way an orchestra would vary dynamics and phrasing in a performance. Wants to control movement and syncopate images and short phrases on a slide with the spoken word to maximally capture the audience’s attention. There are a lot of similarities between giving a great university lecture and giving a great talk. This doesn’t mean training in the humanities prepares one for public speaking. On the contrary, most humanists read something they’ve written in advance, forcing the listener to follow long, convoluted sentences. Training in the humanities would be much more beneficial for future industry professionals if the format of conference talks were a little more, well, human.

The work of a company speech is to share a decision or a plan that impacts the daily lives and sense of identity of individuals who share the trait that, at this moment, they work in a particular organization. It’s not about teaching; the goal is not to get them to leave knowing something they didn’t know before. The goal is to help them clearly understand how what is said impacts what they do, how they work, how they relate to this collective they are currently part of, and, hopefully, to help them feel inspired by what they are asked to accomplish. Unnecessary tangents confuse rather than delight, as the audience expects every detail to be relevant and cogent. Humor helps, but it must be tactfully deployed. It helps to speak with awareness of different individuals’ predispositions and fears: “If I say this this way, Sally will be reminded of our recent conversation about her product, but if I say it that way, Joe will freak out because of his particular concern.” People join and leave companies all the time, and a leader has to still impulses towards originality to make sure newcomers hear what others have heard many times before without boring people who’ve been in the company for a while. The speech that resonates best is often extremely descriptive and leaves no room for assumption or ambiguity: one has to explicitly communicate assumptions or rationale that would feel cumbersome in most other settings, almost the way parents describe every next movement or intention to young children. At the essence of a successful company talk is awareness of what everyone else could be thinking, about the company, about themselves, and about the speaker, as one speaks. It’s a funhouse of epistemological networks, of judgment reflected in furrowed brows and groups silently leaving for coffee to complain about decisions just after they’ve been shared. It’s really hard, and I’m not sure how to train for it outside of learning through mistakes.

What This Means for Humanities Training

This post presented a few examples of how habits of mind I developed in the humanities classroom helped me in common tasks in industry. The purpose of the post is to reframe arguments defending the value of the humanities from the knowledge humanists gain to the ways of being humanists practice. Without presenting detailed statistics to make the case, I’ll close by mentioning Christian Madsbjerg’s claim in Sensemaking that humanities students may be best positioned for non-linear growth in business careers. They start off making significantly lower salaries than their STEM counterparts, but disproportionately go on to make much higher salaries and occupy more significant leadership positions in organizations in the long run. I believe this stems from the habits of mind and behavior cultivated in a rigorous humanities education, and that we shouldn’t dilute that education by making it more applicable to business topics and genres, but should focus on articulating just how valuable these skills can be.

The featured image is of the statue of David Hume in Edinburgh. His toe is shiny because of tourist lore that touching it provides good fortune and wisdom, a superstition Hume himself would have likely abhorred. I used this image because the nice low-angle shot made it feel like a foreboding allegory for the value of the humanities.

What makes a memory profound?

Not every event in your life has had profound significance for you. There are a few, however, that I would consider likely to have changed things for you, to have illuminated your path. Ordinarily, events that change our path are impersonal affairs, and yet are extremely personal. - Don Juan Matus, a (potentially fictional) Yaqui shaman from Mexico

The windowless classroom was dark. We were sitting around a rectangular table looking at a projection of Rembrandt’s Syndics of the Drapers’ Guild. Seated opposite the projector, I could see student faces punctuate the darkness, arching noses and blunt haircuts carving topography through the reddish glow.

“What do you see?”

Barbara Stafford’s voice had the crackly timbre of a Pablo Casals record and her burnt-orange hair was bi-toned like a Rothko painting. She wore downtown attire, suits far too elegant for campus with collars that added movement and texture to otherwise flat lines. We were in her Art History 101 seminar, an option for University of Chicago undergrads to satisfy a core arts & humanities requirement. Most of us were curious about art but wouldn’t major in art history; some wished they were elsewhere. Barbara knew this.

“A sort of darkness and suspicion,” offered one student.

“Smugness in the projection of power,” added another.

“But those are interpretations! What is it about the men that makes them look suspicious or smug? Start with concrete details. What do you see?”

No one spoke. For some reason this was really hard. It didn’t occur to anyone to say something as literal as “I see a group of men, most of whom have long, curly, light-brown hair, in black robes with wide-brimmed tall black hats sitting around a table draped with a red Persian rug in the daytime.” Too obvious, like posing a basic question about a math proof (where someone else inevitably poses the question and the professor inevitably remarks how great a question it is to our curious but proud dismay). We couldn’t see the painting because we were too busy searching for a way of seeing that would show others how smart we were.

“Katie, you’re our resident fashionista. What strikes you about their clothing?”

Adrenaline surged. I felt my face glow in the reddish hue of the projector, watched others’ faces turn to look at mine, felt a mixture of embarrassment at being tokenized as the student who cared most about clothes and appearance and pride that Barbara found something worth noticing, in particular given her own evident attention to style. Clothes weren’t just clothes for me: they were both art and protection. The prospect of wearing the same J Crew sweater or Seven jeans as another girl had been cruelly beaten out of me in seventh grade, when a queen mean girl snidely asked, in chemistry class, if I knew that she had worn the exact same salmon-colored Gap button-down crew neck cotton sweater, simply in the cream color, the day before. My mom had gotten me the sweater. All moms got their kids Gap sweaters in those days. The insinuation was preposterous but stung like a wasp: henceforth I felt a tinge of awkwardness upon noticing another woman wearing an article of clothing I owned. In those days I wore long ribbons in my ponytails to make my hair seem longer than it was, like extensions. I often wore scarves, having admired the elegance of Spanish women tucking silk scarves under propped collared shirts during my senior year of high school abroad in Burgos, Spain. Material hung everywhere around me. I liked how it moved in the wind and encircled me in the grace I feared I lacked.

“I guess the collars draw your attention. The three guys sitting down have longer collars. They look like bibs. The collar of the guy in the middle is tied tight, barely any space between the folds. A silver locket emerges from underneath. The collars of the two men to his left (and our right) billow more, they’re bunchy, as if those two weren’t so anal retentive when they get dressed in the morning. They also have kinder expressions, especially the guy directly to the left of the one in the center. And then it’s as if the collars of the men standing to the right had too much starch. They’re propped up and overly stiff, caricature stiff. You almost get the feeling Rembrandt added extra air to these puffed up collars to make a statement about the men having their portrait done. Like, someone who had taste and grace wouldn’t have a collar that was so visibly puffy and stiff. Also, the guy in the back doesn’t have a hat like the others.”

Barbara glowed. I’d given her something to work with, a constraint from which to create a world. I felt like I’d just finished a performance, felt the adrenaline subside as students turned their heads back to face the painting again, shifted their attention to the next question, the next comment, the next brush stroke in Syndics of the Drapers’ Guild.

After a few more turns goading students to describe the painting, Barbara stepped out of her role as Socrates and told us about the painting’s historical context. I don’t remember what she said or how she looked when she said it. I don’t remember every class with her. I do remember a homework assignment she gave inspired by André Breton’s objet trouvé, a surrealist technique designed to get outside our standard habits of perception, to let objects we wouldn’t normally see pop into our attention. I wrote about my roommate’s black high-heeled shoes and Barbara could tell I was reading Nietzsche’s Birth of Tragedy because I kept referencing Apollo and Dionysus, godheads for constructive reason and destructive passion, entropy pulling us ever to our demise.[1] I also remember a class where we studied Cindy Sherman photos, in particular her self portraits as Caravaggio’s Bacchus and her film still from Hitchcock’s Vertigo. We took a trip to the Chicago Art Institute and looked at a few paintings together. Barbara advised us never to use the handheld audio guides as they would pollute our vision. We had to learn how to trust ourselves and observe the world like scientists.

Cindy Sherman’s Untitled #224, styled after Caravaggio’s Bacchus
Cindy Sherman’s Untitled Film Still 21, styled after Hitchcock’s Vertigo

In the fourth paragraph of the bio on her personal website, Barbara says that “she likes to touch the earth without gloves.” She explains that this means she doesn’t just write about art and how we perceive images, but also “embodies her ideas in exhibitions.”

I interpret the sentence differently. To touch the earth without gloves is to see the details, to pull back the covers of intentionality and watch as if no one were watching. Arts and humanities departments are struggling to stay relevant in an age where we value computer science, mathematics, and engineering. But Barbara didn’t teach us about art. She taught us how to see, taught us how to make room for the phenomenon in front of us. Paintings like Rembrandt’s Syndics of the Drapers’ Guild were a convenient vehicle for training skills that can be transferred and used elsewhere, skills which, I’d argue, are not only relevant but essential to being strong leaders, exacting scientists, and respectful colleagues. No matter what field we work in, we must all work all the time to notice our cognitive biases, the ever-present mind ghosts that distort our vision. We must make room for observation. Encounter others as they are, hear them, remember their words, watch how their emotions speak through the slight curl of their lips and the upturned arch of their eyebrows. Great software needs more than just engineering and science: it needs designers who observe the world to identify features worth building.

I am indebted to Barbara for teaching me how to see. She is integral to the success I’ve had in my career in technology.

A picture that captures what I remember about Barbara

Of all the memories I could share about my college experience, why share this one? Why do I remember it so vividly? What makes this memory profound?

I recently read Carlos Castaneda’s The Active Side of Infinity and resonated with the book’s premise as “a collection of memorable events” Castaneda recounts as an exercise to become a warrior-traveler like the shamans who lived in Mexico in ancient times. Don Juan Matus, a (potentially fictional) Yaqui shaman who plays the character of Castaneda’s guru in most of his work, considers the album “an exercise in discipline and impartiality…an act of war.” On his first pass, Castaneda picks out memories he assumes should be important in shaping him as an individual, events like getting accepted to the anthropology program at UCLA or almost marrying a Kay Condor. Don Juan dismisses them as “a pile of nonsense,” noting they are focused on his own emotions rather than being “impersonal affairs” that are nonetheless “extremely personal.”

The first story Castaneda tells that don Juan deems fit for a warrior-traveler is about Madame Ludmilla, “a round, short woman with bleached-blond hair…wearing a red silk robe with feathery, flouncy sleeves and red slippers with furry balls on top” who performs a grotesque strip tease called “figures in front of a mirror.” The visuals remind me of a dream sequence from a Fellini movie, filled with the voluptuousness of wrinkled skin and sagging breasts and the brute force of the carnivalesque. Castaneda’s writing is noticeably better when he starts telling Madame Ludmilla’s story: there’s more detail, more life. We can picture others, smell the putrid stench of dried vomit behind the bar, relive the event with Castaneda and recognize a truth in what he’s lived, not because we’ve had the exact same experience, but because we’ve experienced something similar enough to meet him in the overtones. “What makes [this story] different and memorable,” explains don Juan, “is that it touches every one of us human beings, not just you.”

This is how I imagined Madame Ludmilla, as depicted in Fellini’s 8 1/2. As don Juan says, we are all “senseless figures in front of a mirror.”

Don Juan calls this war because it requires discipline to see the world this way. Day in and day out, structures around us bid us to focus our attention on ourselves, to view the world through the prism of self-improvement and self-criticism: What do I want from this encounter? What does he think of me? When I took that action, did she react with admiration or contempt? Is she thinner than I am? Look at her thighs in those pants; if I keep eating desserts the way I do, my thighs will start to look like that too. I’ve fully adopted the growth mindset and am currently working on empathy: in that last encounter, I would only give myself a 4/10 on my empathy scale. But don’t you see that I’m an ESFJ? You have to understand my actions through the prism of my self-revealed personality guide! It’s as if we live in a self-development petri dish, where experiences with others are instruments and experiments to make us better. Everything we live, everyone we meet, and everything we remember gets distorted through a particular analytical prism: we don’t see and love others, we see them through the comparative machine of the pre-frontal cortex, comparing, contrasting, categorizing, evaluating them through the prism of how they help or hinder our ability to become the future self we aspire to become.

Warrior-travelers like don Juan fight against this tendency. Collecting an album of memorable events is an exercise in learning how to live differently, to change how we interpret our memories and first-person experiences. As non-warriors, we view memories as scars, events that shape our personality and make us who we are. As warriors, we view ourselves as instruments and vessels to perceive truths worth sharing, where events just so happen to happen to us so we can feel them deeply enough and experience the minute details required to share them vividly with others. Warriors are instruments of the universe, vessels for the universe to come to know itself. We can’t distort what others feel because we want them to like us or act a certain way because of us: we have to see others for who they are, make space for negative and positive emotions. What matters isn’t that we improve or succeed, but that we increase the range of what’s perceivable. Only then can we transmit information with the force required to heal or inspire. Only then are we fearless.

Don Juan’s ways of seeing and being weren’t all new to me (although there were some crazy ideas of viewing people as floating energy balls). There are sprinklings of my quest to live outside the self in many posts on the blog. Rather, The Active Side of Infinity helped me clarify why I share first-person stories in the first place. I don’t write to tell the world about myself or share experiences in an effort to shape my identity. This isn’t catharsis. I write to be a vessel, a warrior-traveler. To share what I felt and saw and smelled and touched as I lived experiences that I didn’t know would be important at the time but that have managed to stick around, like Argos, always coming back, somehow catalyzing feelings of love and gratitude as intense today as they were when I first experienced them. To use my experiences to illustrate things we are all likely to experience in some way or another. To turn memories into stories worth sharing, with details concrete enough that you, reader, can feel them, can relate to them, and understand a truth that, ill-defined and informal though it may be, is searing in its beauty.

This post features two excerpts from my warrior-traveler album, both from my time as an undergraduate at the University of Chicago. I ask myself: if I were speaking to someone for the first time and they asked me to tell them about myself, starting in college, would I share these memories? Likely not. But it’s worthwhile to wonder if doing so might change the world for the good.


When I attended the University of Chicago, very few professors gave students long reading assignments for the first class. Some would share a syllabus, others would circulate a few questions to get us thinking. No one except Loren Kruger expected us to read half of Anna Karenina and be prepared to discuss Tolstoy’s use of literary form to illustrate 19th-century Russian class structures and ideology.

Loren was tall and big boned. A South African, she once commented on J.M. Coetzee’s startling ability to wield power through silence. She shared his quiet intensity, demanded such rigor and precision in her own work that she couldn’t but demand it from others. The tiredness of the old world laced her eyes, but her work was about resistance; she wrote about Brecht breaking boundaries in theater, art as an iron-hot rod that could shed society’s tired skin and make room for something new. She thought email destroyed intimacy because the virtual distance emboldened students to reach out far more frequently than when they had to brave a face-to-face encounter. About fifteen students attended the first class. By the third class, there were only three of us. With two teaching assistants (a French speaker and a German speaker), the student:teacher ratio became one:one.[2]

A picture that captures what I remember about Loren

Loren intimidated me, too. The culture at the University of Chicago favored critical thinking and debate, so I never worried about whether my comments would offend others or come off as bitchy (at Stanford, sadly, this was often the case). I did worry about whether my ideas made sense. Being the most talkative student in a class of three meant I was constantly exposed in Loren’s class, subjecting myself to feedback and criticism. She criticized openly and copiously, pushing us for precision, depth, insight. It was tough love.

The first thing Loren taught me was the importance of providing concrete examples to test how well I understood a theory. We were reading Karl Marx, either The German Ideology or the first volume of Das Kapital.[3] I confidently answered Loren’s questions about the text, reshuffling Marx’s words or restating what he’d written in my own words. She then asked me to provide a real-world example of one of his theories. I was blank. Had no clue how to answer. I’d grown accustomed to thinking at a level of abstraction, riding text like a surfer rides the top of a wave without grounding the thoughts in particular examples my mind could concretely imagine.[4] The gap humbled me, changed how I test whether I understand something. This happens to be a critical skill in my current work in technology, given how much marketing and business language is high-level and general: teams think they are thinking the same thing, only to realize that with a little more detail they are totally misaligned.[5]

We wrote midterm papers. I don’t remember what I wrote about but do remember opening the email with the grade and her comments, laptop propped on my knees and back resting against the powder-blue wall in my bedroom off the kitchen in the apartment on Woodlawn Avenue. B+. “You are capable of much more than this.” Up rang my old friend impostor syndrome: no, I’m not, what looks like eloquence in class is just a sham, she’s going to realize I’m not what she thinks I am, useless, stupid, I’ll never be able to translate what I can say into writing. I don’t know how. Tucked behind the fiddling furies whispered the faint voice of reason: You do remember that you wrote your paper in a few hours, right? That you were rushing around after the house was robbed for the second time and you had to move?

Before writing our final papers, we had to submit and receive feedback on a formal prospectus rather than just picking a topic. We’d read Frantz Fanon’s The Wretched of the Earth and I worked with Dustin (my personal TA) to craft a prospectus analyzing Gillo Pontecorvo’s Battle of Algiers in light of some of Fanon’s descriptions of the experience of colonialism.[7]

Once again, Loren critiqued it harshly. This time I panicked. I didn’t want to disappoint her again, didn’t want the paper to confirm to both of us that I was useless, incompetent, unable to distill my thinking into clear and cogent writing. The topic was new to me and out of my comfort zone: I wasn’t an expert in negritude or post-colonial critical theory. I wrote her a desperate email suggesting I write about Baudelaire and Adorno instead. I’d written many successful papers about French Romanticism and Symbolism and was on safer ground.

Ali La Pointe, the martyred revolutionary in The Battle of Algiers

Her response to my anxious plea was one of the more meaningful interactions I’ve ever had with a professor.

Katie, stop thinking about what you’re going to write and just write. You are spending far too much energy worrying about your topic and what you might or might not produce. I am more than confident you are capable of writing something marvelous about the subject you’ve chosen. You’ve demonstrated that to me over the quarter. My critiques of your prospectus were intended to help you refine your thinking, not push you to work on something else. Just work!

I smiled and breathed a sigh of relief. No professor had ever said that to me before. Loren had paid attention, noticed symptoms of anxiety but didn’t placate or coddle me. She remained tough because she believed I could improve. Braved the mania. This interaction has had a longer-lasting impact on me than anything I learned about the subject matter in her class. I can call it to mind today, in an entirely different context of activity, to galvanize myself to get started when I’m anxious about a project at work.

The happiest moments writing my final paper about the Battle of Algiers were the moments describing what I saw in the film. I love using words to replay sequences of stills, love interpreting how the placement of objects or people in a still creates an emotional effect. My knack for doing so traces back to what I learned in Art History 101. I think I got an A on the paper. I don’t remember or care. What stays with me is my gratitude to Loren for not letting me give up, and the clear evidence she cared enough about me to put in the work required to help me grow.


[1] This isn’t the first time things I learned in Barbara’s class have made it into my blog. The objet trouvé exercise inspired a former blog post.

[2] I ended up having my own private teaching assistant, a French PhD named Dustin. He told me any self-respecting comparative literature scholar could read and speak both French and German fluently, inspiring me to spend the following year in Germany.

[3] I picked up my copy of The Marx-Engels Reader (MER) to remember what text we read in Loren’s class. I first read other texts in the MER in Classics of Social and Political Thought, a social sciences survey course that I took to fulfill a core requirement (similar to Barbara’s Art History 101) my sophomore year. One thing that leads me to believe we read The German Ideology or volume one of Das Kapital in Loren’s class is the difference in my handwriting between years two and four of college. In year two, my handwriting still had a round playfulness to it. The letters are young and joyful, but look like they took a long time to write. I remember noticing that my math professors all seemed to adopt a more compact and efficient font when they wrote proofs on the chalkboard: the a’s were totally sans-serif, loopless. Letters were small. They occupied little space and did what they could not to draw attention to themselves so the thinker could focus on the logic and ideas they represented. I liked those selfless a’s and deliberately changed my handwriting to imitate my math professors. The outcome shows in my MER. I apparently used to like check marks to signal something important: they show up next to straight lines illuminating passages to come back to. A few great notes in the margins are: “Hegelian->Too preoccupied w/ spirit coming to itself at basis…remember we are in (in is circled) world of material” and “Inauthenticity->Displacement of authentic action b/c always work for later (university/alienation w/ me?)”

[4] There has to be a ton of analytic philosophy ink spilled on this question, but it’s interesting to think about what kinds of thinking are advanced by pure formalisms that would be hampered by ties to concrete, imaginable referents and what kinds of thinking degrade into senseless mumbo jumbo without ties to concrete, imaginable referents. Marketing language and politically correct platitudes definitely fall into category two. One contemporary symptom of not knowing what one’s talking about is the abuse of the demonstrative adjective that. Interestingly enough, such demonstrative abusers never talk about thises, they only talk about thats. This may be used emphatically and demonstratively in a Twitter or Facebook conversation: when someone wholeheartedly supports a comment, critique, or example of some point, they’ll write This as a stand-alone sentence with super-demonstrative reference power, power strong enough to encompass the entire statement made before it. That’s actually ok. It’s referring to one thing, the thing stated just above it. It’s dramatic but points to something the listener/reader can also point to. The problem with the abused that is that it starts to refer to a general class of things that are assumed, in the context of the conversation, to have some mutually understood functional value: “To successfully negotiate the meeting, you have to have that presentation.” “Have that conversation — it’s the only way to support your D&I efforts!” Here, the listener cannot imagine any particular that that these words denote. The speaker is pointing to a class of objects she assumes the listener is also familiar with and agrees exist. A conversation about what? A presentation that looks like what? There are so many different kinds and qualities of conversations or presentations that could fit the bill. I hear this used all the time and cringe a little inside every time. I’m curious to know if others have the same reaction I do, or if I should update my grammar police to accept what has become common usage. Leibniz, on the other hand, was a staunch early modern defender of cogitatio caeca (Latin for blind thought), which referred to our ability to calculate and manipulate formal symbols and create truthful statements without requiring the halting step of imagining the concrete objects these symbols refer to. This, he argued against conservatives like Thomas Hobbes, was crucial to advance mathematics. There are structural similarities in the current debates about explainability of machine learning algorithms, even though that which is imagined or understood may lie on a different epistemological, ontological, and logical plane.

[5] People tell me that one reason they like my talks about machine learning is that I use a lot of examples to help them understand abstract concepts. Many talks are structured like this one, where I walk an audience through the decisions they would have to make as a cross-functional team collaborating on a machine learning application. The example comes from a project former colleagues worked on. I realized over the last couple of years that no matter how much I like public speaking, I am horrified by the prospect of specializing in speaking or thought leadership and not being actively engaged in the nitty-gritty, day-to-day work of building systems and observing first-person how people interact with them. I believe the existential horror stems from my deep-seated beliefs about language and communication, in my deep-seated discomfort with words that don’t refer to anything. Diving into this would be worthwhile: there’s a big difference between the fictional imagination, the ability to bring to life the concrete particularity of something or someone that doesn’t exist, and the vagueness of generalities lacking reference. The second does harm and breeds stereotypes. The first is not only potent in the realm of fiction, but, as my fiancé Mihnea is helping me understand, may well be one of the master skills of the entrepreneur and executive. Getting people aligned and galvanized around a vision can only occur if that vision is concrete, compelling, and believable. An imaginable state of the world we can all inhabit, even if it doesn’t exist yet. A tractable as if that has the power to influence what we do and how we behave today so as to encourage its creation and possibility.[6]

[6] I believe this is the first time I’ve had a footnote referring to another footnote (I did play around with writing an incorrigibly long photo caption in Analogue Repeaters). Funny this ties to the footnote just above (hello there, dear footnote!) and even funnier that footnote 4 is about demonstrative reference, including the this discursive reference. But it’s seriously another thought so I felt it merited its own footnote as opposed to being the second half of footnote 5. When I sat down to write this post, I originally planned to write about the curious and incredible potency of imagined future states as tools to direct action in the present. I’ve been thinking about this conceptual structure for a long time, having written about it in the context of seventeenth-century French philosophy, math, and literature in my dissertation. The structure has been around since the Greeks (Aristotle references it in Book III of the Nicomachean Ethics) and is used in startup culture today. I started writing a post on the topic in August 2018. Here’s the text I found in the incomplete draft when I reopened it a few days ago:

A goal is a thinking tool.

A good goal motivates through structured rewards. It keeps people focused on an outcome, helps them prioritize actions and say no to things, and stretches them to work harder than they would otherwise. Wise people say that a good goal should be about 80% achievable. Wise leaders make time to reward and recognize inputs and outputs.

A great goal reframes what’s possible. It is a moonshot and requires the suspension of disbelief, the willingness to quiet all the we can’ts and believe something surreal, to sacrifice realism and make room for excellence. It assumes a future outcome that is so outlandish, so bold, that when you work backwards through the series of steps required to achieve it, you start to do great things you wouldn’t have done otherwise. Fools say that it doesn’t matter if you never come close to realizing a great goal, because the very act of supposing it could be possible and reorienting your compass has already resulted in concrete progress towards a slightly more reasonable but still way above average outcome.

Good goals create outcomes. Great goals create legacies.

This text alienates me. It reminds me of an inspirational business book: the syncopation and pace seem geared to stir pathos and excitement. How curious that the self evolves so quickly, that the I looking back on the same I’s creations of a few months ago feels like she is observing a stranger, someone speaking a different language and inhabiting a different world. But of course that’s the case. Of course being in a different environment shapes how one thinks and what one sees. And the lesson here is not one of fear around instability of character: it’s one that underlines the crucial importance of context, the crucial importance of taking care to select our surroundings so we fill our brains with thoughts and words that shape a world we find beautiful, a world we can call home. The other point of this footnote is a comment on the creative process. Readers may have noted the quotation from Pascal that accompanies all my posts: “The last thing one settles in writing a book is what one should put in first.” The joy of writing, for me, as for Mihnea and Kevin Kelly and many others, lies in unpacking an intuition, sitting down in front of a silent wall and a silent world to try to better understand something. I’m happiest when, writing fast, bad, and wrong to give my thoughts space to unfurl, I discover something I wouldn’t have discovered had I not written. Writing creates these thoughts. It’s possible they lie dormant with potential inside the dense snarl of an intuition and possible they wouldn’t have existed otherwise. Topic for another post. With this post, I originally intended to use the anecdote about Stafford’s class to show the importance of using concrete details, to illustrate how training in art history may actually be great training for the tasks of a leader and CEO. But as my mind circled around the structure that would make this kind of intro make sense, I was called to write about Castaneda, pulled there by my emotions and how meaningful these memories of Barbara and Loren felt. I changed the topic. Followed the path my emotions carved for me. The process was painful and anxiety-inducing. But it also felt like the kind of struggle I wanted to undertake and live through in the service of writing something worth reading, the purpose of my blog.

[7] About six months ago, I learned that an Algerian taxi driver in Montréal was the nephew of Ali La Pointe, the revolutionary martyr hero in The Battle of Algiers. It’s possible he was lying, but he was delighted by the fact that I’d seen and loved the film and told me about the heroic deeds of another uncle who didn’t have the same iconic stardom as Ali. Later that evening I attended a dinner hosted by Element AI and couldn’t help but tell Yoshua Bengio about the incredible conversation I had in the taxi cab. He looked at me with confusion and discomfort, put somewhat out of place and mind by my not accommodating the customary rules of conversation with acquaintances.

The featured image is the Syndics of the Drapers’ Guild, which Rembrandt painted in 1662. The assembled drapers assess the quality of different weaves and cloths, presumably, here, assessing the quality of the red rug splayed over the table. In Ways of Seeing, John Berger writes about how oil paintings signified social status in the early modern period. Having your portrait done showed you’d made it, the way driving a Porsche around town would do so today. When I mentioned that the collars seemed a little out of place, Barbara Stafford found the detail relevant precisely because of the plausibility that Rembrandt was including hints of disdain and critique in the commissioned portraits, mocking both his subjects and his dependence on them to get by. 

A Turing Test for Empathy?

On Wednesday evening, I was the female participant on a panel about artificial intelligence.[1] The event was hosted at the National Club on Bay Street in Toronto. At Friday’s lunch, a colleague who attended in support mentioned that the venue smelled like New York, carried the grime of time in its walls after so many rain storms. Indeed, upon entering, I rewalked into Manhattan’s Downtown Association, returned to April 2017, before the move to Toronto, peered down from the attic of my consciousness to see myself gently placing a dripping umbrella in the back of the bulbous cloak room, where no one would find it, feeling the mahogany enclose me in peaty darkness, inhaling a mild must that could only tolerate a cabernet, waiting with my acrylic green silk scarf from Hong Kong draped nonchalant around my neck, hanging just above the bottom seam of my silk tunic, dangling more than just above the top seam of my black leather boots, when a man walked up, the manager, and beaming with welcome he said “you must be the salsa instructor! Come, the class is on the third floor!” I laughed out loud. Alfred arrived. Alfred who was made for another epoch, who is Smith in our Hume-Smith friendship, fit for the ages, Alfred who had become a member of the association and, a gentleman of yore, would take breakfast there before work, Aschenbach in Venice, tidily wiping a moist remnant of scrambled eggs from the right corner of his lip, a gesture chiseled by Joseon porcelain and Ithaca’s firefly summer, where he took his time to ruminate about his future, having left, again, his past.

An early 18th-century Joseon jar. Korean ceramics capture Alfred’s gentle elegance. Yet, he has a complex relationship with his Koreanness.

Upstairs we did the microphone dance, fumbling to hook the clip on my black jeans (one of the rare occasions where I was wearing pants). One of my father’s former colleagues gave the keynote. He walked through the long history of artificial intelligence, starting with efforts to encode formal logic and migrating through the sine curve undulations of research moving from top-down intelligent design (e.g., expert systems) to bottom-up algorithms (e.g., deep convolutional neural networks), abstraction moving ever closer to data until it fuses, meat on a bone, into inference.[2] He proposed that intellectual property had shifted from owning the code to building the information asset. He hinted at a thesis I am working to articulate in my forthcoming book about how contemporary, machine learning-based AI refracts humanity through the convex (or concave or distorted or whatever shape it ends up being) mirror of the space of observation we create with our mechanisms for data capture (which are becoming increasingly capacious with video and Alexa in every home, as opposed to being truncated to blips in clickstream behavior or point of sale transactions), our measurement protocol, and the arabesque inversions of our algorithms. The key thing is that we no longer start with an Aristotelian formal cause when we design computational systems, which means we no longer imagine the abstract, Platonic scaffold of some act of intelligence as a pre-condition of modeling it. Instead, as Andrej Karpathy does a good job articulating, we stipulate the conditions for a system to learn bottom-up from the data (this does not mean we don’t design; it’s just that the questions we ask as we make the systems require a different kind of abstraction, one affiliated with induction, as Peter Sweeney eloquently illustrates in this post). This has pretty massive consequences for how we think about the relationship between man and machine. We need to stop pitting machine against man. And we need to stop spouting obsequious platitudes that the “real power comes from the collaboration of man and machine.” There’s something of a sham humanism in those phrases that I want to get to the bottom of. The output of a machine learning algorithm always already is, and becomes even more so as the flesh of abstraction moves closer to the bone of data (or vice versa?), the digested and ruminated and stomach acid-soaked replication of human activity and behavior. It’s about how we regurgitate. That’s why it does indeed make sense to think about bias in machine learning as the laundering of human prejudice.
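To make that shift concrete, here is a minimal sketch in Python. It is my own illustration, not anything from the keynote: a hypothetical spam check written top-down as an explicit rule, next to the same decision induced bottom-up from a handful of invented examples.

```python
# A hypothetical example of the shift described above; all numbers and thresholds are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Top-down, expert-system style: the abstraction (the "formal cause") is stated in advance.
def is_spam_rule_based(num_links: int, num_exclamations: int) -> bool:
    return num_links > 5 or num_exclamations > 10

# Bottom-up, machine-learning style: we stipulate the conditions for learning
# and let the rule emerge from observed examples.
X = np.array([[1, 0], [7, 12], [2, 1], [9, 15]])  # features: [num_links, num_exclamations]
y = np.array([0, 1, 0, 1])                        # labels: 0 = not spam, 1 = spam

model = LogisticRegression().fit(X, y)            # the abstraction now hugs the data
print(is_spam_rule_based(8, 11))                  # True, by fiat of the designer
print(model.predict([[8, 11]]))                   # [1], by induction from examples
```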

A woman in the audience posed the final question to the panelists: you’ve spoken about the narrow capabilities of machine learning systems, but will it be possible for artificial intelligence to learn empathy?

A fellow panelist took the Turing Test approach: why yes, he said, there has been remarkable progress in mimicking even this sacred hallmark of the limbic system. It doesn’t matter if the machine doesn’t actually feel anything. What matters is that the machine manifests the signals of having felt something, and that may well be all that matters to foster emotional intelligence. He didn’t mention Soul Machines, a New Zealand-based startup making “incredibly life-like, emotionally responsive artificial humans with personality and character,” but that’s who I’d cite as the most sophisticated example of what UX/UI design can look like when you fuse the skill sets of cinematic avatar artists, machine learning scientists, and neuroscientists (and even the voice of Cate Blanchett).

I disagreed. I am no affect expert (just a curious generalist fumbling my way through life), but I believe empathy is remarkably complex for many reasons.

I looked at her directly, deeply. Not just at her: I looked into her. And what I mean by looking into her is that I opened myself up a little, wasn’t just a person protected by the distance of the stage (or, more precisely, the 4 brown leather bar stools with backs so low they only came up to vertebra 4 or 5, and all of us leaned in and out trying to find and keep a dignified posture, hands crossed into serenity, sometimes leaning forward). Yes, when I opened myself to engage with her I leaned forward almost to the point of resting my elbows on my thighs, no longer leaning back and, every few moments, returning my attention to the outer crevices of my eyes to ensure they were soft as my fellow panelists spoke. And I said, think about this. I’m up here on stage perceiving what I’m perceiving and thinking what I’m thinking and feeling what I’m feeling, and somehow, miraculously, I can project what I think you’re perceiving, what I think you’re thinking, what I think you’re feeling, and then, on top of that, I can perhaps, maybe, possibly start to feel what you feel as a result of the act of thinking that I think what you perceive, think, and feel. But even this model is false. It’s too isolated. For we’ve connected a little, I’m really looking at you, watching your eyes gain light as I speak, watching your head nod and your hands flit a little with excitement, and as I do this we’re coming together a little, entangling ourselves to become, at least for this moment, a new conjoint person that has opened a space for us to jointly perceive, think, and feel. We’re communicating. And perhaps it’s there, in that entangled space, where the fusion of true empathy takes place, where it’s sound enough to impact each of us, real enough to enable us to notice a change in what we feel inside, a change occasioned by connection and shared experience.

An emotional Turing test would be a person’s projection that another being is feeling with her. It wouldn’t be entangled. It would be isolated. That can’t be empathy. It’s not worthy of the word.

But, how could we know that two people actually feel the same feeling? If we’re going to be serious, let’s be serious. Let’s impose a constraint and say that empathy isn’t just about feeling some feeling when you infer that another person is feeling something, most often feeling something that would cause pain. It’s literally feeling the same thing. Again, I’m just a curious generalist, but I know that psychologists have tools to observe areas of the brain that light up when some emotional experience takes place; so we could see if, during an act of empathy, the same spot lights up.[3] Phenomenologically, however, that is, as the perceived, subjective experience of the feeling, it has to be basically impossible for us to ever feel the exact same feeling. Go back to the beginning of this blog post. When I walked into the National Club, my internal experience was that of walking into the Downtown Association more than 1.5 years earlier. I would hazard that no one else felt that, no one else’s emotional landscape for the rest of the evening was then subtly impacted by the emotions that arose during this reliving. So, no matter how close we come to feeling with someone when our emotional world is usurped, suddenly, by the experience of another, it’s still grafted upon and filtered through the lens of time, of the various prior experiences we’ve had that trigger that response and come to shape it. As I write, I am transported back to two occasions in my early twenties when I held my lovers in my arms, comforting and soothing them after each had learned about a friend’s suicide. We shared emotion. Deeply. But it was not empathy. My experience of their friends’ suicide was far removed. It was compassion, sympathy, but close enough to the bone to provide them space to cry.

So then we ask: if it’s likely impossible to feel the exact same feeling, should we relax the constraint and permit that empathy need not be deterministic and exact, but can be recognized within a broader range? We can make it a probabilistic shared experience, an overlap within a different bound. If we relax that constraint, then can we permit a Turing test?

I still don’t think so. Unless we’re ok with sociopaths.

But how about this one. Once I was running down Junipero Serra Boulevard in Palo Alto. It was a dewy morning, dewy as so many mornings are in Silicon Valley. The rhythms of the summer are so constant: one wakes up to fog, daily, fog coming thick over the mountains from the Pacific. Eventually the fog burns and if you go on a bike ride down Page Mill road past the Sand Hill exit to 280 you can watch how the world comes to life in the sun, reveals itself like Michelangelo reveals form from marble. There was a pocket of colder, denser, sweeter smelling air on the 6.5-mile run I’d take from the apartment in Menlo Park through campus and back up Junipero Serra. I would anticipate it as I ran and was always delighted by the smell that hit me; it was the smell of hose water when I was a child. And then I saw a deer lying on the side of the road. She was huge. Her left foot shook in pain. Her eyes were pleading in fear. She begged as she looked at me, begged for mercy, begged for feeling. I was overcome by empathy. I stopped and stood there, still, feeling with her for a moment before I slowly walked closer. Her foot twitched more rapidly with the wince of fear. But as I put my hand on her huge, hot, sweating belly, she settled. Her eyes relaxed. She was calmed and could allow her pain without the additional fear of further hurt. I believe we shared the same feeling at that moment. Perhaps I choose to believe that, if only because it is beautiful.

The moment of connection only lasted a few minutes, although it was so deep it felt like hours. It was ruptured by men in a truck. They honked and told me I was an idiot and would get hurt. The deer was startled enough to jump up and limp into the woods to protect herself. I told the men their assumptions were wrong and ran home.

You might say that this is textbook Turing test empathy. If I can project that I felt the exact same feeling as an animal, if I can be that deluded, then what’s stopping us from saying that the projection and perception of shared feeling is precisely what this is all about, and therefore it’s fair game to experience the same with a machine?

The sensation of love I felt with that deer left a lasting impression on me. We were together. I helped her. And she helped me by allowing me to help her. Would we keep the same traces of connection from machines? Should empathy, then, be defined by its durability? By the fact that, if we truly do connect, it changes us enough to stay put and be relived?

There are, of course, moments when empathy breaks down.

Consider breakdowns in communication at work or in intimate relationships. Just as my memory of the Downtown Association shaped, however slightly, my experience at Wednesday’s conference, so too do the accumulated interactions we have with our colleagues and partners reinforce models of what we think others think about us (and vice versa). These mental models then supervene upon the act of imagination to perceive, think, and feel like someone else. It breaks. Or, at the very least, distorts the hyperparameters of what we can perceive. Should anything be genuinely shared in such a tangled web, it would be the shared awareness of the impossibility of identification. I’ve seen this happen with teams and seen it happen with partners. Ruts and little walls that, once built, are very difficult to erode.

Another breakdown that comes to mind is the effort required to empathize deeply with people far away from where we live and what we experience. When I was in high school, Martha Nussbaum, a philosopher at the University of Chicago who has written extensively about affect, came and gave a talk about the moral failings of our imagination. This was in 2002. I recall her mentioning that we obsess far more deeply, we feel far more acutely, about a paper cut on our index finger or a blister on our right heel, than we do when we try to experience, right here and now, the pain of Rwandans during the genocide, of Syrian refugees packed damp on boats, of the countless people in North America razed by fentanyl. On the talk circuit for his latest book, Yuval Harari comments that, in constructing the nation, we’ve done the conceptual work required to construct and experience a common identity (and perhaps some sort of communal empathy) with people we’ll never meet, who are far outside the local perception of the tribe. And that this step from observable, local community to imagined, national community was a far steeper step function than the next rung in the ladder from national to global identity (8,000,000 and 7,000,000,000 are more or less the same for the measly human imagination, whereas 8,000,000 feels a lot different than 20). Getting precise on the limits of these abstractions feels like worthwhile work for 21st-century ethicists. After all, in its original guise, the trolley problem was not a deontological tool for us to pre-ponder and encode utilitarian values into autonomous vehicles. It was a thinking tool to illustrate the moral inevitability of presence.


I received LinkedIn invites after the talk. One man commented that he found my thoughts about empathy particularly insightful. I accepted his invitation because he took the time to listen and let me know my commentary had at least a modicum of value. I’ll never know what he felt as he sat in the audience during the panel. I barely know what I felt, as two and a half days of experience have already intervened to reshape the experience. So we grow, beings in time.


[1] Loyal blog readers will have undoubtedly noticed how many posts open with a similar sentence. I speak at a ton of conferences. I enjoy it: it’s the teacher’s instinct. As I write today, however, I feel alienated from the posts’ algorithmic repetition, betokening the rhythm of my existence. Weeks punctuated by the sharp staccato of Monday’s 15-minute (fat fully cut) checkins, the apportioned two hours to rewrite the sales narrative, the public appearances that can be given the space to dilate, and the perturbations flitting from interaction to interaction, as I gradually cultivate the restraint to clip empathy and guard my inside from noxious inputs. Tuesday morning, a mentor sent me this:

[Screenshot of the note my mentor sent, November 10, 2018]

[2] This is a loaded term. I’m using it here as a Bayesian would, but won’t take the time to unpack the nuances in this post. I interviewed Peter Wang for the In Context podcast yesterday (slated to go live next week) and we spoke about the deep transformation of the concept of “software” we’re experiencing as the abstraction layer that commands computers to perform operations moves ever closer to the data. Another In Context guest, David Duvenaud, is allergic to the irresponsible use of the word “inference” in the machine learning community (here’s his interview). Many people use inference to refer to a prediction made by a trained algorithm on new data it was not trained on: so, for example, if you make a machine learning system that classifies cats and dogs, the training stage is when you show the machine many examples of images with labels cat and dog and the “inference” stage is when you show the machine a new picture without a label and ask it, “is this a cat or a dog?” Bayesians like Duvenaud (I think it’s accurate to refer to him that way…) reserve the term inference for the act of updating the probability of a hypothesis in light of new observations and data. Both cases imply the delicate dance of generalization and induction, but imply it in different ways. Duvenaud’s concern is that by using the word imprecisely, we lose the nuance and therefore our ability to communicate meaningfully and therefore hamper research and beauty.
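For readers who want the distinction pinned down, here is a minimal sketch with invented numbers. It is only my illustration of the footnote’s two senses of the word, not anyone’s production code.

```python
# Two senses of "inference," side by side; all data here is made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Sense 1 (common machine learning usage): inference = running a trained model on
# new, unlabeled data.
X_train = np.array([[0.2], [0.4], [0.6], [0.8]])   # a single toy feature per image
y_train = np.array([0, 0, 1, 1])                   # 0 = cat, 1 = dog
clf = LogisticRegression().fit(X_train, y_train)   # training stage
print(clf.predict([[0.7]]))                        # "inference" stage: is this a cat or a dog?

# Sense 2 (Bayesian usage): inference = updating the probability of a hypothesis
# in light of new observations. Simplest case: a Beta-Bernoulli conjugate update.
alpha, beta = 1.0, 1.0            # uniform prior over a coin's bias
observations = [1, 1, 0, 1]       # new data: three heads, one tail
alpha += sum(observations)
beta += len(observations) - sum(observations)
print(alpha / (alpha + beta))     # posterior mean of the bias, roughly 0.67
```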

[3] Franco Moretti once told me that similar areas of the brain light up when people read Finnegans Wake (or was it Ulysses? or was it Portrait of the Artist?) and the Bible (maybe Ecclesiastes?).

The featured image is Édouard Manet’s Olympia, unveiled in Paris in 1865. In the context of this post, it illustrates the utter impossibility of our empathizing with Olympia. The scorn and contempt in her eyes protects her and gives her power. She thwarts any attempt at possession through observation and desire, perhaps because she is so distanced from the maid offering her flowers, deflecting her gaze out towards the observer but looking askance, protecting within her the intimations of what she has just experienced, of the fact that there was a real lover but it was and will never be you. Manet cites Titian’s Venus of Urbino (1534), but blocks all avenues for empathy and connection, empowering Olympia through her distance.

Titian’s Venus of Urbino has a little sleeping dog, not a bristling black cat

Transitioning from Academia to Business

The wittiest (and longest) tweet thread I saw this week was (((Curtis Perry)))‘s masterful narrative of the life of a graduate student as kin to the life of Job:

The first tweet chapter in Perry’s grad student life of Job. For the curious, Perry’s Twitter profile reads: Quinquegenarian lit prof and chronic feeder of Titivillus. Professional perseverator. Fellowship of the half-circle.

The timing of the tweet epic was propitious in my little subjective corner of the universe: Just two days before, I’d given a talk for Stanford’s Humanities Education Focal Group about my transition from being a disgruntled PhD in comparative literature to being an almost-functioning-normal-human-being executive at an artificial intelligence startup and a venture partner at a seed-stage VC firm.

Many of the students who attended the talk, ranging from undergrad seniors to sixth- or seventh-year PhDs, reached out afterwards to thank me and ask for additional advice. It was meaningful to give back to the community I came from and provide advice of a kind I sought but couldn’t find (or, more accurately, wasn’t prepared to listen to) when I struggled during the last two years of my PhD.

This post, therefore, is for the thousands of students studying humanities, fearing the gauntlet of the academic job market, and wondering what they might do to explore a different career path or increase their probability of success once they do. I offer only the anecdotes of one person’s successes and failures. Some things will be helpful for others; some will not. If nothing else, it serves as testimony that people need not be trapped in the annals of homogeneity. The world is a big and mighty place.


Important steps in my transition

Failure

As I narrated in a previous post, I hit rock bottom in my last year of graduate school. I remember sitting in Stanford’s Green Library in a pinnacle of anxiety, festering in a local minimum where I couldn’t write, couldn’t stick with the plan for my dissertation, couldn’t do much of anything besides play game after game of Sudoku to desperately pass the time. I left Stanford for a bit. I stopped trying. Encouraged by my worrying mother, I worked at a soup kitchen in Boston every day, pretending it was my job. I’d go in every day at 7:00 am and leave every afternoon at 3:00 pm. Working with my hands, working for others, gradually nurtured me back to stability.

It was during this mental breakdown that applications for a sixth-year dissertation fellowship were due. I forced myself to write a god awful application in the guest bedroom at my parents’ Boston townhouse. It was indescribably hard. Paralyzed, I submitted an alienated abstract and dossier. A few months later, I received a letter informing me that the Humanities Center committee had rejected my application.

I remember the moment well. I was at Pluto’s salad joint on University Avenue in Palo Alto. By then, I had returned to Stanford and was working one day per week at Saint Martin’s Soup Kitchen in San Francisco, 15 hours per week at a location-based targeted advertising startup called Vantage Local (now Frequence), 5 hours per week tutoring Latin and Greek around the Valley, playing violin regularly, running, and reserving my morning hours to write. I had found balance, balance fit for my personality and needs. I had started working with a career counselor to consider alternative career paths, but had yet to commit to a move out of academia.

The letter gave me clarity. It was the tipping point I needed to say, that’s it; I’m done; I’m moving on. It did not feel like failure; it felt like relief. My mind started to plot next steps before I finished reading the rejection letter.

Luck

The timing couldn’t have been better. My friend Anaïs Saint-Jude had started Bibliotech,  a forward-thinking initiative devoted to exploring the value graduate-level training in the humanities could provide to technology companies. I was fortunate enough to be one of the students who pitched their dissertation to conference attendees, including Silicon Valley heavyweights like Geoffrey Moore, Edgar Masri, Jeff Thermond, Bob Tinker, and Michael Korcuska, all of whom have since become mentors and friends. My intention to move into the private sector came off loud and clear at the event. Thanks to my internship at the advertising company, I had some exposure to the diction and mores of startups. The connections I made there were invaluable to my career. People opened doors that would have otherwise remained shut. All I needed was the first opportunity, and a few years to recalibrate my sense of self as I adapted to the reward system of the private sector.

Authenticity

I’ve mentored a few students who made similar transitions from academia into tech companies, and all have asked me how to defend their choice of pursuing a PhD instead of going directly into marketing, product, sales, whatever the role may be. Our culture embraces a bizarre essentialism, where we’re supposed to know what we want to be when we grow up from the ripe old age of 14, as opposed to finding ourselves in the self we come to inhabit through the serendipitous meanderings of trial and tribulation. (Ben Horowitz has a great commencement speech on the fallacy of following your passion.) The symptom of this essentialism in the transition from humanities to, say, marketing, is this strange assumption that we need to justify the PhD as playing a part in a logical narrative, as some step in a master plan we intended from the beginning.

That just can’t be true. I can’t think of anyone who pursues a PhD in French literature because she feels it’s the most expedient move for a successful career in marketing. We pursue literature degrees because we love literature, we love the life of the mind, we are gluttons for the riches of history and culture. And then we realize that the professional realities aren’t quite what we expected. And, for some of us, acting for our own happiness means changing professions.

One thing I did well in my transition was to remain authentic. When I interviewed and people asked me about my dissertation, I got really great at giving them a 2-minute, crisp explanation of what I wrote about and why it was interesting. What they saw was an ability to communicate a complex topic in simple, compelling words. They saw the marks of a good communicator, which is crucial for enterprise marketing and sales. I never pretended I wanted to be a salesperson. I showed how I had excelled in every domain I’d played in, and could do the same in the next challenge and environment.

Selecting the right opportunity

Every company is different. Truly. Culture, stage, product, ethics, goals, size, role, so many factors contribute to shaping what an experience is like, what one learns in a role, and what future opportunities a present experience will afford.

When I left graduate school, I intentionally sought a mid-sized private company that had a culture that felt like a good fit for a fresh academic. It took some time, but I ended up working at a legaltech startup called Intapp. I wanted an environment where I’d benefit from a mentor (after all, I didn’t really have any business skills besides writing and teaching) and where I would have insight into strategic decisions made by executive management (as opposed to being far removed from executives at a large company like Google or Facebook). Intapp had the right level of nerdiness. I remember talking to the CTO about Confucius during my interviews. I plagued my mentor Dan Bressler with endless existential drivel as I went through the growing pains of becoming a business person. I felt embarrassed and pushy asking for a seat at the table for executive meetings, but made my way in on multiple occasions. Intapp sold business software to law firms. The what of the product was really not that interesting. But I learned that I loved the how, loved supporting the sales teams as a subject matter expert on HIPAA and professional responsibility, loved the complex dance of transforming myriad input from clients into a general product, loved writing on tight timelines and with feedback across the organization. I learned so incredibly much in my first role. It was a foundation for future success.

I am fortunate to be a statistical anomaly as a woman. Instead of applying for jobs where I satisfy skill requirements, I tend to seek opportunities with exponential growth potential. I come in knowing a little about the role I have to accomplish, and leave with a whole new set of skills. This creates a lot of cognitive dissonance and discomfort, but I wouldn’t have it any other way. My grey hairs may lead me to think otherwise soon, but I doubt it.

Humility

Last but certainly not least, I have always remained humble and never felt like a task was beneath me. I grew up working crappy jobs as a teenager: I was a janitor; a hostess; a busgirl; a sales representative at the Bombay Company in the mall in Salem, New Hampshire; a clerk at the Court Theater at the University of Chicago; a babysitter; a lawnmower; an intern at a Blackberry provisioning tech company, where I basically drove a big truck around and lugged stuff from place to place and babysat the CEO’s daughter. I see no work as beneath me, and view grunt work as the dues I pay for the amazing, amazing opportunities I have in my work (like giving talks to large audiences and meeting smart and inspiring people almost every day).

Having this humility helps enormously when you’re an entrepreneur. I didn’t mind starting as a marketing specialist, as I knew I could work hard and move up. I’ll yell at the computer in frustration when I have to upload email addresses to a GoToWebinar console or get the HTML to format correctly in a Mailchimp newsletter, but I’m working on showing greater composure as I grow into a leader. I always feel like I am going to be revealed as a fraud, as not good enough. This incessant self-criticism is a hallmark of my personality. It keeps me going.


Advice to current students

A rad Roman mosaic with the Greek dictum, Know Thyself

Finish your PhD

You’ll buy options for the future. No one cares what you studied or what your grades were. They do care that you have a doctorate and it can open up all sorts of opportunities you don’t think about when you’re envisioning the transition. I’ve lectured at multiple universities and even taught a course at the University of Calgary Faculty of Law. This ability to work as an adjunct professor would have been much, much harder to procure if I were only ABD.

This logic may not hold for students in their first year, where 4 years is a lot of sunk opportunity cost. But it’s not that hard to finish if you lower your standards and just get shit done.

Pity the small-minded

Many professors and peers will frown upon a move to business for all sorts of reasons. Sometimes it’s progressive ideology. Sometimes it’s insecurity. Most of the time it’s just lack of imagination. Most humanists profess to be relativists. You’d think they could do so when it comes to selecting a profession. Just know that the emotional pressure of feeling like a failure if you don’t pursue a research career dwindles almost immediately when your value compass clocks a different true north.

Accept it’s impossible to imagine the unknown

The hardest part of deciding to do something radically different is that you have no mental model of your future. If you follow the beaten path, you can look around to role model professors and know what your life will look like (with some variation depending on which school you end up in). But it’s impossible to know what a different decision will lead to. This riddles the decision with anxiety, requiring something like a blind leap of faith. A few years down the line, you come to appreciate the creative possibility of a blank future.

Explore

There are so many free meetups and events taking place everywhere. Go to them. Learn something new. See what other people are doing. Ask questions. Do informational interviews. Talk to people who aren’t like yourself. Talk to me! Keep track of what you like and don’t like.

Collaborate

One of the biggest changes in moving from academia to business is the how of work. Cultures vary, but businesses are generally radically collaborative places and humanities work is generally isolated and entirely individual. It’s worthwhile to co-author a paper with a fellow grad student or build skills running a workshop or meetup. These logistics, communication, and project management skills are handy later on (and are good for your resume).

Experiment with different writing styles

Graduate school prepares you to write 20-page papers, which are great preparation for peer-reviewed journals and, well, nothing else. It doesn’t prepare you to write a good book. It doesn’t prepare you to write a good blog post or newspaper article. Business communication needs to be terse and on point so people can act on it. Engineers need guidance and clarity, need a sense of continuity of purpose. Customers need you to understand their point of view. Audiences need stories or examples to anchor abstract ideas. Having the agility to fit form to purpose is an invaluable skill for business communications. It’s really hard. Few do it well. Those who do are prized.

Learn how to give a good talk

Reading a paper aloud to an audience is the worst. Just don’t do it. People like funny pictures.

Know thyself

There is no right path. We’re all different. Business was a great path for me, and I’ve molded my career to match my interests, skill, personality, and emotional sensitivities. You may thrive in a totally different setting. So keep track of what you like and dislike. Share this thinking with others you love and see if what they think of you is similar to what you think of you. Figuring this out is the trickiest and potentially most valuable exercise in life. And sometimes it’s a way to transform what feels like a harrowing experience into an opportunity to gain yet another inch of soul.


The featured image is from William Blake’s illustrated Book of Job, depicting the just man rebuked by his friends. Blake has masterful illustrations of the Bible, including this radical image from Genesis, where Eve’s wandering eye displays a proleptic fall from grace, her vision, her fantasy too large for the limits of what Adam could safely provide - a heroine of future feminists, despite her fall. 


Censorship and the Liberal Arts

A few months ago, I interviewed a researcher highly respected in his field to support marketing efforts at my company. Before conducting the interview, I was asked to send my questions for pre-approval by the PR team of the corporation with which the researcher is affiliated. Backed by the inimitable power of their brand, the PR scions struck crimson lines through nearly half my questions. They were just doing their job, carrying out policy to draw no public attention to questions of ethics, safety, privacy, security, fear. Power spoke. The sword showed that it is always mightier than the pen, fool ourselves though we may.

Pangs of injustice rose fast in my chest. And yet, I obeyed.

Was this censorship? Was I a coward?

Intellectual freedom is nuanced in the private sector because when we accept a job we sign a social contract. In exchange for a salary and a platform for personal development and growth, we give up full freedom of expression and absorb the values, goals, norms, and virtual personhood of the organization we join. The German philosopher Immanuel Kant explains the tradeoffs we make when constructing our professional identity in What is Enlightenment? (apologies for the long quotation, but it needed to be cited in full):

“This enlightenment requires nothing but freedom-and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue-drill!” The tax collector: “Do not argue-pay!” The pastor: “Do not argue-believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

On the other hand, the private use of reason may frequently be narrowly restricted without especially hindering the progress of enlightenment. By ‘public use of one’s reason’ I mean that use which a man, as scholar, makes of it before the reading public. I call ‘private use’ that use which a man makes of his reason in a civic post that has been entrusted to him. In some affairs affecting the interest of the community a certain [governmental] mechanism is necessary in which some members of the community remain passive. This creates an artificial unanimity which will serve the fulfillment of public objectives, or at least keep these objectives from being destroyed. Here arguing is not permitted: one must obey. Insofar as a part of this machine considers himself at the same time a member of a universal community-a world society of citizens-(let us say that he thinks of himself as a scholar rationally addressing his public through his writings) he may indeed argue, and the affairs with which he is associated in part as a passive member will not suffer. Thus it would be very unfortunate if an officer on duty and under orders from his superiors should want to criticize the appropriateness or utility of his orders. He must obey. But as a scholar he could not rightfully be prevented from taking notice of the mistakes in the military service and from submitting his views to his public for its judgment. The citizen cannot refuse to pay the taxes levied upon him; indeed, impertinent censure of such taxes could be punished as a scandal that might cause general disobedience. Nevertheless, this man does not violate the duties of a citizen if, as a scholar, he publicly expresses his objections to the impropriety or possible injustice of such levies. A pastor, too, is bound to preach to his congregation in accord with the doctrines of the church which he serves, for he was ordained on that condition. But as a scholar he has full freedom, indeed the obligation, to communicate to his public all his carefully examined and constructive thoughts concerning errors in that doctrine and his proposals concerning improvement of religious dogma and church institutions. This is nothing that could burden his conscience. For what he teaches in pursuance of his office as representative of the church, he represents as something which he is not free to teach as he sees it. He speaks as one who is employed to speak in the name and under the orders of another. He will say: “Our church teaches this or that; these are the proofs which it employs.” Thus he will benefit his congregation as much as possible by presenting doctrines to which he may not subscribe with full conviction. He can commit himself to teach them because it is not completely impossible that they may contain hidden truth. In any event, he has found nothing in the doctrines that contradicts the heart of religion. For if he believed that such contradictions existed he would not be able to administer his office with a clear conscience. He would have to resign it. Therefore the use which a scholar makes of his reason before the congregation that employs him is only a private use, for no matter how sizable, this is only a domestic audience. In view of this he, as preacher, is not free and ought not to be free, since he is carrying out the orders of others. 
On the other hand, as the scholar who speaks to his own public (the world) through his writings, the minister in the public use of his reason enjoys unlimited freedom to use his own reason and to speak for himself. That the spiritual guardians of the people should themselves be treated as minors is an absurdity which would result in perpetuating absurdities.”

Kant makes a tricky distinction between our public and private use of reason. What he calls “public use of reason” is what we normally consider to be private: the sacred space of personal opinion, not as unfettered stream of consciousness, but as the reflections and opinions that result from our sense of self as part of the species homo sapiens (some criticize this humanistic focus and think we should expand the space of commonality to include animals, plants, robots, rocks, wind, oceans, and other types of beings). Beliefs that are fair because they apply to me just as they apply to you and everyone else. Kant deems this “public” because he espouses a particular take on reason that is tied up with our ability to project ourselves as part of a larger universal we call humanity: for Kant, our freedom lies not in doing whatever we want, not in behaving like a toddler who gets to cry on a whim or roam around without purpose or drift in opiate stupor, but rather in our willingly adhering to self-imposed rules that enable membership in a collectivity beyond the self. This is hard to grasp, and I’m sure Kant scholars would poke a million holes in my sloppy interpretation. But, at least for me, the point here is that public reason relates to the actions of our mind when we consider ourselves as citizens of the world, which, precisely because it is so broad, permits fierce individuality.

By contrast, “private use of reason” relates to a sense of self within a smaller group, not all of humanity. So, when I join a company, by making that decision, I willingly embrace the norms, culture, and personhood of this company. Does this mean I create a fictional sub-self every time I start a new job or join some new club or association? And that this fictional self is governed by different rules than the real me that exercises public reason in the comfort of my own mind and conscience? I don’t think so. It would require a fictional sub-self if the real self were a static thing that persists over time. But there’s no such thing as the real self. It’s a user illusion (hat tip to Dan Dennett for the language). We come as dyads and triads, the connections between the neurons in our brains ever morphing to the circumstances we find ourselves in. Because we are mortal, because we don’t have infinite time to explore the permutations of possible selves that would emerge as we shapeshift from one collectivity to the next, it’s important that we select our affiliations carefully, especially if we accept the tradeoffs of “private use of reason.” We don’t have time to waste our willful obedience on groups whose purpose and values skew too far from what our public reason holds dear. And yet, the restriction of self-interest that results from being part of a team is quite meaningful. It is perhaps the most important reason why we must beware the lure of a world without work.

This long exploration of Kant’s distinction between public and private reason leads to the following conclusion: No, I argue, it was not an act of cowardice to obey the PR scions when they censored me. I was exercising my “private use of reason,” as it would not have been good for my company to pick a fight. In this post, by contrast, I exercise my “public use of reason” and make manifest the fact that, as a human being, I feel pangs of rage against any form of censorship, against any limitation of inquiry, curiosity, discourse, and expression.

But do I really mean any? Can I really mean any in this age of Trumpism, where the First Amendment serves as a rhetorical justification to traffic fake news, racism, or pseudo-scientific justifications to explain why women don’t occupy leadership roles at tech companies?* And, where and how do we draw the line between actions that aren’t right according to public reason but are right according to private reason and those that are simply not right, period? By making a distinction between general and professional ethics, do we not risk a slippery slope where following orders can permit atrocities, as Hannah Arendt explores in Eichmann in Jerusalem?

These are dicey questions.

There are others that are even more dicey and delicate. What happens if the “private use of reason” is exercised not within a corporation or office, affiliations we choose to make (should we be fortunate enough to choose…), but in a collectivity defined by traits like age, race, gender, sexuality, religion, or class (where elective choice is almost always absent except when it absolutely is present (e.g., a decision to be transgender))? These categories are charged with social meaning that breaks Kant’s logic. Naive capitalists say we can earn our class through hard work. Gender and race are not discrete categories but continuous variables on a spectrum defined by local contexts and norms: In some circles, gender is pure expression of mind over body, a malleable sense of self in a dance with the impressions and reactions of others; in others, the rules of engagement are fixed to the point of submission and violence. Identity politics don’t follow the logic of the social contract. A willed trade off doesn’t make sense here. What act of freedom could result from subsuming individual preference for the greater good of a universal or local whole? (Open to being told why I’m totally off the mark, as these issues are far from my forte.)

What’s dangerous is when the experience of being part of a minority expresses itself as willed censorship, as a cloak to avoid the often difficult challenge of grappling with the paradoxical twists of private and public reason. When the difficult nuances of ethics reduce to the cocoon of exclusion, thwarting the potential of identifying common ground.

The censorship I accepted when I constrained my freedom as a professional differs from the censorship contemporary progressives demand from professors and peers. I agree with the defenders of liberalism that the distinction between private and public reason should collapse at the university. That the university should be a place where young minds are challenged, where we flex the muscles of transforming a gut reaction into an articulated response. Where being exposed to ideas different from one’s own is an opportunity for growth. Where, as dean of students Jay Ellison wrote to the incoming class of 2020 at the University of Chicago, “we do not support so called ‘trigger warnings,’ we do not cancel invited speakers because their topics might prove controversial,** and we do not condone the creation of intellectual ‘safe spaces’ where individuals can retreat from ideas and perspectives at odds with their own.” As an alumna of the University of Chicago, I felt immense pride at reading Bret Stephens’ recent New York Times op-ed about why Robert Zimmer is America’s best university president. Gaining practice in the art of argument and debate, in reading or hearing an idea and subjecting it to critical analysis, in appreciating why we’ve come to espouse some opinion given the set of circumstances afforded to us in our minute slice of experience in the world, in renting our positions until evidence convinces us to change our point of view, in deeply listening to others to understand why they think what they think so we can approach a counterargument from a place of common ground, all of these things are the foundations of being a successful professional. Being a good communicator is not a birthright. It is a skill we have to learn and exercise just like learning how to ride a bike or code or design a website. Except that it is much harder, as it requires a Stoic’s acceptance that we cannot control the minds or emotions of others; we can only seek to influence them from a place of mutual respect.

Given the ungodly cost of a university education in the United States, and our society’s myopic focus on creating productive workers rather than skeptical citizens, it feels horribly elitist to advocate for the liberal arts in this century of STEM, robots, and drones. But my emotions won’t have it otherwise: They beat with the proud tears of truth and meaning upon reading articles like Marilynne Robinson’s What Are We Doing Here?, where she celebrates the humanities as our reverence for the beautiful, for the possible, for the depth we feel in seeing words like grandeur and the sadness that results when we imagine a world without the vastness of the Russian imagination or the elegance of the Chinese eye and hand.

But as the desire to live a meaningful life is not enough to fund the liberal arts, perhaps we should settle for a more pragmatic argument. Businesses are made of people, technologies are made by people, technologies are used by people. Every day, every person in every corporation faces ethical conundrums like the censorship example I outlined above. How can we approach these conundrums without tools or skills to break down the problem? How can we work to create the common ground required for effective communication if we’ve siphoned ourselves off into the cocoon of our subjective experience? Our universities should evolve, as the economic-social-political matrix is not what it once was. But they should not evolve at the expense of the liberal arts, which teach us how to be free.

*One of the stranger interviews James Damore conducted after his brief was leaked from Google was with the conservative radio host Stefan Molyneux, who suggested that conservatives and libertarians make better programmers because they are accustomed to dissecting the world in clear, black and white terms, as opposed to espousing the murky relativism of the liberals. It would be a sad world indeed if our minds were so inflexible that they lacked the ability to cleave a space to practice a technical skill.

**Sam Harris has discussed academic censorship and the tyranny of the progressives widely on the Waking Up podcast (and has met no lack of criticism for doing so), interviewing figures like Charles Murray, Nicholas Christakis, Mark Lilla, and others.

The featured image is from some edition of Areopagitica, a polemical tract that John Milton (yep, the author of Paradise Lost) addressed to the English Parliament, in the form of a speech, to protest censorship. In this speech, Milton argues that virtue is not innate but learned, that just as we have to exercise our self-restraint to achieve the virtue of temperance, so too should we be exposed to all sorts of ideas from all walks of life to train our minds in virtue, to give ourselves the opportunity to be free. I love that bronze hand.

Why Study Foreign Languages?

My ability to speak multiple languages is a large part of who I am.* Admittedly, the more languages I learn, the less mastery I have over each of the languages I speak. But I decided a while back I was ok with trading depth for breadth because I adore the process of starting from scratch, of gradually bringing once dormant characters to life, of working with my own insecurities and stubbornness as people respond in English to what must sound like pidgin German or Italian or Chinese, of hearing how the tone of my voice changes in French or Spanish, absorbing the Fanonian shock when a foreign friend raises his** eyebrows upon first hearing me speak English, surprised that my real, mother-tongue personality is far more harsh and masculine than the softer me embodied in metaphors of my not-quite-accurate French.***

You have to be comfortable with alienation to love learning foreign languages. Or perhaps so aware of how hard it is to communicate accurately in your mother tongue that it feels like a difference of degree rather than kind to express yourself in a language that’s not your own. Louis-Ferdinand Céline captures this feeling well in Mort à Crédit (one of the few books whose translated English title, Death on the Installment Plan, may be superior to the original!), when, as an exchange student in England, he narrates the gap between his internal dialogue and the self he expresses in his broken English to strangers at a dinner table. As a ruthless self critic, I’ve taken great solace in being able to hide behind a lack of precision: I wanted to write my undergraduate BA thesis (which argued that Proust was decidedly not Platonic) in French because the foreign language was a mask for the inevitable imperfection of my own thinking. Exposing myself, my vulnerabilities, my imperfections, my stupidity, was too much for me to handle. I felt protected by the veil of another tongue, like Samuel Beckett or Nabokov**** deliberately choosing to write in a language other than their own to both escape their past and adequately capture the spirit of their present.

But there’s more than just a desire to take refuge in the sanctuary of the other. There’s also the gratitude of connection. The delight the champagne producer in a small town outside Reims experiences upon learning that you, an American, have made the effort to understand her culture. The curiosity the Bavarian scholar experiences when he notices that your German accent is more hessisch than bayerisch (or, in Bavarian, bairisch, as one reader pointed out), his joy at teaching you how to gently roll your r’s and sound more like a southerner when you visit Neuschwanstein and marvel at the sublime decadence of Ludwig II. The involuntary smile that illuminates the face of the Chinese machine learning engineer on his or her screening interview when you tell him or her about your struggles to master Chinese characters. Underlying this is the joy we all experience when someone makes an effort to understand us for who we are, to crack open the crevices that permit deeper connections, to further our spirituality and love.

In short, learning a new language is wonderful. And the tower of Babel separating one culture from another adds immense richness to our world.

To date, linguae francae have been the result of colonial power and force: the world spoke Greek because the Greeks had power; the world spoke French because the French had power; the world speaks English because the Americans have had power (time will tell if that’s true in 20 years…). Efforts to synthesize a common language, like Esperanto or even Leibniz’s Universal Characteristic, have failed. But Futurists claim we’re reaching a point where technology will free us from our colonial shackles. Neural networks, they claim, will be able to apply their powers of composition and sequentiality to become the trading floor or central exchange for all the world’s languages, a no man’s land of abstraction general enough to represent all the nuances of local communication. I’m curious to know how many actual technologists believe this is the case. Certainly, there have been some really rad breakthroughs of late, as Gideon Lewis-Kraus eloquently captured in his profile of the Google Brain team and as the Economist describes in a tempered article about tasks automated translators currently perform well. My friend Gideon Mann and I are currently working on a fun project where we send daily emails filtered through the many available languages on Google Translate, which leads to some cute but generally comprehensible results (the best part is just seeing Nepali or Zulu show up in my inbox). On the flip side, NLP practitioners like Yoav Goldberg find these claims arrogant and inflated: the Israeli scientist just wrote a very strong Medium post critiquing a recent arXiv paper by folks at MILA that claims to generate high-quality prose using generative adversarial networks.*****

Let’s assume, for the sake of the exercise, that the tools will reach high enough quality performance that we no longer need to learn another language to communicate with others. Will language learning still be a valuable skill, or will it be outsourced to computers like multiplication?

I think there’s value in learning foreign languages even if computers can speak them better than we can. Here are some other things I value about language learning:

  • Foreign languages train your mind in abstraction. You start to see grammatical patterns in how languages are constructed and can apply these patterns to rapidly acquire new languages once you’ve learned one or two.
  • Foreign languages help you appreciate how our experiences are shaped by language. For example, in English we fall in love with someone, in French we fall in love of someone, in German we fall in love in someone. Does that directionality impact our experience of connection?
  • Foreign languages force you to read things more slowly, thereby increasing your retention of material and interpretative rigor.
  • Foreign languages encourage empathy and civic discourse, because you realize the relativity of your own ideas and opinions.
  • Foreign languages open new neural pathways, increasing your creativity.
  • Foreign languages are fun and it’s gratifying to connect with people in their mother tongue!
  • Speaking in a foreign language adds another level of mental difficulty to any task, making even the most boring thing (or conversation) more interesting.

I also polled Facebook and Twitter to see what other people thought. Here’s a selection of responses:

[Screenshots of Facebook and Twitter responses]

The best part of this exercise was how quickly and passionately people responded. It was a wonderful testament to open-mindedness, curiosity, courage, and a thirst for learning in an age when values like these are threatened. Let’s keep up the good fight!

*Another perk of living in Canada is that I get to speak French on a regular basis! Granted, Québécois French is really different from my Parisian French, but it’s still awesome. And I’m here on a francophone work permit, which was the fastest route to legal working status before the fast-track tech visa program that begins today.

**Gender deliberate.

***It really irritates me when people say French is an easy language for native English speakers to learn. It’s relatively easy (compared with, say, Chinese or Arabic) to reach proficiency in French, but extremely difficult to achieve fluency in the language’s full expressive power, which includes ironic nuances across different concessive phrases (“although this happened…”), the elegant inversion of subject and verb to intimate doubt or suspicion, the ability to chain conditional phrases together, resonances with literary texts, and so much more.

****A reader wrote in correcting this statement about Nabokov. Apparently Nabokov could read and write in English before Russian. Said reader titled his email to me “Vivian Darkbloom,” an anagram of Vladimir Nabokov and a character representing the author who makes a cameo appearance in Lolita. If it’s false to claim that Nabokov uses English as a protective veil for his psychology, it may be true that anagrammatic cameos are his means of cloaking presence and subjectivity, as he also appears - like Hitchcock in his films - as the character Blavdak Vinomori in King, Queen, Knave.

*****Here’s the most interesting technical insight from Goldberg’s post: “To summarize the technical contribution of the paper (and the authors are welcome to correct me in the comments if I missed something), adversarial training for discrete sequences (like RNN generators) is hard, for the following technical reason: the output of each RNN time step is a multinomial distribution over the vocabulary (a softmax), but when we want to actually generate the sequence of symbols, we have to pick a single item from this distribution (convert to a one-hot vector). And this selection is hard to back-prop the gradients through, because its non-differentiable. The proposal of this paper is to overcome this difficulty by feeding the discriminator with the softmaxes (which are differentiable) instead of the one-hot vectors.” Goldberg cites the MILA paper as a symptom of a larger problem in current academic discourse in the ML and technology community, where platforms like arXiv short-circuit the traditional peer-review process. This is a really important and thorny issue, as traditional publishing practices slow research, reserve the privilege of research to a select few, and place paywalls around access. However, it’s also true that naive readers want to trust the output of top-tier research labs, and without proper quality controls we’ll fall prey to reputation. A dangerous recent example of this was the Chinese study of automatic criminality detection, masterfully debunked by some friends at Google.
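To make the point Goldberg quotes a little more concrete, here is a minimal PyTorch sketch of my own (not code from the paper or from Goldberg’s post): picking a single symbol via argmax/one-hot blocks gradients back to the generator’s logits, while passing the softmax itself to a discriminator keeps everything differentiable. The toy “discriminator” and vocabulary size are purely illustrative.

```python
# Why one-hot selection breaks back-propagation, and why feeding the
# discriminator a softmax does not. Toy illustration; shapes and the
# "discriminator" (a fixed linear score) are made up for clarity.
import torch
import torch.nn.functional as F

vocab_size = 5
logits = torch.randn(vocab_size, requires_grad=True)  # one RNN time step's scores

# Soft path: a differentiable distribution over the vocabulary.
probs = F.softmax(logits, dim=-1)

# Hard path: pick a single symbol as a one-hot vector. argmax is piecewise
# constant, so no gradient can flow back to the logits through this selection.
one_hot = F.one_hot(probs.argmax(), num_classes=vocab_size).float()

# Stand-in for a discriminator: a fixed linear score over the vocabulary vector.
w = torch.randn(vocab_size)

soft_score = (w * probs).sum()
soft_score.backward()
print(logits.grad)  # non-zero: gradients reach the generator's logits

hard_score = (w * one_hot).sum()
# hard_score.backward()  # would raise an error: one_hot is detached from the
#                        # graph, which is exactly the non-differentiability the
#                        # paper sidesteps by feeding the discriminator softmaxes.
```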

The featured image comes from Vext Magazine’s edition of Jorge Luis Borges’s Library of Babel (I’d never heard of Vext until just now, but it looks worth checking out!). It’s a very apt representation of the first sentence of Borges’s wonderful story: The universe (which others call the Library) is composed of an indefinite and perhaps infinite number of hexagonal galleries, with vast air shafts between, surrounded by very low railings. From any of the hexagons one can see, interminably, the upper and lower floors. Having once again moved to a new city, being once again in a state of incubation and potentiality, and yet from an older vantage point, where my sense of self and identity is different from what it was in my 20s, I’m drawn to this sentence: Like all men of the Library, I have traveled in my youth; I have wandered in search of a book, perhaps the catalogue of catalogues…

The Utility of the Humanities in the 21st Century

I did my PhD in Comparative Literature at Stanford. There is likely no university in the US with a culture more antithetical to the humanities: Stanford embodies the libertarian, technocratic values of Silicon Valley, where disruptive innovation has crystallized into a platitude* and engineers are the new priestly caste. Stanford had massive electrical engineering and computer science graduate cohorts; there were five students in my comparative literature cohort (all women, from diverse backgrounds - and a large group compared with the two- or three-student cohorts in Italian, German, and French). I had been accepted into several graduate programs across the country, but felt a responsibility to study at a university where the humanities were threatened. I didn’t want the ivory tower, the prestigious rare book collection, the ability to misuse words like isomorphism and polymorphic because they sounded scientific (I was a math undergrad), the stultified comfort that Wordsworth and Shelley were on the minds of strangers on the street. I wanted to learn what it would mean to defend a discipline undervalued by society, in an age when universities were becoming private businesses catering to undergraduate student consumers and the rising costs of education made it borderline irresponsible not to pursue vocational training that would land a decent job coding for a startup. Stanford’s very libertarianism also enabled me to craft an interdisciplinary methodology - crossing literature, history of science and mathematics, analytic philosophy, and classics - that more conservative departments would never entertain. This was wonderful during my coursework, and my Achilles heel when I had to write a dissertation and build a professional identity more conservative departments could recognize. I went insane, but mustered the strength and resilience required to complete my dissertation (in retrospect, I’m very grateful I did, as having a PhD has enabled me to teach as adjunct faculty alongside my primary job). After graduation, I left academia for the greener, freer pastures of the private sector.

The 2008-2009 financial crisis took place in the midst of my graduate studies. Ever tighter departmental budgets exacerbated the identity crisis the humanities were already facing. Universities had to cut costs, and French departments or film studies departments or German departments were the first to go. This shrank the already minuscule demand for humanities faculty, and dramatically increased the anxiety my fellow PhDs and I felt about our future livelihoods. In keeping with the futurism of the Valley, Stanford (or at least a few professors at Stanford) was at the vanguard of considering alternative career paths for humanities PhDs: professors discussed shortening the time to degree, providing students with more vocational communications training so they could land jobs as social media marketers, and extolling the virtues of academic administration as a career path equal to that of a researcher. Others resisted vehemently. There was also a wave of activity defending the utility of the humanities to cultivate empathy and other social skills. I’ve spent a good portion of my life reading fiction, but must say it was never as rich a moral training ground as actual life experience. I’ve learned more about regulating my emotions and empathizing with others’ points of view in my four years in the private sector than I had in the 28 years of life before I embraced work as a career (rather than just a job). Some people are really hard to deal with, and you have to face these challenges head on to grow.

All this is context for my opinions defending the utility of the humanities in our contemporary society and economy. To be clear, in proposing these economic arguments, I’m not abandoning claims for the importance of the humanities in individual personal and intellectual development. On the contrary, I strongly believe that a balanced, liberal arts education is critical to foster the development of personal autonomy and civic judgment, to preserve and potentially resurrect the early republican (as political experiment, not party) goal that education cultivate critical citizens, not compliant economic agents. I was miserable as a graduate student, but don’t regret my path for a minute. And I think there is a case to be made that the humanities will be as important as - if not more important than - STEM to our national interests in the near future. Here’s why:

Technology and White-Collar Professions - In The Future of the Professions, Richard and Daniel Susskind demonstrate how technology is changing professions like medicine, law, investment management, accounting, and architecture. Their key insight is to define white-collar professionals structurally by the information asymmetry that exists between professional and client. Professionals know things it is hard for laymen to know: the tax code is complex and arcane, and it would take too much time for the Everyman (gender intentional) to understand it well enough to make judgments in her (gender intentional) favor. The same goes for diagnosing and treating an illness or managing the finances of a large corporation. The internet - and, perhaps more importantly, the new machine learning technologies that enable us to use the internet to answer hard, formerly professional, questions - levels this information asymmetry. Suddenly, tools can do what trained professionals used to do, and at much lower cost (contrast the billed hours of a good lawyer with the economies of scale of Google). As such, the skills and activities professionals need are changing and will continue to change. Working in machine learning, I can say from experience that we are nowhere near an age where machines are going to flat out replace people, creating a utopian world with universal basic income and bored Baudelaires assuaging ennui with opiates, sex, and poetry (laced with healthy doses of Catholic guilt). What is happening is that the day-to-day work of professionals is changing and will continue to change. Machines are ready and able to execute many of the repetitive tasks done by many professionals (think of young associates reviewing documents to find relevant information for a lawsuit - in 2015, the Second Circuit tried to define what it means to practice law by contrasting tasks humans can do with tasks computers can do). As machines creep ever further into work that requires thinking and judgment, critical thinking, creativity, interpretation, emotion, and reasoning will become increasingly important. STEM may just lead to its own obsolescence (AI software is now making its own AI software), and in doing so is increasing the value of professionals trained in the humanities. This value lies in the design methodologies required to transform what were once thought processes into statistical techniques, and to crystallize probabilistic outputs into intuitive features for non-technical users. It lies in creating the training data required to make a friendly chatbot. Most importantly, it lies in the empathy and problem-solving skills that will be the essence of professional work in the future.

Autonomy and Mores in the Gig Economy - In October 2015, I spoke at a Financial Times conference about corporate sustainability. The audience was filled with executives from organizations like the Hudson’s Bay Company (they started by selling beaver pelts and now own department stores like Saks Fifth Avenue) that had stayed in business over hundreds of years by gradually evolving and adding new business lines. The silver-haired rich men on the panel with me kept extolling the importance of “company values” as the key to keeping incumbents relevant in today’s society. And my challenge to them was to ask how modern, global organizations, in particular those with large, temporary 1099 workforces managed by impersonal algorithms, could cultivate mores and values like the small, local companies of the past. Indeed, I spent a few years helping international law firms build centralized risk and compliance operations, and in doing so came to appreciate that the Cravath model - an apprenticeship culture where skills, corporate culture, and mores are passed down from generation to generation because there is very low mobility between firms - simply does not scale to our mobile, changing, global workforce. As such, inculcating values takes a very different form and structure than it did in the past. We read a lot about how today’s careers are more like jungle gyms than ladders, with a need to constantly revamp and acquire new skills to keep up with changing technologies and demand, but this often overlooks the fact that companies - like clubs and societies - also used to shape our moral characters. You may say that user reviews (the five stars you can get as an Uber rider or Airbnb lodger) take the place of what was formerly the subjective judgment of colleagues and peers. But these cold metrics are a far cry from the suffering and satisfaction we experience when we break from or align with a community’s mores. This merits much more commentary than the brief suggestions I’ll make here, but I believe our globalized gig economy requires a self-reliant morality and autonomy that has no choice but to be cultivated apart from the workplace. And the seat of that cultivation would be some training in philosophy, ethics, and the humanities. Otherwise corporate values will be reduced to the cold rationality of some algorithm measuring OKRs and KPIs.

Ethics and Emerging Technologies - Just this morning, Guru Banavar, IBM’s Chief Science Officer for Cognitive Computing, published a blog post admonishing technologists building AI products that they “now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.” Banavar’s post is a very brief advertisement for the Partnership on AI that IBM, Google, Microsoft, Amazon, Facebook, and Apple have created to formalize attention around the ethical implications of the technologies they are building. Elon Musk co-founded OpenAI with a similar mission to research AI technologies with an eye towards ethics and safety. Again, there is much to say about the different ethical issues new technologies present (I surveyed a few a year ago in a Fast Forward Labs newsletter). The point here is that ethics is moving from a niche interest of progressive technologists to a core component of large corporate technology strategy. And the ethical issues new technologies pose are not trivial. It’s very easy to fall into Chicken Little logic traps (where scholars like Nick Bostrom speculate on worst-case scenarios just because they are feasible for us to imagine) that grab headlines instead of sticking with the discipline required to recognize how data technologies can amplify existing social biases. As Ted Underwood recently tweeted, doing this well requires both people who are motivated by critical thinking and people who are actually interested in machine learning technologies. But the “and” is critical, else technologists will waste a lot of time reinventing methods philosophers and ethicists have already honed. And even if the auditing of algorithms is carried out by technologists, humanists can help voice and articulate what they find. Finally, it goes without saying that we all need to sharpen our critical reading skills to protect our democracy in the age of Trump, filter bubbles, and fake news.

This is just a start. Each of these points can be developed, and there are many more to make. My purpose here is to shift the dialogue on the value of the humanities from utility in cultivating empathy and emotional character to real economic and social impact. The humanities are worth fighting for.

 

*For those unaware, Clayton Christensen coined the term disruptive innovation in The Innovator’s Dilemma. He contrasted it with sustaining innovation, the gradual technical improvements companies make to a product to meet market and customer demands. Inspired by Thomas Kuhn’s Structure of Scientific Revolutions, Christensen artfully demonstrates how great companies miss out on opportunities for disruptive innovation precisely because they are well run: disruptive innovations seize upon new markets with an unserved need, and only catch up to incumbents because technology can change faster than market preferences and demand. As disruption has crystallized into ideology, people often overlook that most products are sustaining innovations, incremental improvements upon an existing product or market need. It’s admittedly much more exciting to carry out a Copernican revolution, but if we consider that Trump may well be a disruptive innovator - one who identified a latent market whose needs were underserved, only to topple the establishment - we might sit back, pause, and reconsider our ideological assumptions.

The image is Jacques-Louis David’s The Death of Socrates from 1787. Plato sits at the front with his head down and his legs and arms peacefully and plaintively crossed.