What makes a memory profound?

Not every event in your life has had profound significance for you. There are a few, however, that I would consider likely to have changed things for you, to have illuminated your path. Ordinarily, events that change our path are impersonal affairs, and yet are extremely personal. - Don Juan Matus, a (potentially fictional) Yaqui shaman from Mexico

The windowless classroom was dark. We were sitting around a rectangular table looking at a projection of Rembrandt’s Syndics of the Drapers’ Guild. Seated opposite the projector, I could see student faces punctuate the darkness, arching noses and blunt haircuts carving topography through the reddish glow.

“What do you see?”

Barbara Stafford’s voice had the crackly timbre of a Pablo Casals record and her burnt-orange hair was bi-toned like a Rothko painting. She wore downtown attire, suits far too elegant for campus with collars that added movement and texture to otherwise flat lines. We were in her Art History 101 seminar, an option for University of Chicago undergrads to satisfy a core arts & humanities requirement. Most of us were curious about art but wouldn’t major in art history; some wished they were elsewhere. Barbara knew this.

“A sort of darkness and suspicion,” offered one student.

“Smugness in the projection of power,” added another.

“But those are interpretations! What about the men that makes them look suspicious or smug? Start with concrete details. What do you see?”

No one spoke. For some reason this was really hard. It didn’t occur to anyone to say something as literal as “I see a group of men, most of whom have long, curly, light-brown hair, in black robes with wide-brimmed tall black hats sitting around a table draped with a red Persian rug in the daytime.” Too obvious, like posing a basic question about a math proof (where someone else inevitably poses the question and the professor inevitably remarks how great a question it is to our curious but proud dismay). We couldn’t see the painting because we were too busy searching for a way of seeing that would show others how smart we were.

“Katie, you’re our resident fashionista. What strikes you about their clothing?”

Adrenaline surged. I felt my face glow in the reddish hue of the projector, watched others’ faces turn to look at mine, felt a mixture of embarrassment at being tokenized as the student who cared most about clothes and appearance and pride that Barbara found something worth noticing, in particular given her own evident attention to style. Clothes weren’t just clothes for me: they were both art and protection. The prospect of wearing the same J Crew sweater or Seven jeans as another girl had been cruelly beaten out of me in seventh grade, when a queen mean girl snidely asked, in chemistry class, if I knew that she had worn the exact same salmon-colored Gap button-down crew neck cotton sweater, simply in the cream color, the day before. My mom had gotten me the sweater. All moms got their kids Gap sweaters in those days. The insinuation was preposterous but stung like a wasp: henceforth I felt a tinge of awkwardness upon noticing another woman wearing an article of clothing I owned. In those days I wore long ribbons in my ponytails to make my hair seem longer than it was, like extensions. I often wore scarves, having admired the elegance of Spanish women tucking silk scarves under propped collared shirts during my senior year of high school abroad in Burgos, Spain. Material hung everywhere around me. I liked how it moved in the wind and encircled me in the grace I feared I lacked.

“I guess the collars draw your attention. The three guys sitting down have longer collars. They look like bibs. The collar of the guy in the middle is tied tight, barely any space between the folds. A silver locket emerges from underneath. The collars of the two men to his left (and our right) billow more, they’re bunchy, as if those two weren’t so anal retentive when they get dressed in the morning. They also have kinder expressions, especially the guy directly to the left of the one in the center. And then it’s as if the collars of the men standing to the right had too much starch. They’re propped up and overly stiff, caricature stiff. You almost get the feeling Rembrandt added extra air to these puffed up collars to make a statement about the men having their portrait done. Like, someone who had taste and grace wouldn’t have a collar that was so visibly puffy and stiff. Also, the guy in the back doesn’t have a hat like the others.”

Barbara glowed. I’d given her something to work with, a constraint from which to create a world. I felt like I’d just finished a performance, felt the adrenaline subside as students turned their heads back to face the painting again, shifted their attention to the next question, the next comment, the next brush stroke in Syndics of the Drapers’ Guild.

After a few more turns goading students to describe the painting, Barbara stepped out of her role as Socrates and told us about the painting’s historical context. I don’t remember what she said or how she looked when she said it. I don’t remember every class with her. I do remember a homework assignment she gave inspired by André Breton’s objet trouvé, a surrealist technique designed to get outside our standard habits of perception, to let objects we wouldn’t normally see pop into our attention. I wrote about my roommate’s black high-heeled shoes and Barbara could tell I was reading Nietzsche’s Birth of Tragedy because I kept referencing Apollo and Dionysus, godheads for constructive reason and destructive passion, entropy pulling us ever to our demise.[1] I also remember a class where we studied Cindy Sherman photos, in particular her self-portraits as Caravaggio’s Bacchus and her film still from Hitchcock’s Vertigo. We took a trip to the Chicago Art Institute and looked at a few paintings together. Barbara advised us never to use the handheld audio guides as they would pollute our vision. We had to learn how to trust ourselves and observe the world like scientists.

Cindy Sherman’s Untitled #224, styled after Caravaggio’s Bacchus
Cindy Sherman’s Untitled Film Still 21, styled after Hitchcock’s Vertigo

In the fourth paragraph of the bio on her personal website, Barbara says that “she likes to touch the earth without gloves.” She explains that this means she doesn’t just write about art and how we perceive images, but also “embodies her ideas in exhibitions.”

I interpret the sentence differently. To touch the earth without gloves is to see the details, to pull back the covers of intentionality and watch as if no one were watching. Arts and humanities departments are struggling to stay relevant in an age where we value computer science, mathematics, and engineering. But Barbara didn’t teach us about art. She taught us how to see, taught us how to make room for the phenomenon in front of us. Paintings like Rembrandt’s Syndics of the Drapers’ Guild were a convenient vehicle for training skills that can be transferred and used elsewhere, skills which, I’d argue, are not only relevant but essential to being strong leaders, exacting scientists, and respectful colleagues. No matter what field we work in, we must all work all the time to notice our cognitive biases, the ever-present mind ghosts that distort our vision. We must make room for observation. Encounter others as they are, hear them, remember their words, watch how their emotions speak through the slight curl of their lips and the upturned arch of their eyebrows. Great software needs more than just engineering and science: it needs designers who observe the world to identify features worth building.

I am indebted to Barbara for teaching me how to see. She is integral to the success I’ve had in my career in technology.

A picture that captures what I remember about Barbara

Of all the memories I could share about my college experience, why share this one? Why do I remember it so vividly? What makes this memory profound?

I recently read Carlos Castaneda’s The Active Side of Infinity and resonated with the book’s premise as “a collection of memorable events” Castaneda recounts as an exercise to become a warrior-traveler like the shamans who lived in Mexico in ancient times. Don Juan Matus, a (potentially fictional) Yaqui shaman who plays the character of Castaneda’s guru in most of his work, considers the album “an exercise in discipline and impartiality…an act of war.” On his first pass, Castaneda picks out memories he assumes should be important in shaping him as an individual, events like getting accepted to the anthropology program at UCLA or almost marrying a Kay Condor. Don Juan dismisses them as “a pile of nonsense,” noting they are focused on his own emotions rather than being “impersonal affairs” that are nonetheless “extremely personal.”

The first story Castaneda tells that don Juan deems fit for a warrior-traveler is about Madame Ludmilla, “a round, short woman with bleached-blond hair…wearing a red silk robe with feathery, flouncy sleeves and red slippers with furry balls on top” who performs a grotesque strip tease called “figures in front of a mirror.” The visuals remind me of a dream sequence from a Fellini movie, filled with the voluptuousness of wrinkled skin and sagging breasts and the brute force of the carnivalesque. Castaneda’s writing is noticeably better when he starts telling Madame Ludmilla’s story: there’s more detail, more life. We can picture others, smell the putrid stench of dried vomit behind the bar, relive the event with Castaneda and recognize a truth in what he’s lived, not because we’ve had the exact same experience, but because we’ve experienced something similar enough to meet him in the overtones. “What makes [this story] different and memorable,” explains don Juan, “is that it touches every one of us human beings, not just you.”

This is how I imagined Madame Ludmilla, as depicted in Fellini’s 8 1/2. As don Juan says, we are all “senseless figures in front of a mirror.”

Don Juan calls this war because it requires discipline to see the world this way. Day in and day out, structures around us bid us to focus our attention on ourselves, to view the world through the prism of self-improvement and self-criticism: What do I want from this encounter? What does he think of me? When I took that action, did she react with admiration or contempt? Is she thinner than I am? Look at her thighs in those pants - if I keep eating desserts the way I do, my thighs will start to look like that too. I’ve fully adopted the growth mindset and am currently working on empathy: in that last encounter, I would only give myself a 4/10 on my empathy scale. But don’t you see that I’m an ESFJ? You have to understand my actions through the prism of my self-revealed personality guide! It’s as if we live in a self-development petri dish, where experiences with others are instruments and experiments to make us better. Everything we live, everyone we meet, and everything we remember gets distorted through a particular analytical prism: we don’t see and love others, we see them through the comparative machine of the pre-frontal cortex, comparing, contrasting, categorizing, evaluating them through the prism of how they help or hinder our ability to become the future self we aspire to become.

Warrior-travelers like don Juan fight against this tendency. Collecting an album of memorable events is an exercise in learning how to live differently, to change how we interpret our memories and first-person experiences. As non-warriors, we view memories as scars, events that shape our personality and make us who we are. As warriors, we view ourselves as instruments and vessels to perceive truths worth sharing, where events just so happen to happen to us so we can feel them deeply enough and register the minute details required to share them vividly with others. Warriors are instruments of the universe, vessels for the universe to come to know itself. We can’t distort what others feel because we want them to like us or act a certain way because of us: we have to see others for who they are, make space for negative and positive emotions. What matters isn’t that we improve or succeed, but that we increase the range of what’s perceivable. Only then can we transmit information with the force required to heal or inspire. Only then are we fearless.

Don Juan’s ways of seeing and being weren’t all new to me (although there were some crazy ideas of viewing people as floating energy balls). There are sprinklings of my quest to live outside the self in many posts on the blog. Rather, The Active Side of Infinity helped me clarify why I share first-person stories in the first place. I don’t write to tell the world about myself or share experiences in an effort to shape my identity. This isn’t catharsis. I write to be a vessel, a warrior-traveller. To share what I felt and saw and smelled and touched as I lived experiences that I didn’t know would be important at the time but that have managed to stick around, like Argos, always coming back, somehow catalyzing feelings of love and gratitude as intense today as they were when I first experienced them. To use my experiences to illustrate things we are all likely to experience in some way or another. To turn memories into stories worth sharing, with details concrete enough that you, reader, can feel them, can relate to them, and understand a truth that, ill-defined and informal though it may be, is searing in its beauty.

This post features two excerpts from my warrior-traveler album, both from my time as an undergraduate at the University of Chicago. I ask myself: if I were speaking to someone for the first time and they asked me to tell them about myself, starting in college, would I share these memories? Likely not. But it’s worthwhile to wonder if doing so might change the world for the good.


When I attended the University of Chicago, very few professors gave students long reading assignments for the first class. Some would share a syllabus, others would circulate a few questions to get us thinking. No one except Loren Kruger expected us to read half of Anna Karenina and be prepared to discuss Tolstoy’s use of literary form to illustrate 19th-century Russian class structures and ideology.

Loren was tall and big boned. A South African, she once commented on J.M. Coetzee’s startling ability to wield power through silence. She shared his quiet intensity, demanded such rigor and precision in her own work that she couldn’t but demand it from others. The tiredness of the old world laced her eyes, but her work was about resistance; she wrote about Brecht breaking boundaries in theater, art as an iron-hot rod that could shed society’s tired skin and make room for something new. She thought email destroyed intimacy because the virtual distance emboldened students to reach out far more frequently than when they had to brave a face-to-face encounter. About fifteen students attended the first class. By the third class, there were only three of us. With two teaching assistants (a French speaker and a German speaker), the student:teacher ratio became one:one.[2]

A picture that captures what I remember about Loren

Loren intimidated me, too. The culture at the University of Chicago favored critical thinking and debate, so I never worried about whether my comments would offend others or come off as bitchy (at Stanford, sadly, this was often the case). I did worry about whether my ideas made sense. Being the most talkative student in a class of three meant I was constantly exposed in Loren’s class, subjecting myself to feedback and criticism. She criticized openly and copiously, pushing us for precision, depth, insight. It was tough love.

The first thing Loren taught me was the importance of providing concrete examples to test how well I understood a theory. We were reading Karl Marx, either The German Ideology or the first volume of Das Kapital.[3] I confidently answered Loren’s questions about the text, reshuffling Marx’s words or restating what he’d written in my own words. She then asked me to provide a real-world example of one of his theories. I was blank. Had no clue how to answer. I’d grown accustomed to thinking at a level of abstraction, riding text like a surfer rides the top of a wave without grounding the thoughts in particular examples my mind could concretely imagine.[4] The gap humbled me, changed how I test whether I understand something. This happens to be a critical skill in my current work in technology, given how much marketing and business language is high-level and general: teams think they are thinking the same thing, only to realize that with a little more detail they are totally misaligned.[5]

We wrote midterm papers. I don’t remember what I wrote about but do remember opening the email with the grade and her comments, laptop propped on my knees and back resting against the powder-blue wall in my bedroom off the kitchen in the apartment on Woodlawn Avenue. B+. “You are capable of much more than this.” Up rang my old friend impostor syndrome: no, I’m not, what looks like eloquence in class is just a sham, she’s going to realize I’m not what she thinks I am, useless, stupid, I’ll never be able to translate what I can say into writing. I don’t know how. Tucked behind the fiddling furies whispered the faint voice of reason: You do remember that you wrote your paper in a few hours, right? That you were rushing around after the house was robbed for the second time and you had to move?

Before writing our final papers, we had to submit and receive feedback on a formal prospectus rather than just picking a topic. We’d read Frantz Fanon’s The Wretched of the Earth and I worked with Dustin (my personal TA) to craft a prospectus analyzing Gillo Pontecorvo’s Battle of Algiers in light of some of Fanon’s descriptions of the experience of colonialism.[7]

Once again, Loren critiqued it harshly. This time I panicked. I didn’t want to disappoint her again, didn’t want the paper to confirm to both of us that I was useless, incompetent, unable to distill my thinking into clear and cogent writing. The topic was new to me and out of my comfort zone: I wasn’t an expert in negritude or post-colonial critical theory. I wrote her a desperate email suggesting I write about Baudelaire and Adorno instead. I’d written many successful papers about French Romanticism and Symbolism and was on safer ground.

Ali La Pointe, the martyred revolutionary in The Battle of Algiers

Her response to my anxious plea was one of the more meaningful interactions I’ve ever had with a professor.

Katie, stop thinking about what you’re going to write and just write. You are spending far too much energy worrying about your topic and what you might or might not produce. I am more than confident you are capable of writing something marvelous about the subject you’ve chosen. You’ve demonstrated that to me over the quarter. My critiques of your prospectus were intended to help you refine your thinking, not push you to work on something else. Just work!

I smiled and breathed a sigh of relief. No professor had ever said that to me before. Loren had paid attention, noticed symptoms of anxiety but didn’t placate or coddle me. She remained tough because she believed I could improve. Braved the mania. This interaction has had a longer-lasting impact on me than anything I learned about the subject matter in her class. I can call it to mind today, in an entirely different context of activity, to galvanize myself to get started when I’m anxious about a project at work.

The happiest moments writing my final paper about the Battle of Algiers were the moments describing what I saw in the film. I love using words to replay sequences of stills, love interpreting how the placement of objects or people in a still creates an emotional effect. My knack for doing so stems back to what I learned in Art History 101. I think I got an A on the paper. I don’t remember or care. What stays with me is my gratitude to Loren for not letting me give up, and the clear evidence she cared enough about me to put in the work required to help me grow.


[1] This isn’t the first time things I learned in Barbara’s class have made it into my blog. The objet trouvé exercise inspired a former blog post.

[2] I ended up having my own private teaching assistant, a French PhD named Dustin. He told me any self-respecting comparative literature scholar could read and speak both French and German fluently, inspiring me to spend the following year in Germany.

[3] I picked up my copy of The Marx-Engels Reader (MER) to remember what text we read in Loren’s class. I first read other texts in the MER in Classics of Social and Political Thought, a social sciences survey course that I took to fulfill a core requirement (similar to Barbara’s Art History 101) my sophomore year. One thing that leads me to believe we read The German Ideology or volume one of Das Kapital in Loren’s class is the difference in my handwriting between years two and four of college. In year two, my handwriting still had a round playfulness to it. The letters are young and joyful, but look like they took a long time to write. I remember noticing that my math professors all seemed to adopt a more compact and efficient font when they wrote proofs on the chalkboard: the a’s were totally sans-serif, loopless. Letters were small. They occupied little space and did what they could not to draw attention to themselves so the thinker could focus on the logic and ideas they represented. I liked those selfless a’s and deliberately changed my handwriting to imitate my math professors. The outcome shows in my MER. I apparently used to like check marks to signal something important: they show up next to straight lines illuminating passages to come back to. A few great notes in the margins are: “Hegelian->Too preoccupied w/ spirit coming to itself at basis…remember we are in (in is circled) world of material” and “Inauthenticity->Displacement of authentic action b/c always work for later (university/alienation w/ me?)”

[4] There has to be a ton of analytic philosophy ink spilled on this question, but it’s interesting to think about what kinds of thinking are advanced by pure formalisms that would be hampered by ties to concrete, imaginable referents and what kinds of thinking degrade into senseless mumbo jumbo without ties to concrete, imaginable referents. Marketing language and politically correct platitudes definitely fall into category two. One contemporary symptom of not knowing what one’s talking about is the abuse of the demonstrative adjective that. Interestingly enough, such demonstrative abusers never talk about thises, they only talk about thats. This may be used emphatically and demonstratively in a Twitter or Facebook conversation: when someone wholeheartedly supports a comment, critique, or example of some point, they’ll write This as a stand-alone sentence with super-demonstrative reference power, power strong enough to encompass the entire statement made before it. That’s actually ok. It’s referring to one thing, the thing stated just above it. It’s dramatic but points to something the listener/reader can also point to. The problem with the abused that is that it starts to refer to a general class of things that are assumed, in the context of the conversation, to have some mutually understood functional value: “To successfully negotiate the meeting, you have to have that presentation.” “Have that conversation — it’s the only way to support your D&I efforts!” Here, the listener cannot imagine any particular that that these words denote. The speaker is pointing to a class of objects she assumes the listener is also familiar with and agrees exist. A conversation about what? A presentation that looks like what? There are so many different kinds and qualities of conversations or presentations that could fit the bill. I hear this used all the time and cringe a little inside every time. I’m curious to know if others have the same reaction I do, or if I should update my grammar police to accept what has become common usage. Leibniz, on the other hand, was a staunch early modern defender of cogitatio caeca (Latin for blind thought), which referred to our ability to calculate and manipulate formal symbols and create truthful statements without requiring the halting step of imagining the concrete objects these symbols refer to. This, he argued against conservatives like Thomas Hobbes, was crucial to advance mathematics. There are structural similarities in the current debates about explainability of machine learning algorithms, even though that which is imagined or understood may lie on a different epistemological, ontological, and logical plane.

[5] People tell me that one reason they like my talks about machine learning is that I use a lot of examples to help them understand abstract concepts. Many talks are structured like this one, where I walk an audience through the decisions they would have to make as a cross-functional team collaborating on a machine learning application. The example comes from a project former colleagues worked on. I realized over the last couple of years that no matter how much I like public speaking, I am horrified by the prospect of specializing in speaking or thought leadership and not being actively engaged in the nitty-gritty, day-to-day work of building systems and observing first-person how people interact with them. I believe the existential horror stems from my deep-seated beliefs about language and communication, from my deep-seated discomfort with words that don’t refer to anything. Diving into this would be worthwhile: there’s a big difference between the fictional imagination, the ability to bring to life the concrete particularity of something or someone that doesn’t exist, and the vagueness of generalities lacking reference. The second does harm and breeds stereotypes. The first is not only potent in the realm of fiction, but, as my fiancé Mihnea is helping me understand, may well be one of the master skills of the entrepreneur and executive. Getting people aligned and galvanized around a vision can only occur if that vision is concrete, compelling, and believable. An imaginable state of the world we can all inhabit, even if it doesn’t exist yet. A tractable as if that has the power to influence what we do and how we behave today so as to encourage its creation and possibility.[6]

[6] I believe this is the first time I’ve had a footnote referring to another footnote (I did play around with writing an incorrigibly long photo caption in Analogue Repeaters). Funny this ties to the footnote just above (hello there, dear footnote!) and even funnier that footnote 4 is about demonstrative reference, including the this discursive reference. But it’s seriously another thought so I felt it merited its own footnote as opposed to being the second half of footnote 5. When I sat down to write this post, I originally planned to write about the curious and incredible potency of imagined future states as tools to direct action in the present. I’ve been thinking about this conceptual structure for a long time, having written about it in the context of seventeenth-century French philosophy, math, and literature in my dissertation. The structure has been around since the Greeks (Aristotle references it in Book III of the Nicomachean Ethics) and is used in startup culture today. I started writing a post on the topic in August, 2018. Here’s the text I found in the incomplete draft when I reopened it a few days ago:

A goal is a thinking tool.

A good goal motivates through structured rewards. It keeps people focused on an outcome, helps them prioritize actions and say no to things, and stretches them to work harder than they would otherwise. Wise people say that a good goal should be about 80% achievable. Wise leaders make time to reward and recognize inputs and outputs.

A great goal reframes what’s possible. It is a moonshot and requires the suspension of disbelief, the willingness to quiet all the we can’ts and believe something surreal, to sacrifice realism and make room for excellence. It assumes a future outcome that is so outlandish, so bold, that when you work backwards through the series of steps required to achieve it, you start to do great things you wouldn’t have done otherwise. Fools say that it doesn’t matter if you never come close to realizing a great goal, because the very act of supposing it could be possible and reorienting your compass has already resulted in concrete progress towards a slightly more reasonable but still way above average outcome.

Good goals create outcomes. Great goals create legacies.

This text alienates me. It reminds me of an inspirational business book: the syncopation and pace seem geared to stir pathos and excitement. How curious that the self evolves so quickly, that the I looking back on the same I’s creations of a few months ago feels like she is observing a stranger, someone speaking a different language and inhabiting a different world. But of course that’s the case. Of course being in a different environment shapes how one thinks and what one sees. And the lesson here is not one of fear around instability of character: it’s one that underlines the crucial importance of context, the crucial importance of taking care to select our surroundings so we fill our brains with thoughts and words that shape a world we find beautiful, a world we can call home. The other point of this footnote is a comment on the creative process. Readers may have noted the quotation from Pascal that accompanies all my posts: “The last thing one settles in writing a book is what one should put in first.” The joy of writing, for me, as for Mihnea and Kevin Kelly and many others, lies in unpacking an intuition, sitting down in front of a silent wall and a silent world to try to better understand something. I’m happiest when, writing fast, bad, and wrong to give my thoughts space to unfurl, I discover something I wouldn’t have discovered had I not written. Writing creates these thoughts. It’s possible they lie dormant with potential inside the dense snarl of an intuition and possible they wouldn’t have existed otherwise. Topic for another post. With this post, I originally intended to use the anecdote about Stafford’s class to show the importance of using concrete details, to illustrate how training in art history may actually be great training for the tasks of a leader and CEO. But as my mind circled around the structure that would make this kind of intro make sense, I was called to write about Castaneda, pulled there by my emotions and how meaningful these memories of Barbara and Loren felt. I changed the topic. Followed the path my emotions carved for me. The process was painful and anxiety-inducing. But it also felt like the kind of struggle I wanted to undertake and live through in the service of writing something worth reading, the purpose of my blog.

[7] About six months ago, I learned that an Algerian taxi driver in Montréal was the nephew of Ali La Pointe, the revolutionary martyr hero in Battle of Algiers. It’s possible he was lying, but he was delighted by the fact that I’d seen and loved the film and told me about the heroic deeds of another uncle who didn’t have the same iconic stardom as Ali. Later that evening I attended a dinner hosted by Element AI and couldn’t help but tell Yoshua Bengio about the incredible conversation I had in the taxi cab. He looked at me with confusion and discomfort, put somewhat out of place and mind by my not accommodating the customary rules of conversation with acquaintances.

The featured image is the Syndics of the Drapers’ Guild, which Rembrandt painted in 1662. The assembled drapers assess the quality of different weaves and cloths, presumably, here, assessing the quality of the red rug splayed over the table. In Ways of Seeing, John Berger writes about how oil paintings signified social status in the early modern period. Having your portrait done showed you’d made it, the way driving a Porsche around town would today. When I mentioned that the collars seemed a little out of place, Barbara Stafford found the detail relevant precisely because of the plausibility that Rembrandt was including hints of disdain and critique in the commissioned portraits, mocking both his subjects and his dependence on them to get by.

Innovation as a Dialectic

These days, innovation is not an opportunity but a mandate. By innovate, I mean apply technology to do old things differently (faster, cheaper, more efficiently) or do new things that were not previously possible. There are many arguments one could put forth to critique our unshaken faith in progress and growth. Let’s table these critiques and take it as given that innovation is a good thing. Let’s also restrict our scope to enterprise innovation rather than broad consumer or societal innovation.

I’ve probably seen over 100 different organizations’ approaches to “digital” innovation over the past year in my role leading business development for Fast Forward Labs. While there are general similarities, often influenced by the current popularity of lean startup methodology or design thinking, no two organizations approach innovation identically. Differences arise from the simple fact that organizations are made of people; people differ; individual motives, thoughts, and actions combine together in complex ways to create emergent behavior at the group level; amazingly interesting and complex things result (it’s a miracle that a group of even 50 people can work together as a unit to generate value that greatly exceeds the aggregate value of each individual contributor); past generations of people in organizations pass down behavior and habits to future generations through a mysterious process called culture; developments in technology (amidst other things) occur outside of the system* that is the organization, and then the organization does what it can to tweak the system to accept (or reject) these external developments; some people feel threatened and scared, others are excited and stimulated; the process never ends, and technology changes faster than people’s ability to adopt it…

Observing all these environments, and observing them with the keenness and laser-focused attention that only arises when one wants to influence their behavior, when one must empathize deeply enough to become the person one is observing, to adopt their perspective nearly completely - their ambitions, their fears, their surprises, their frustrations - so as to be able to then convince them that spending money on our company’s services is a sound thing to do (but it’s not only mercenary: people are ends in themselves, not a means to an end. Even if I fail to sell, I am always motivated and energized by the opportunity to get to know yet another individual’s mind and heart), I’ve come to accept a few axioms:

  1. Innovation is hard. Way harder than all the meaningless marketing makes it seem.
  2. There is no one way to innovate. The approach depends on organizational culture, structure, history, product, and context.
  3. Inventions are not solutions. Just because something is technically possible doesn’t mean people will appreciate its utility clearly enough to want to change their habits.
  4. Most people adopt new technologies through imitation, not imagination, often following the normal distribution of Geoffrey Moore’s technology adoption lifecycle.
  5. We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. (Bill Gates)

Now, research in artificial intelligence is currently moving at a fast clip. At least once a month some amazing theoretical breakthrough (like systems beating Smash Brothers or Texas Hold’em Poker champions) is made by a university computer science department or the well-funded corporate research labs that have virtually replaced academia (this is worrisome for inequality of income and opportunity, and worthy of further discussion in a future post). Executives at organizations that aren’t Google, Facebook, Amazon, Apple, and Microsoft see the news and ask their tech leadership whether any of this is worth paying attention to, if it’s just a passing fad for geeks or if it’s poised to change the economy as we know it. If they do take AI seriously, the next step is to figure out how to go about applying it in their businesses. That’s where things get interesting.

There are two opposite ways to innovate with data and algorithms. The first is a top down approach that starts with new technological capabilities and looks for ways to apply them. The second is a bottom up approach that starts with an existing business problem or pain point and looks for possible technical solutions to the problem.

Top down approaches to innovation are incredibly exciting. They require unbridled creativity, imagination, exploration, coupled with the patience and diligence to go from idea to realized experiment, complete with whatever proof is required to convince others that a theoretical milestone has actually been achieved. Take computer vision as an example. Just a few years ago, the ability of computers to automatically recognize the objects in images - to be shown a picture without any metadata and say, That’s a cat! That’s a dog! That’s a girl riding a bicycle in the park on a summer day! (way more complicated technically) - was mediocre at best. Researchers achieved ok performance using classic classification algorithms, but nothing to write home about. But then, with a phase shift worthy of a Kuhnian scientific revolution, the entire community standardized on a different algorithmic technique that had been shunned by most of the machine learning community for some time. These artificial neural networks, whose underlying architecture is loosely inspired by the synapses that connect neurons in our brain’s visual cortex, did an incredible job transforming image pixels into series of numbers called vectors that could then be twisted, turned, and processed until they reliably matched with linguistic category labels. The results captivated popular press attention, especially since they were backed by large marketing machines at companies like Google and Facebook (would AI be as popular today if research labs were still tucked away in academic ivory towers?). And since then we at Fast Forward Labs have helped companies build systems that curate video content for creatives at publishing houses and identify critical moments in surgical processes to improve performance of remote operators of robotic tools.
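If I had to sketch that pixels-to-vectors-to-labels pipeline in code, it might look something like the toy example below, which stands in scikit-learn’s small digits dataset and a tiny multi-layer perceptron for the enormous convolutional networks the research community actually converged on (the dataset, model size, and hyperparameters are illustrative assumptions, not production choices):

```python
# A toy version of "pixels -> vector -> label": flatten each 8x8 grayscale
# digit into a 64-dimensional vector and train a small neural network to
# map those vectors onto category labels (the digits 0 through 9).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                # 1,797 images, 8x8 pixels each
X = digits.images.reshape(len(digits.images), -1)     # pixels become vectors of length 64
y = digits.target                                     # the labels the vectors must match

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One small hidden layer is enough for this toy problem; real image models
# are deep convolutional networks trained on millions of photographs.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```

The point isn’t the accuracy number; it’s that the entire “recognition” step reduces to arithmetic on vectors.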

That said, many efforts to go from capability to application fail. First, most people struggle to draw analogies between one domain (recognizing cats on the internet) and another (classifying whether a phone attempting to connect to wifi is stationary or moving in a vehicle). The analogies exist at a level of abstraction most people don’t think at - you have to view the data as a vector and think about the properties of the vector to evaluate whether the algorithm would be a good tool for the job - and that seems so different from our standard perceptual habits. Second, it’s technically difficult to scale a mathematical model to actually work for a real-world problem, and many data scientists and researchers lack the software engineering skills required to build real products. Third, drawing on Geoff Moore’s adoption lifecycle, many people lack patience to work with early technologies that don’t immediately do what they’re supposed to do. Applying a new technology often requires finding a few compassionate early adopters who are willing to give feedback to improve a crappy tool. Picking the wrong early users can kill a project. Fourth, organizations that are risk averse like to wait until their peers have tested and tried a new thing before they go about disrupting day-to-day operations. As they say, no one gets fired for hiring McKinsey or IBM. And finally, people are busy. It’s hard to devote the time and attention needed to understand a new capability well enough to envision its potential applicability. Most people can barely keep up with their current workload, and end up deprioritizing possibility to keep up with yesterday’s tasks.
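To make the “view the data as a vector” move concrete, here’s a hypothetical sketch in which a flattened image patch and a handful of invented wifi-connection features end up as the same kind of object, a fixed-length array of numbers, which is why a single family of algorithms can serve both domains (every feature name and value below is made up for illustration):

```python
import numpy as np

# Domain 1: a tiny 4x4 grayscale image patch, flattened into a vector.
image_patch = np.array([
    [0.0, 0.1, 0.9, 0.8],
    [0.0, 0.2, 0.9, 0.7],
    [0.1, 0.3, 0.8, 0.6],
    [0.0, 0.1, 0.7, 0.5],
])
image_vector = image_patch.ravel()                  # shape (16,)

# Domain 2: hypothetical features describing a phone's wifi connection,
# e.g. signal-strength variance, handoff count, estimated speed.
wifi_vector = np.array([3.2, 5.0, 27.4, 0.8, 1.0])  # shape (5,)

# Both are just 1-D float arrays. Any classifier that accepts a matrix of
# row vectors treats them identically once each domain's rows share a
# common length - that's the level of abstraction the analogy lives at.
for name, vec in [("image", image_vector), ("wifi", wifi_vector)]:
    print(name, vec.shape, vec.dtype)
```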

Which is why it can often be more effective to approach innovation by solving a real problem instead of inventing a problem for a solution. This bottom up approach is more closely aligned with design thinking. The focus is on the business: what people do, how they do it, and where technology may be able to help them do it better. The approach works best when led by a hybrid business-technical person whose job it is to figure out which problems to solve - based on predicted impact and relatively low technical difficulty so progress can be made quickly - and muster the right technology to solve them - by building something internally or buying a third-party product. People tend to have more emotional skin in the game with innovation driven by problem solving because they feel greater ownership: they know the work intimately, and may be motivated by the recognition of doing something better and faster. The risk, particularly when using technology to automate a current task, is that people will fear changing their habits (although we are amazingly adaptable to new tools) or, worse, that they will be replaced by a machine.

The core difficulty with innovating by solving problems is that the best solution to the most valuable business problem is almost always technically boring. Technical research teams want to explore and make cool stuff; they don’t want to spend their time and energy building a model with math they learned in undergrad for a problem that makes their soul cringe. While it seems exciting to find applications for deep learning in finance, for example, linear regression models are likely the best technical solution for 85% of problems. They are easy to build and interpretable, making it easier for users to understand why a tool outputs the answers it does, easier for developers to identify and fix bugs, and easier for engineers to deploy the model on existing commodity hardware without having to invest in new servers with GPUs. The other risk of starting with business problems is that it can lead to mediocre solutions. Lacking awareness of what’s possible, teams may settle for what they know rather than what’s best. If you’ve never seen an automobile, you’ll count how many horses you need to make it across the country faster.
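To show why the boring model so often wins, here’s a hedged sketch of a linear regression on invented lending-style data; the features and numbers are fabricated, but the interpretability argument is exactly this step of reading the coefficients directly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: each row is (years_of_history, income_in_10k, late_payments)
# and the target is some internal risk score we want to predict.
X = np.array([
    [2, 4.0, 1],
    [5, 6.5, 0],
    [1, 3.0, 3],
    [8, 9.0, 0],
    [4, 5.5, 2],
    [7, 7.0, 1],
])
y = np.array([55, 78, 40, 92, 63, 84])

model = LinearRegression().fit(X, y)

# The whole model is a handful of numbers a stakeholder can sanity-check:
# each coefficient is the change in predicted score per unit of that feature.
feature_names = ["years_of_history", "income_in_10k", "late_payments"]
for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: {coef:+.2f}")
print("intercept:", round(model.intercept_, 2))
```

No GPUs, no black box: when the output looks wrong, you can point to the coefficient responsible.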

As the title suggests, therefore, the best approach is to treat innovation as a dialectic capable of balancing creative capabilities with pragmatic solutions. Innovation is a balancing act uniting contradictory forces: the promise of tomorrow against the pull of today, the whimsy of technologists against the stubbornness of end users, the curiosity of potential against the calculation of risk. These contradictions are the growing pains of innovation, and the organizations that win are those that embrace them as part of growing up.

*Even an internal R&D department may be considered outside the system of the organization, particularly if R&D acts as a separate and independent unit that is not integrated into day-to-day operations. What an R&D team develops still has to be integrated into an existing equilibrium.

 

Progress and Relative Definitions

Over the past year, I’ve given numerous talks attempting to explain what artificial intelligence (AI) is to non-technical audiences. I’d love to start these talks with a solid, intuitive definition for AI, but have come to believe a good definition doesn’t exist. Back in September, I started one talk by providing a few definitions of intelligence (plain old, not artificial - a distinction which itself requires clarification) from people working in AI:

“Intelligence is the computational part of the ability to achieve goals in the world.” John McCarthy, a 20th-century computer scientist who helped found the field of AI

“Intelligence is the use of information to make decisions which save energy in the pursuit of a given task.” Neil Lawrence, a younger professor at the University of Sheffield

“Intelligence is the quality that enables an entity to function appropriately and with foresight in its environment.” Nils Nilsson, an emeritus professor from Stanford’s engineering department

I couldn’t help but accompany these definitions with Robert Musil’s maxim definition of stupidity (if not my favorite author, Musil is certainly up there in the top 10):

“Act as well as you can and as badly as you must, but in doing so remain aware of the margin of error of your actions!” Robert Musil, a 20th century Austrian novelist

There are other definitions for intelligence out there, but I intentionally selected these four because they all present intelligence as related to action, as related to using information wisely to do something in the world. Another potential definition of intelligence would be to make truthful statements about the world, the stuff of the predicate logic we use to say that an X is an X and a Y is a Y. Perhaps sorting manifold, complex inputs into different categories, the tasks of perception and the mathematical classifications that mimic perception, is a stepping stone to what eventually becomes using information to act.

At any rate, there are two things to note.

First, what I like about Musil’s definition, besides the wonderfully deep moral commentary of sometimes needing to act as badly as you must, is that he includes as part of his definition of stupidity (see intelligence) a call to remain aware of margins of error. There is no better training in uncertainty than working in artificial intelligence. Statistics-based AI systems (which means most contemporary systems) provide approximate best guesses, playing Marco Polo, as my friend Blaise Aguera y Arcas says, until they get close enough for government work; some systems output “maximum likely” answers, and others (like the probabilistic programming tools my colleagues at Fast Forward Labs just researched) output full probability distributions, with affiliated confidence rates for each point in the distribution, which we then have to interpret to gauge how much we should rely on the AI to inform our actions. I’ll save other thoughts about the strange, unintuitive nature of thinking probabilistically for another time (hopefully in a future post about Michael Lewis’s latest book, The Undoing Project).
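For readers who want the “maximum likely answer” versus “full distribution” distinction in miniature, here’s a sketch of Bayesian inference for a biased coin, with invented data and a uniform prior; real probabilistic programming tools do this automatically for far richer models:

```python
from scipy import stats

# Invented observations: 7 heads out of 10 flips of a coin of unknown bias.
heads, tails = 7, 3

# With a uniform Beta(1, 1) prior, the posterior over the heads-probability
# is Beta(1 + heads, 1 + tails).
posterior = stats.beta(1 + heads, 1 + tails)

# A "maximum likely" style point answer throws the uncertainty away...
point_estimate = heads / (heads + tails)

# ...while the full distribution lets us report how confident we should be.
low, high = posterior.ppf(0.025), posterior.ppf(0.975)

print(f"point estimate: {point_estimate:.2f}")
print(f"posterior mean: {posterior.mean():.2f}")
print(f"95% credible interval: [{low:.2f}, {high:.2f}]")
```

The point estimate says 0.70 and stops; the distribution admits the coin could plausibly be anywhere in a wide band, which is exactly the margin of error Musil asks us to keep in view.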

Second, these definitions of intelligence don’t help people understand AI. They may be a step above the buzzword junk that litters the internet (all the stuff about pattern recognition magic that will change your business, which leads people outside the field to believe that all machine learning is unsupervised, whereas unsupervised learning is an active and early area of research), but they don’t leave audiences feeling like they’ve learned anything useful and meaningful. I’ve found it’s more effective to walk people through some simple linear or logistic regression models to give them an intuition of what the math actually looks like. They may not leave with minds blown away at the possibilities, but they do leave with the confident clarity of having learned something that makes sense.
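In that spirit, the entire math of a logistic regression prediction fits in a few lines: a weighted sum of the inputs pushed through a sigmoid. The weights and features below are invented purely to show the shape of the computation, not any particular model:

```python
import math

def predict_probability(features, weights, bias):
    """Logistic regression in one line of math: sigmoid(w . x + b)."""
    weighted_sum = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Invented example: two features (say, visits this month and days since
# last purchase) with hand-picked weights, predicting "will buy again".
features = [4.0, 12.0]
weights = [0.8, -0.15]
bias = -1.0

p = predict_probability(features, weights, bias)
print(f"predicted probability: {p:.2f}")   # about 0.60
```

Once an audience sees that the “learning” is just finding good values for those weights, the mystique drops away and the conversation can turn to what the features and labels should be.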

As it feels like a fruitless task to actually define AI, I (and my colleague Hilary Mason, who used this first) instead like to start my talks with a teaser definition to get people thinking:

“AI is whatever we can do that computers can’t…yet.” Nancy Fulda, a science fiction writer, on Writing Excuses

This definition doesn’t do much to help audiences actually understand AI either. But it does help people understand why it might not make sense to define a given technology category - especially one advancing so quickly - in the first place. For indeed, an attempt to provide specific examples of the things AI systems can and cannot do would eventually - potentially even quickly - be outdated. AI, as such, lies within the horizons of near future possibility. Go too far ahead and you get science fiction. Go even further and you get madness or stupidity. Go too far behind and you get plain old technology. Self-driving cars are currently an example of AI because we’re just about there. AlphaGo is an example of AI because it came quicker than we thought. Building a system that uses a statistical language model that’s not cutting edge may be AI for the user of the system but feel like plain old data science to the builder of the system, as for the user it’s on the verge of the possible, and for the builder it’s behind the curve of the possible. As Gideon Lewis-Kraus astutely observed in his very well written exposé on Google’s new translation technology, Google Maps would seem like magic to someone in the 1970s even though it feels commonplace to us today.

So what’s the point? Here’s a stab. It can be challenging to work and live in a period of instability, when things seem to be changing faster than definitions - and corollary social practices like policies and regulations - can keep up with. I personally like how it feels to work in a vortex of messiness and uncertainty (despite my anxious disposition). I like it because it opens up the possibility for relativist, non-definitions to be more meaningful than predicate truths, the possibility to realize that the very technology I work on can best be defined within the relative horizons of expectation. And I think I like that because (and this is a somewhat tired maxim but hey, it still feels meaningful) it’s the stuff of being human. As Sartre said and as Heidegger said before him, we are beings for whom existence precedes essence. There is no model of me or you sitting up there in the Platonic realm of forms that gets realized as we live in the world. Our history is undefined, leading to all sorts of anxieties, worries, fears, pain, suffering, all of it, and all this suffering also leads to the attendant feelings of joy, excitement, wonder (scratch that, as I think wonder is aligned with perception), and then we look back on what we’ve lived and the essence we’ve become and it feels so rich because it’s us. Each of us Molly Bloom, able to say yes yes and think back on Algeciras, not necessarily because Leo is the catch of the century, but because it’s with him that we’ve spent the last 20 years of our lives.

The image is Paul Klee’s Angelus Novus, which Walter Benjamin described as “an angel looking as though he is about to move away from something he is fixedly contemplating,” an angel looking back at the chain of history piling ruin upon ruin as the storm of progress hurls him into the future.