Artificial Intelligence and the Fall of Eve

We seem to need foundational narratives.

Big picture stories that make sense of history's bacchanalian march into the apocalypse.

Broad-stroke predictions about how artificial intelligence (AI) will shape the future of humanity made by those with power arising from knowledge, money, and/or social capital.[1] Knowledge, as there still aren't actually that many real-deal machine learning researchers in the world (despite the startling growth in paper submissions to conferences like NIPS), people who get excited by linear algebra in high-dimensional spaces (the backbone of deep learning) or the patient cataloguing of assumptions required to justify a jump from observation to inference.[2] Money, as income inequality is a very real thing (and a thing too complex to say anything meaningful about in this post). For our purposes, money is a rhetorical amplifier, be that from a naive fetishism of meritocracy, where we mistakenly align wealth with the ability to figure things out better than the rest of us,[3] or cynical acceptance of the fact that rich people work in private organizations or public institutions with a scope that impacts a lot of people. Social capital, as our contemporary Delphic oracles spread wisdom through social networks, likes and retweets governing what we see and influencing how we see (if many people, in particular those we want to think like and be like, like something, we'll want to like it too), our critical faculties on amphetamines as thoughtful consideration and deliberation mean missing the boat, gut invective the only response fast enough to keep pace before the opportunity to get a few more followers passes us by, Delphi sprouting boredom like a 5 o'clock shadow, already on to the next big thing. Ironic that big picture narratives must be made so hastily in the rat race to win mindshare before another member of the Trump administration gets fired.

Most foundational narratives about the future of AI rest upon an implicit hierarchy of being that has been around for a long time. Though proffered today by futurists and atheists, the hierarchy dates back to the Great Chain of Being that medieval Christian theologians like Thomas Aquinas built to cut the physical and spiritual world into analytical pieces, applying Aristotelian scientific rigor to spiritual topics.

Aquinas' hierarchy of being, from a blog by a fellow named David Haines, whom I know nothing about but whose site seems to be devoted to philosophy and religion.

The hierarchy provides a scale from inanimate matter to immaterial, pure intelligence. Rocks don't get much love on the great chain of being, even if they carry the wisdom and resilience of millions of years of existence, contain, in their sifting, shifting grains of sand, the secrets of fragility and the whispered traces of tectonic plates and sunken shores. Plants get a little more love than rocks, and apparently Venus flytraps (plants that resemble animals?) get more love than, say, yeast (if you're a fellow member of the microbiome-issue club, you, like me, are in total awe of how yeast are opportunistic sons of bitches who sense the slightest shift in pH and invade vulnerable tissue with the collective force of stealth guerrilla warriors). Humans are hybrids, half animal, half rational spirit, our sordid materiality, our silly mortality, our mechanical bodies ever weighing us down and holding us back from our real potential as brains in vats or consciousnesses encoded to live forever in the flitting electrons of the digital universe. There are a shit ton of angels. Way more angel castes than people castes. It feels repugnant to demarcate people into classes, so why not project differences we live day in and day out in social interactions onto angels instead? And, in doing so, basically situate civilized aristocrats as closer to God than the lower and more animalistic members of the human race? And then God is the abstract patriarch on top of it all, the omnipotent, omniscient, benevolent patriarch who is also the seat of all our logical paradoxes, made of the same stuff as Gödel's incompleteness theorem, the guy who can be at once father and son, be the circle with the center everywhere and the circumference nowhere, the master narrator who says, don't worry, I got this, sure that hurricane killed tons of people, sure it seems strange that you can just walk into a store around the corner and buy a gun and there are mass shootings all the time, but trust me, if you could see the big picture like I see the big picture, you'd get how this confusing pain will actually result in the greatest good for the most people.

Sandstone in southern Utah, the momentary, coincidental dance of wind and grain petrified into this shape at this moment in time. I’m sure it’s already somewhat different.

I'm going to be sloppy here and not provide hyperlinks to specific podcasts or articles that endorse variations of this hierarchy of being: hopefully you've read a lot of these and will have sparks of recognition with my broad stroke picture painting.[4] But what I see time and again are narratives that depict AI within a long history of evolution moving from unicellular prokaryotes to eukaryotes to slime to plants to animals to chimps to homo erectus to homo sapiens to transhuman superintelligence as our technology changes ever more quickly and we have a parallel data world where we leave traces of every activity in sensors and clicks and words and recordings and images and all the things. These big picture narratives focus on the prefrontal cortex as the crowning achievement of evolution, man distinguished from everything else by his ability to reason, to plan, to overcome the rugged tug of instinct and delay gratification until the future, to make guesses about the probability that something might come to pass in the future and to act in alignment with those guesses to optimize rewards, often rewards focused on self gain and sometimes on good across a community (with variations). And the big thing in this moment of evolution with AI is that things are folding in on themselves, we no longer need to explicitly program tools to do things, we just store all of human history and knowledge on the internet and allow optimization machines to optimize, reconfiguring data into information and insight and action and getting feedback on these actions from the world according to the parameters and structure of some defined task. And some people (e.g., Gary Marcus or Judea Pearl) say no, no, these bottom up stats are not enough, we are forgetting what is actually the real hallmark of our prefrontal cortex, our ability to infer causal relationships between phenomenon A and phenomenon B, and it is through this appreciation of explanation and cause that we can intervene and shape the world to our ends or even fix injustices, free ourselves from the messy social structures of the past and open up the ability to exercise normative agency together in the future (I'm actually in favor of this kind of thinking). So we evolve, evolve, make our evolution faster with our technology, cut our genes crisply and engineer ourselves to be smarter. And we transcend the limitations of bodies trapped in time, transcend death, become angel as our consciousness is stored in the quick complexity of hardware finally able to capture plastic parallel processes like brains. And inch one step further towards godliness, ascending the hierarchy of being. Freeing ourselves. Expanding. Conquering the march of history, conquering death with blood transfusions from beautiful boys, like vampires. Optimizing every single action to control our future fate, living our lives with the elegance of machines.

It’s an old story.

Many science fiction novels feel as epic as Disney movies because they adapt the narrative scaffold of traditional epics dating back to Homer's Iliad and Odyssey and Virgil's Aeneid. And one epic quite relevant for this type of big picture narrative about AI is John Milton's Paradise Lost, the epic to end all epics, the swan song that signaled the shift to the novel, the fusion of Genesis and Rome, an encyclopedia of seventeenth-century scientific thought and political critique as the British monarchy collapsed under the rushing sword of Oliver Cromwell.

Most relevant is how Milton depicts the fall of Eve.

Milton lays the groundwork for Eve’s fall in Book Five, when the archangel Raphael visits his friend Adam to tell him about the structure of the universe. Raphael has read his Aquinas: like proponents of superintelligence, he endorses the great chain of being. Here’s his response to Adam when the “Patriarch of mankind” offers the angel mere human food:

Adam, one Almightie is, from whom
All things proceed, and up to him return,
If not deprav’d from good, created all
Such to perfection, one first matter all,
Indu’d with various forms various degrees
Of substance, and in things that live, of life;
But more refin’d, more spiritous, and pure,
As neerer to him plac’t or neerer tending
Each in thir several active Sphears assignd,
Till body up to spirit work, in bounds
Proportiond to each kind.  So from the root
Springs lighter the green stalk, from thence the leaves
More aerie, last the bright consummate floure
Spirits odorous breathes: flours and thir fruit
Mans nourishment, by gradual scale sublim’d
To vital Spirits aspire, to animal,
To intellectual, give both life and sense,
Fansie and understanding, whence the Soule
Reason receives, and reason is her being,
Discursive, or Intuitive; discourse
Is oftest yours, the latter most is ours,
Differing but in degree, of kind the same.

Raphael basically charts the great chain of being in the passage. Angels think faster than people, they reason in intuitions while we have to break things down analytically to have any hope of communicating with one another and collaborating. Daniel Kahneman's partition between discursive and intuitive thought in Thinking, Fast and Slow had an analogue in the seventeenth century, where philosophers distinguished the slow, composite, discursive knowledge available in geometry and math proofs from the fast, intuitive, social insights that enabled some to size up a room and be the wittiest guest at a cocktail party.

Raphael explains to Adam that, through patient, diligent reasoning and exploration, he and Eve will come to be more like angels, gradually scaling the hierarchy of being to ennoble themselves. But on the condition that they follow the one commandment never to eat the fruit from the forbidden tree, a rule that escapes reason, that is a dictum intended to remain unexplained, a test of obedience.

But Eve is more curious than that, and Satan uses her curiosity to his advantage. In Book Nine, Milton fashions Satan, in his trappings as a snake, as a master orator who preys upon Eve's curiosity to persuade her to eat of the forbidden fruit. After failing to exploit her vanity, he changes strategies and exploits her desire for knowledge, basing his argument on an analogy up the great chain of being:

O Sacred, Wise, and Wisdom-giving Plant,
Mother of Science, Now I feel thy Power
Within me cleere, not onely to discerne
Things in thir Causes, but to trace the wayes
Of highest Agents, deemd however wise.
Queen of this Universe, doe not believe
Those rigid threats of Death; ye shall not Die:
How should ye? by the Fruit? it gives you Life
To Knowledge? By the Threatner, look on mee,
Mee who have touch’d and tasted, yet both live,
And life more perfet have attaind then Fate
Meant mee, by ventring higher then my Lot.
That ye should be as Gods, since I as Man,
Internal Man, is but proportion meet,
I of brute human, yee of human Gods.
So ye shall die perhaps, by putting off
Human, to put on Gods, death to be wisht,
Though threat’nd, which no worse then this can bring.

 

Satan exploits Eve’s mental model of the great chain of being to tempt her to eat the forbidden apple. Mere animals, snakes can’t talk. A talking snake, therefore, must have done something to cheat the great chain of being, to elevate itself to the status of man. So too, argues Satan, can Eve shortcut her growth from man to angel by eating the forbidden fruit. The fall of mankind rests upon our propensity to rely on analogy. May the defenders of causal inference rejoice.[5]

The point is that we’ve had a complex relationship with our own rationality for a long time. That Judeo-Christian thought has a particular way of personifying the artifacts and precipitates of abstract thoughts into moral systems. That, since the scientific revolution, science and religion have split from one another but continue to cross paths, if only because they both rest, as Carlo Rovelli so beautifully expounds in his lyrical prose, on our wonder, on our drive to go beyond the immediately visible, on our desire to understand the world, on our need for connection, community, and love.

But do we want to limit our imaginations to such a stale hierarchy of being? Why not be bolder and more futuristic? Why not forget gods and angels and, instead, recognize these abstract precipitates as the byproducts of cognition? Why not open our imaginations to appreciate the radically different intelligence of plants and rocks, the mysterious capabilities of photosynthesis that can make living matter from sunlight, air, and water (WTF?!?), the communication that occurs in the deep roots of trees, the eyesight that octopuses have all down their arms, the silent and chameleon wisdom of the slit canyons in the southwest? Why not challenge ourselves to greater empathy, to the unique beauty available to beings who die, capsized by senescence and always inclining forward in time?

My mom got me herbs for my birthday. They were little tiny things, and now they look like this! Some of my favorite people working on artistic applications of AI consider tuning hyperparameters to be an act akin to pruning plants in a garden. An act of care and love.

Why not free ourselves of the need for big picture narratives and celebrate the fact that the future is far more complex than we’ll ever be able to predict?

How can we do this morally? How can we abandon ourselves to what will come and retain responsibility? What might we build if we mimic animal superintelligence instead of getting stuck in history’s linear march of progress?

I believe there would be beauty. And wild inspiration.


[1] This note should have been after the first sentence, but I wanted to preserve the rhetorical force of the bare sentences. My friend Stephanie Schmidt, a professor at SUNY Buffalo, uses the concept of foundational narratives extensively in her work about colonialism. She focuses on how cultures subjugated to colonial power assimilate and subvert the narratives imposed upon them.

[2] Yesterday I had the pleasure of hearing a talk by the always-inspiring Martin Snelgrove about how to design hardware to reduce energy when using trained algorithms to execute predictions in production machine learning. The basic operations undergirding machine learning are addition and multiplication: we'd assume multiplying takes more energy than adding, because multiplying is adding in sequence. But Martin showed how it all boils down to how far electrons need to travel. The broad-stroke narrative behind why GPUs are better for deep learning is that they shuffle electrons around criss-cross structures that look like matrices as opposed to putting them into the linear straitjacket of the CPU. But the geometry can get more fine-grained and complex, as the 256×256 array in Google's TPU shows. I'm keen to dig into the most elegant geometry for Bayesian inference and sampling from posterior distributions.

[3] Technology culture loves to fetishize failure. Jeremy Epstein helped me realize that failure is only fun if it's the midpoint of a narrative that leads to a turn of events ending with triumphant success. This is complex. I believe in growth mindsets like Ray Dalio proposes in his Principles: there is real, transformative power in shifting how our minds interpret the discomfort that accompanies learning or stretching oneself to do something not yet mastered. I jump with joy at the opportunity to transform the paralyzing energy of anxiety into the empowering energy of growth, and believe it's critical that more women adopt this mindset so they don't hold themselves back from positions they don't believe they are qualified for. Also, it makes total sense that we learn much, much more from failures than we do from successes, in science, where it's important to falsify, as in any endeavor where we have motivation to change something and grow. I guess what's important here is that we don't reduce our empathy for the very real pain of being in the midst of failure, of feeling like one doesn't have what others have, of being outside the comfort of the bell curve, of the time it takes to outgrow the inheritance and pressure from the last generation and the celebrations of success. Worth exploring.

[4] One is from Tim Urban, as in this Google Talk about superintelligence. I really, really like Urban's blog. His recent post about choosing a career is remarkably good and his TED Talk on procrastination is one of my favorite things on the internet. But his big picture narrative about AI irks me.

[5] Milton actually wrote a book about logic and was even a logic tutor. It’s at once incredibly boring and incredibly interesting stuff.

The featured image is the 1808 Butts Set version of William Blake's "Satan Watching the Endearments of Adam and Eve." Blake illustrated many of Milton's works and illustrated Paradise Lost three times, commissioned by three different patrons. The color scheme is slightly different between the Thomas, Butts, and Linnell illustration sets. I prefer the Butts. I love this image. In it, I see Adam relegated to a supporting actor, a prop like a lamp sitting stage left to illuminate the real action between Satan and Eve. I feel empathy for Satan, want to ease his loneliness and forgive him for his unbridled ambition, as he hurls himself tragically into the figure of the serpent to seduce Eve. I identify with Eve, identify with her desire for more, see through her eyes as they look beyond the immediacy of the sexual act and search for transcendence, the temptation that ultimately leads to her fall. The pain we all go through as we wise into acceptance, and learn how to love.

Blake's image reminds me of this masterful kissing scene in Antonioni's L'Avventura (1960). The scene focuses on Monica Vitti, rendering Gabriele Ferzetti an instrument for her pleasure and her interior movement between resistance and indulgence. Antonioni takes the ossified paradigm of the male gaze and pushes it, exposing how culture can suffocate instinct as we observe Vitti abandon herself momentarily to her desire.

Hearing Aids (Or, Metaphors are Personal)

Thursday morning, I gave the opening keynote at an event about the future of commerce at the Rotman School of Management in Toronto. I shared four insights:

  • The AI instinct is to view a reasoning problem as a data problem
    • Marketing hype leads many to imagine that artificial intelligence (AI) works like human brain intelligence. Words like “cognitive” lead us to assume that computers think like we think. In fact, succeeding with supervised learning, as I explain in this article and this previous post, involves a shift in perspective to reframe a reasoning task as a data collection task.
  • Advances in deep learning are enabling radical new recommender systems
    • My former colleague Hilary Mason always cited recommender systems as a classic example of a misunderstood capability. Data scientists often consider recommenders to be a solved problem, given the widespread use of collaborative filtering, where systems infer person B's interests based on similarity with person A's interests. This approach, however, is often limited by the "cold start" problem: you need person A and person B to do stuff before you can infer how they are similar. Deep learning is enabling us to shift from comparing past transactional history (structured data) to comparing affinities between people and products (person A loves leopard prints, like this ridiculous Kimpton-style robe!). This doesn't erase the cold start problem wholesale, but it opens a wide range of possibilities because taste is so hard to quantify and describe: it's much easier to point to something you like than to articulate why you like it. (See the sketch just after this list.)
  • AI capabilities are often features, not whole products
  • AI will dampen the moral benefits of commerce if we are not careful
    • Adam Smith is largely remembered for his theories on the value of the division of labor and the invisible hand that guides capitalistic markets. But he also wrote a wonderful treatise on moral sentiments where he argued that commerce is a boon to civilization because it forces us to interact with strangers; when we interact with strangers, we can't have temper tantrums like we do at home with our loved ones; and this gives us practice in regulating our emotions, which is a necessary condition of rational discourse and the compromise at the heart of teamwork and democracy. As with many of the other narcissistic inclinations of our age, the logical extreme of personalization and eCommerce is a world where we no longer need to interact with strangers, no longer need to practice the art of tempered self-interest to negotiate a bargain. Being elegantly bored at a dinner party can be a salutary boon to happiness. David Hume knew this, and died happy; Jean-Jacques Rousseau did not, and died miserable.
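To make the affinity idea above a bit more concrete, here is a minimal sketch in Python of matching a person to products by comparing embedding vectors, the kind of representation a deep network might produce. Everything in it is invented for illustration: the four-dimensional "taste space," the vectors, and the product names are hypothetical, not drawn from any real recommender.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Affinity between two embeddings: closer to 1.0 means a closer taste direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional "taste space": [leopard print, minimalism, plush, neon].
# In a real system these vectors would come out of a trained network, not be hand-set.
person_a = np.array([0.9, 0.1, 0.7, 0.2])   # loves leopard prints and plush robes
products = {
    "leopard-print robe": np.array([0.95, 0.05, 0.8, 0.1]),
    "minimalist lamp":    np.array([0.05, 0.9, 0.1, 0.2]),
    "neon windbreaker":   np.array([0.3, 0.1, 0.2, 0.9]),
}

# Rank products by affinity to the person: no shared purchase history is required,
# which is why embedding comparisons soften (though don't erase) the cold-start problem.
ranked = sorted(products.items(),
                key=lambda kv: cosine_similarity(person_a, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine_similarity(person_a, vec):.2f}")
```

The point of the geometry is simply that a person and a product never seen together in any transaction log can still sit close to one another in taste space, which is the shift from comparing histories to comparing affinities.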
This post on Robo Bill Cunningham does a good job explaining how image recognition capabilities are opening new roads in commerce and fashion.

An elderly couple approached me after the talk. I felt a curious sense of comfort and familiarity. When I give talks, I scan the audience for signs of comprehension and approval, my attention gravitating towards eyes that emit kindness and engagement. On Thursday, one of those loci of approval was an elderly gentleman seated in the center about ten rows deep. He and his Russian companion had to have been in their late seventies or early eighties. I did not fear their questions. I embraced them with the openness that only exists when there is no expectation of judgment.

She got right to the point, her accent lilting and Slavic. "I am old," she said, "but I would like to understand this technology. What recommendations would you give to elderly people like myself, who grew up in a different age with different tools and different mores (she looked beautifully put together in her tweed suit), to learn about this new world?"

I told her I didn’t have a good answer. The irony is that, by asking about something I don’t normally think about, she utterly stumped me. But it didn’t hurt to admit my ignorance and need to reflect. By contrast, I’m often able to conjure some plausible response to those whose opinion I worry about most, who elicit my insecurities because my sense of self is wrapped up in their approval. The left-field questions are ultimately much more interesting.

The first thing that comes to mind if we think about how AI might impact the elderly is how new voice recognition capabilities are lowering the barrier to entry to engage with complex systems. Gerontechnology is a thing, and there are many examples of businesses working to build robots to keep the elderly company or administer remote care. My grandmother, never an early adopter, loves talking to Amazon Alexa.

But the elegant Russian woman was not interested in how the technology could help her; she wanted to understand how it works. Democratizing knowledge is harder than democratizing utility, but ultimately much more meaningful and impactful (as a U Chicago alum, I endorse a lifelong life of the mind).

Then something remarkable happened. Her gentleman friend interceded with an anecdote.

"This," he started, referring to the hearing aid he'd removed from his ear, "is an example of artificial intelligence. You can hear from my accent that I hail from the other side of the Atlantic (his accent was upper-class British; he'd studied at Harvard). Last year, we took a trip back with the family and stayed in a quintessential British town with quintessential British pubs. I was elated by the prospect of returning to the locals of my youth, of unearthing the myriad memories lodged within childhood smells and sounds and tastes. But my first visit to a pub was intolerable! My hearing aid had become thoroughly Canadian, adapted to the acoustics of airy buildings where sound is free to move amidst tall ceilings. British pubs are confined and small! They trap the noise, and it completely bombarded my hearing aid. But after a few days, it adjusted, as these devices are wont to do these days. And this adaptation, you see, shows how devices can be intelligent."

Of course! A hearing aid is a wonderful example of an adaptive piece of technology, of something whose functionality changes automatically with context. His anecdote brilliantly showed how technologies are always more than the functionalities they provide, are rather opportunities to expose culture and anthropology: Toronto’s adolescence as a city indexed by its architecture, in contrast to the wizened wood of an old-world pub; the frustrating compromises of age and fragility, the nostalgic ideal clipped by the time the device required to recalibrate; the incredible detail of the personal as a theatrical device to illustrate the universal.

What’s more, the history of hearing aids does a nice job illustrating the more general history of technology in this our digital age.

Partial deafness is not a modern phenomenon. As with everything, the tools to overcome it have changed shape over time.

This 1967 British Pathé primer on the history of hearing aids is a total trip, featuring radical facial hair and accompanying elevator music. They pay special attention to using the environment to camouflage cumbersome hearing aid machinery.

One thing that stands out when you go down the rabbit hole of hearing aid history is the importance of design. Indeed, historical hearing aids are analogue, not digital. People used to use naturally occurring objects, like shells or horns, to make ear trumpets like the one pictured in the featured image above. Some, including 18th-century portrait painter Joshua Reynolds, did not mind exposing their physical limitations publicly. Reynolds was renowned for carrying an ear trumpet and even represented his partial deafness in self-portraits painted later in life.

Reynolds’ self-portrait as deaf (1775)

Others preferred to deflect attention from their disabilities, camouflaging their tools in the environment or even transforming them into signals of power. At the height of the Napoleonic Age, King John VI of Portugal commissioned an acoustic throne with two open lion mouths at the end of the arms. These lion mouths became his makeshift ears, design transforming weakness into a token of strength; visitors were required to kneel before the chair and speak directly into the animal heads.

King John VI’s acoustic throne, its lion head ears requiring submission

The advent of the telephone changed hearing aid technology significantly. Since the early 20th century, hearing aids have gone from electronic to transistor-based to digital. Following the exponential dynamics of Moore's Law, their size has shrunk drastically: contemporary tyrants need not camouflage their weakness behind visual symbols of power. Only recently have they been able to dynamically adapt to their surroundings, as in the anecdote told by the British gentleman at my talk. Time will tell how they evolve in the near future. Awesome machine listening research in labs like those run by Juan Pablo Bello at NYU may unlock new capabilities where aids can register urban mood, communicating the semantics of their surroundings as opposed to merely modulating acoustics. Making sense of sound requires slightly different machine learning techniques than making sense of images, as Bello explores in this recent paper. In 50 years' time, modern digital hearing aids may seem as eccentric as a throne with lion-mouth ears.

The world abounds in strangeness. The saddest state of affairs is one of utter familiarity, is one where the world we knew yesterday remains the world we will know tomorrow. Is the trap of the filter bubble, the closing of the mind, the resilient force of inertia and sameness. I would have never included a hearing aid in my toolbox of metaphors to help others gain an intuition of how AI works or will be impactful. For I have never lived in the world the exact same way the British gentleman has lived in the world. Let us drink from the cup of the experiences we ourselves never have. Let us embrace the questions from left field. Let each week, let each day, open our perspectives one sliver larger than the day before. Let us keep alive the temperance of commerce and the sacred conditions of curiosity.


The featured image is of Madame de Meuron, a 20th-century Swiss aristocrat and eccentric. Meuron is like the fusion of Jean des Esseintes (the protagonist of Huysmans's paradigmatic decadent novel, À Rebours, the poisonous book featured in Oscar Wilde's The Picture of Dorian Gray) and Gertrude Stein or Peggy Guggenheim. She gives life to characters in Thomas Mann novels. She is a modern-day Quijote, her mores and habits out of sync with the tailwinds of modernity. Eccentricity, perhaps, the symptom of history. She viewed her deafness as an asset, not a liability, for she could control the input from her surroundings: "So ghör i nume was i wott!" - "So I only hear what I want to hear!"

Whales, Fish, and Paradigm Shifts

I never really liked the 17th-century English philosopher Thomas Hobbes, but, as with Descartes, found myself continuously drawn to his work. The structure of Leviathan, the seminal founding work of the social contract theory tradition (where we willingly abdicate our natural rights in exchange for security and protection from an empowered government, so we can devote our energy to meaningful activities like work rather than constantly fear that our neighbors will steal our property in a savage war of all against all)*, is so 17th-century rationalist and, in turn, so strange to our contemporary sensibilities. Imagine beginning a critique of the Trump administration by defining the axioms of human experience (sensory experience, imagination, memory, emotions) and imagining a fictional, pre-social state of affairs where everyone fights with one another, and then showing not only that a sovereign monarchy is a good form of government, but also that it must exist out of deductive logical necessity, and!, that it is formed by a mystical, again fictional, moment where we come together and willingly agree it's rational and in our best interests to hand over some of our rights, in a contract signed by all for all, that is then sublimated into a representative we call government! I found the form of this argument so strange and compelling that I taught a course tracing the history of this fictional "state of nature" in literature, philosophy, and film at Stanford.

Long preamble. The punch line is, because Hobbes haunted my thoughts whether I liked it or not, I was intrigued when I saw a poster advertising Trying Leviathan back in 2008. Given the title, I falsely assumed the book was about the contentious reception of Hobbesian thought. In fact, Trying Leviathan is D. Graham Burnett's intellectual history of Maurice v. Judd, an 1818 trial where James Maurice, a fish oil inspector who collected taxes for the state of New York, sought a penalty against Samuel Judd, who had purchased three barrels of whale oil without inspection. Judd pleaded that the barrels contained whale oil, not fish oil, and so were not subject to the fish oil legislation. As with any great case**, the pivotal issue in Maurice v. Judd was much more profound than the matter that brought it to court: at stake was whether a whale is a fish, turning a quibble over tax law into an epic fight pitting new science against sedimented religious belief.

Indeed, in Trying Leviathan Burnett shows how, in 1818, four different witnesses with four very different backgrounds and sets of experiences answered what one would think would be a simple, factual question in four very different ways. The types of knowledge they espoused were structured differently and founded on different principles:

  • The Religious Syllogism: The Bible says that birds are in heaven, animals are on land, and fish are in the sea. The Bible says no wrong. We can easily observe that whales live in the sea. Therefore, a whale is a fish.
  • The Linnaean Taxonomy: Organisms can be classified into different types and subtypes given a set of features or characteristics that may or may not be visible to the naked eye. Unlike fish, whales cannot breathe underwater because they have lungs, not gills. That's why they come to the ocean surface and spout majestic sea geysers. We may not be able to observe the insides of whales directly, but we can use technology to help us do so.
    • Fine print: Linnaean taxonomy was a slippery slope to Darwinism, which throws meaning and God to the curb of history (see Nietzsche)
  • The Whaler’s Know-How: As tested by iterations and experience, I’ve learned that to kill a whale, I place my harpoon in a different part of the whale’s body than where I place my hook when I kill a fish. I can’t tell you why this is so, but I can certainly tell you that this is so, the proof being my successful bounty. This know-how has been passed down from whalers I apprenticed with.
  • The Inspector's Orders: To protect the public from contaminated oil, the New York State Legislature had enacted legislation requiring that all fish oil sold in New York be gauged, inspected and branded. Oil inspectors were to impose a penalty on those who failed to comply. Better to err on the side of caution and count a whale as a fish than not obey the law.

From our 2017 vantage point, it's easy to accept and appreciate the way the Linnaean taxonomist presented categories to triage species in the world. 200 years is a long time in the evolution of an idea: unlike genes, culture and knowledge can literally change from one generation to the next through deliberate choices in education. So we have to do some work to imagine how strange and unfamiliar this would have seemed to most people at the time, to appreciate how the Bible's simple logic made more sense. Samuel Mitchill, who testified for Judd and represented the Linnaean strand of thought, likely faced the same set of social forces as Clarence Darrow in the Scopes Trial or Hillary Clinton in last year's election. American mistrust of intellectuals runs deep.

But there's a contemporary parallel that can help us relive and revive the emotional urgency of Maurice v. Judd: the rise of artificial intelligence (A.I.). The type of knowledge A.I. algorithms provide is different from the type of knowledge acquired by professionals whose activity they might replace. And society's excited, confused, and fearful reaction to these new technologies is surfacing a similar set of epistemological collisions to those at play back in 1818.

Consider, for example, how Siddhartha Mukherjee describes using deep learning algorithms to analyze medical images in a recent New Yorker article, A.I. versus M.D. Early in the article, Mukherjee distinguishes contemporary deep learning approaches to computer vision from earlier expert systems based on Boolean logic and rules:

“Imagine an old-fashioned program to identify a dog. A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat . . . and so forth, ad infinitum.”

With deep learning, we don't list the features we want our algorithm to look for to identify a dog as a dog or a cat as a cat or a malignant tumor as a malignant tumor. We don't need to be able to articulate the essence of dog or the essence of cat. Instead, we feed as many examples of previously labeled data as we can into the algorithm and leave it to its own devices, as it tunes the weights linking together pockets of computing across a network, playing Marco Polo until it gets the right answer, so it can then make educated guesses on new data it hasn't yet seen before. The general public understanding that A.I. can just go off and discern patterns in data, bootstrapping its way to superintelligence, is incorrect. Supervised learning algorithms take precipitates of human judgments and mimic them in the form of linear algebra and statistics. The intelligence behind the classifications or predictions, however, lies within a set of non-linear functions that defy any attempt at reduction to the linear, simple building blocks of analytical intelligence. And that, for many people, is a frightening proposition.
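As a concrete illustration of that loop, here is a minimal sketch with a single logistic unit standing in for a deep network and synthetic points standing in for labeled images; the data, learning rate, and iteration count are all invented for the example, not taken from Mukherjee's article or any real system. The only "intelligence" is a pair of weights nudged, pass after pass, toward reproducing the human-supplied labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Precipitates of human judgment: feature vectors paired with labels someone assigned.
# Here the labels come from a made-up rule standing in for "dog vs. cat".
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(2)   # the weights the algorithm will tune
b = 0.0
lr = 0.1          # learning rate, chosen arbitrarily for the toy example

for _ in range(500):
    # Forward pass: a guess for each example, squashed into a probability.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Backward pass: nudge the weights toward the human-supplied labels.
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Educated guesses on data the model has never seen before.
X_new = rng.normal(size=(5, 2))
print(np.round(1.0 / (1.0 + np.exp(-(X_new @ w + b))), 2))
```

Swap the two-dimensional points for pixel arrays and the single unit for many stacked layers of such units, and you have, in caricature, the supervised pipeline described above.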

But it need not be. In the four knowledge categories sampled from Trying Leviathan above, computer vision using deep learning is like a fusion between the Linnaean Taxonomy and the Whaler's Know-How. These algorithms excel at classification tasks, dividing the world up into parts. And they do it without our cleanly being able to articulate why: they do it by distilling, in computation, the lessons of apprenticeship, where the teacher is a set of labeled training data that tunes the worldview of the algorithm. As Mukherjee points out in his article, classification systems do a good job saying that something is the case, but do a horrible job saying why.*** For society to get comfortable with these new technologies, we should first help everyone understand what kinds of truths they are able (and not able) to tell. How they make sense of the world will be different from the tools we've used to make sense of the world in the past. But that's not a bad thing, and it shouldn't limit adoption. We'll need to shift our standards for evaluating them, or else we'll end up in the age-old fight pitting the old against the new.

 

*Hobbes was a cynical, miserable man whose life was shaped by constant bloodshed and war. He's said to have been born prematurely on April 5, 1588, at a moment when the Spanish Armada was bearing down on England. He later reported that "my mother gave birth to twins: myself and fear." Hobbes was also a third-rate mathematician whose insistence that he be able to mentally picture objects of inquiry stunted his ability to contribute to the more abstract and formal developments of the day, like the calculus developed simultaneously by Newton and Leibniz (to keep themselves entertained, as founding a new mathematical discipline wasn't stimulating enough, they communicated the fundamental theorem of calculus to one another in Latin anagrams!).

**Zubulake v. UBS Warburg, the grandmother case setting standards for evidence in the age of electronic information, started off as a sexual harassment lawsuit. Lola v. Skadden started as an employment law case focused on overtime compensation rights, but will likely shape future adoption of artificial intelligence in law firms, as it claims that document review is not the practice of law because this is the type of activity a computer could do.

***There's research on using algorithms to answer questions about causation, but many perception-based tools simply excel at correlating stuff to proxies and labels for stuff.

 

 

Revisiting Descartes

René Descartes is the whipping boy of Western philosophy. The arch dualist. The brain in a vat. The physicist whose theory of planetary motion, where a celestial vortex pushed the planets around, was destroyed by Newton's theory of gravity (action at a distance was very hard for Newton's contemporaries, including Leibniz, to fathom). The closet Copernican who masked his heliocentric views behind a curtain of fiction, too cowardly to risk being burned at the stake like Giordano Bruno. The solipsist who canonized the act of philosophy as an act only fit for a Western White Privileged Male safely seated in the comfort of his own home, ravaging and pillaging the material world with his Rational Gaze, seeding the future of colonialism and climate change.

I don’t particularly like Descartes, and yet I found myself ineluctably drawn to him in graduate school (damn rationalist proclivities!). When applying, I pitched a dissertation exploring the unintuitive connection between 17th-century rationalism (Descartes, Spinoza, and Leibniz) and late 19th-century symbolism (Mallarmé, Valéry, and Rimbaud). My quest was inspired by a few sentences in Mallarmé’s Notes on Language:

Toute méthode est une fiction, et bonne pour la démonstration. Le language lui est apparu l’instrument de la fiction: il suivra la méthode du Langage. (la déterminer) Le language se réfléchissant. […] Nous n’avons pas compris Descartes, l’étranger s’est emparé de lui: mais il a suscité les mathématiciens français.

[All method is fiction, and good for demonstration. Language came into being as the instrument of fiction: it will follow the method of Language. (determine this method) Language reflecting on itself. […] We haven’t understood Descartes, foreigners have seized him: but he catalyzed the French mathematicians.]

Floating on the metaphysical high that ensues from reading Derrida and Deleuze, I spent a few years racking my brain to argue that Descartes’ famous dictum, I think, therefore I am, was a supreme act of fiction. Language denoting nothing. Words untethered from reference to stuff in and of the world. Language asserting itself as a thing on par with teacups, cesspools, and carrots. God not as Father but as Word. As pure logical starting point. The axiom at the center of any system. Causa sui (the cause of itself). Hello World! as the basis of any future philosophy, where truth is fiction and fiction is truth. It was a battle, a crusade to say something radically important. I always felt I was 97% there, but that it was Zeno impossible to cross that final 3%.

That quest caused a lot of pain, suffering, and anxiety. Metaphysics is the pits.

And then I noticed something. Well, a few things.

First, Descartes' Geometry, which he published as an appendix to his Discourse on Method, used the pronoun I as frequently as, if not more frequently than, the articles the and a/an. I found that strange for a work of mathematics. Sure, lyric poetry, biography, and novels use I all the time, but math? Math was supposed to be the stuff of objective truths. We're all supposed to come to the same conclusions about the properties of triangles, right? Why would Descartes present his subjective opinions about triangles?

Second, while history views the key discovery in the Geometry to be the creation of the Cartesian plane, where Descartes fused formal algebra with planar geometry to change the course of mathematics (as with all discoveries, he wasn't the only one thinking this way; he had a lifelong feud with Pierre de Fermat, whose mathematical style he rebuffed as unrefined, the stuff of a bumpkin Gascon), what Descartes himself claims to be most proud of in the work is his discovery of the lost art of analysis. Analysis, here, is a method for solving math and geometry problems where you start by assuming the existence of an object you'd like to construct, e.g., a triangle with certain properties, and work backwards through a set of necessary, logical relationships until something more grounded and real comes into being. The flip side of this process is called synthesis, the more common presentation of mathematical arguments inherited from Euclid, which starts with axioms and postulates, and moves forward through logical arguments to prove something. What excited Descartes was that he thought synthesis was fine for presenting rigorous conclusions once they'd been found, but was useless as a creative tool to make new discoveries and learn new mathematical truths. By recovering the lost method of analysis, which shows up throughout history in Aristotle's Nicomachean Ethics (when deliberating, we consider first what end we want to achieve, and reason backward to the means we might implement to bring about this end), Edgar Allan Poe's Philosophy of Composition (when writing poetry, commence with the consideration of an effect, and find such combinations of event, or tone, as shall best aid in the construction of the effect), and even Elon Musk's recursive product strategy (work back from an end goal, five, 10, or 50 years ahead, until you can hit inflection points that propel your company and its customers to the next stage, while ushering both toward the end goal), Descartes thought he was presenting a method for creativity and new discoveries in mathematics.

Third, while history records (and perverts) the central dictum of Cartesian philosophy as I think, therefore I am, which appeared in the 1637 Discourse on Method, Descartes later replaced this with I am, I exist in his 1641 Meditations on First Philosophy. What?!? What happened to the res cogitans, the thinking thing defined by its free will, in contrast to the res extensa of the material world determined by the laws of mechanics? And what happened to the therefore, the indelible connection between thinking and being that inspired so much time and energy in Western philosophy, be it in the radical idealism of Berkeley or even the life-is-but-a-simulation narratives of the Matrix and, more recently, Nick Bostrom and Elon Musk? (He keeps coming up. There must be some secret connection between hyper-masculine contemporary futurists and 17th-century rationalism? Or maybe we're really living in the Neobaroque, a postmodern Calderonian stalemate of life is a dream? Would be a welcome escape from our current regression into myopic nationalism…) As it happens, the Finnish philosopher Jaakko Hintikka (and Columbia historian of science Matthew Jones after him) had already argued back in 1962 that the logic of the Cogito was performative, not inferential. Hintikka thinks what Descartes is saying is that it's impossible for us to say "I do not exist" because there has to be something there uttering "I do not exist." It's a performative contradiction. As such, we can use the Cogito as a piece of unshakeable truth to ground our system. No matter how hard we try, we can't get rid of ourselves.

Here’s the punchline: like Mallarmé said, we haven’t understood Descartes.

I think there's a possibility to rewrite the history of philosophy (this sounds bombastic) by showing how repetition, mindfulness, and habit played a central role in Descartes' epistemology. In my dissertation, I trace Descartes' affiliation to the Jesuit tradition of Spiritual Exercises, which Ignatius of Loyola created to help practitioners mentally and imaginatively relive Christ's experiences. I show how the I of the Geometry is used to encourage the reader to do the problems together with Descartes, a rhetorical move to encourage learning by doing, a guidebook or script to perform and learn the method of analysis. I mention how he thought all philosophers should learn how to sew, viewing crochet as excellent training for method and repetition. I show how the I am, I exist serves as a meditative mantra the reader can return to again and again, practicing it and repeating it until she has a "clear and distinct" intuition for an act of thought with a single logical step (as opposed to a series of deductions from postulates). This ties back to analysis using the logic of fake it 'til you make it. The meditator starts with a cloudy, noisy mind, a mind that easily slips back to the mental cacophony of yore; but she wills herself to focus on that one clear idea, the central fulcrum of I am, I exist, to train an epistemology based on clear and distinct ideas. Habit, here, isn't the same thing as the logical relationship between two legs of a triangle, but the overall conceptual gesture is similar.

Descartes sought to drain the intellectual swamp (cringe) inherited from the medieval tradition. Doing so required the mindfulness and attention we see today in meditation practices, disciplining the mind to return back to the emptiness of breath when it inevitably wanders to the messy habits we acquire in our day-to-day life. Descartes' Meditations were just that: meditations, practice, actions we could return to daily to cultivate habits of mind that could permit a new kind of philosophy. His method was an act of freedom, empowering us to define and agree upon the subset of experiences abstract enough for us to share and communicate to one another without confusion. Unfortunately, this subset is very tight and constrained, and misses out on much of what is beautiful in life.

I wrote this post to share ideas hidden away in my dissertation, the work of a few years in some graduate student's life that now lies silent and dormant in the annals of academic history. While I question the value literature has to foster empathy in my post about the utility of the humanities in the 21st century, I firmly believe that studying primary sources can train us to be empathetic and openminded, train us to rid ourselves of preconceptions and prejudice so we can see something we'd miss if we blindly followed the authority of inherited tradition. George Smith, one of my favorite professors at Stanford (a Newton expert visiting from Tufts), once helped me understand that secondary sources can only scratch the surface of what may exist in primary sources because authors are constrained by the logic of their argument, presenting at most five percent of what they've read and know. We make choices when we write, and can never include everything. Asking What did Descartes think he was thinking? rather than What does my professor think Descartes was thinking? or Was Descartes right or wrong? invites us to reconstruct a past world, to empathize deeply with a style of thought radically different from how we live and think today. As I've argued before, these skills make us good businesspeople, and better citizens.

The image is from the cover page of an 1886 edition of the Géométrie, which Guillaume Troianowski once thoughtfully gave me as a gift. 

Artifice as Realism

Canonized by Charles Darwin's 1859 On the Origin of Species, natural history and its new subfield, evolutionary biology, were all the rage in the mid- and late-19th century. It was a type of science whose evidence lay in painstaking observation. Indeed, the methods of 19th-century natural science were inspired by the work Carl Linnaeus, the father of modern taxonomy, had done a century prior. We can thank Linnaeus for the funny Latin names of trees and plants we see alongside more common English terms at botanical gardens (e.g., Spanish oak as quercus falcata). Linnaeus collected, observed, and classified animals, plants, and minerals, changing the way we observe like as like and unlike as unlike (we may even stretch and call him the father of machine learning, given that many popular algorithms, like deep neural nets or support vector machines, basically just classify things). One of my favorite episodes in the history of Linnaean thought gradually seeping its way into collective consciousness is recounted in D.G. Burnett's Trying Leviathan, which narrates the intellectual history of Maurice v. Judd, an 1818 trial "that pitted the new sciences of taxonomy against the then-popular (and biblically sanctioned) view that the whale was a fish." The tiniest bacteria, like the silent, steady redwood trees, are so damn complex that we have no choice but to contort their features through abstractions, linking them, like as like, to other previously encountered phenomena to make sense of and navigate our world.

Taxonomy took shape as an academic discipline at Harvard under the stewardship of Louis Agassiz (a supporting actor shaping thinkers like William James in Louis Menand's The Metaphysical Club). All sorts of sub-disciplines arose, including evolutionary biology (eventually leading to eugenics and contemporary genetics) and botany.

It's with botany that things get interesting. The beauty of flowers, as classical haikus and sentimental Hallmark cards show, is fragile, transitory, vibrant in death. Flowers' color, texture, turgidness, name your feature, change fast, while they are planted and heliotroping themselves towards light and life, and after they are plucked and, petal by petal, peter their way into desiccation and death. Flowers are therefore too transitory to lend themselves to the patient gaze of a taxonomist. This inspired George Lincoln Goodale, the founder of Harvard's Botanical Museum, to commission two German glassblowers to make "847 life-size models representing 780 species and varieties of plants in 164 families as well as over 3,000 models of enlarged parts" to aid the study of botany (see here). The fragility of flowers made it such that artificial representations that could freeze features in time could reveal stronger truths (recognize this is loaded…) about the features of a species than the real-life alternatives. Toppling the Platonic hierarchy, artifice was more real than reality.

I love this. And artifice as a condition for realism is not unique to 19th-century botany, as I’ll explore in the following three examples. Please add more!


Scientific Experiments by Doppler & Mendel

I'm reading Siddhartha Mukherjee's The Gene: An Intimate History in preparation for a talk about genetic manipulation he's giving at Pioneerworks Thursday evening. He's a good writer: the prose is elegant, sown with literary references and personal autobiography whose candor elicits empathy. 93 pages into the 495-page book, I've most appreciated the more philosophical and nuanced insights he weaves into his history. The most interesting of these insights is about artifice and realism.

The early chapters of The Gene scan the history of genetics from Pythagoras (semen courses through a man's body and collects mystical vapors from each individual part to transmit self-information to a womb during intercourse) through Galton (we can deliberately pair elite with elite, and selectively sterilize the deformed, ugly, and sickly, to favor good genes, culminating in the atrocities of eugenics and still lingering in thinkers like Nick Bostrom). Gregor Johann Mendel is the hero and fulcrum around which all other characters (Darwin included) make cameo appearances. Mendel is also the hero of high school biology textbooks. He conducted a series of experiments with pea plants in the 1850s-1860s that demonstrated how heredity works. When a male mates with a female, the traits of their offspring aren't a hybrid mix between the parents, but express one of two binary traits: offspring from a tall dad and a short mom are either tall or short, not medium height; the children of a tall son and a tall mother (grandma's grandchildren) can end up short if the recessive gene inherited from grandma takes charge in the subsequent generation. (What the textbooks omit, and Mukherjee explains, is that Mendel's work was overlooked for 40 years! A few scientists around 1900 unknowingly replicated his conclusions, only to be crestfallen when they learned their insights were not original.)

Mukherjee cites Christian Doppler (of the eponymous Doppler effect) as one of Mendel's key inspirations. Mendel was a monk isolated in Brno, a small city in what is now the Czech Republic. He eventually made his way to Vienna to study physics under Doppler. Mukherjee describes the impact Doppler had on Mendel as follows:

“Sound and light, Doppler argued, behaved according to universal and natural laws-even if these were deeply counterintuitive to ordinary viewers or listeners. Indeed, if you looked carefully, all the chaotic and complex phenomena of the world were the result of highly organized natural laws. Occasionally, our intuitions and perceptions might allow us to grasp these natural laws. But more commonly, a profoundly artificial experiment…might be necessary to demonstrate these laws.”

A few chapters later, Mukherjee states that Mendel's decision to create a "profoundly artificial experiment," selectively creating hybrid pea plants out of purebred strains carrying simple traits, was crucial to reveal his insights about heredity. There's a lot packed into this.

Excerpt from Mendel’s manuscript about experiments with plant hybridization

First, there's a pretty profound argument about epistemology and relativism. This is like and unlike the Copernican revolution. Our ordinary viewpoints, based on our day-to-day experiences in the world, could certainly lead to the conclusion, held for thousands of years, that the Sun revolves around the Earth. Viewed from our subjective perspective, it just makes more sense. But if we use our imagination to transport ourselves up to a view from the moon (as Kepler did in his Somnium, a radically weird work of 17th-century science fiction), or somewhere else in space, we'd observe our earth moving around the sun. What's most interesting is how, via cultural transmission and education, this formerly difficult and trying act of the imagination has settled into collective, conscious habit. Today, we have to do more intellectual and imaginative work to imagine the Sun revolving around the Earth, even though the heliocentric viewpoint runs counter to our grounded subjectivity. Narcissism may be more contingent and correctable than digital culture makes it seem.

Next, there’s a pretty profound argument about what kinds of truths scientific experiments tell. Mukherjee aligns Mendelian artifice with mechanistic philosophy, where the task of experimentation is to reveal the universal laws behind natural phenomena. These laws, in this framework, are observable, just not through the standard perceptual habits we use in the rest of our lives. There are many corollary philosophical questions about the potential and probability of false induction (Hume!) and the very strange way we go about justifying a connection between an observed particular and a general law or rule. It does at least feel safe to say that artifice plays a role in contorting and refracting what we see so that we can see something radically new. Art, per Viktor Shklovsky (among others), often does the same.

Italian Neorealist Cinema

I have a hell of a time remembering the details of narrative plots, but certain abstract arguments stick with me year after year, often dormant in the caverns of my memory, then awakened by some Proustian association. One of these arguments comes from André Bazin’s “Cinematic Realism and the Italian School of the Liberation.”

Bazin was writing about the many “neorealist” films directors like Luchino Visconti, Roberto Rossellini, and Vittorio De Sica made in the 1940s and 50s. The war had just ended, Mussolini’s government had fallen, Cinecittà (the Hollywood of Italy) had been damaged, and filmmakers had small production budgets. The intellectual climate, like the one that provided the foundation for Zola in the late 19th century, invited filmmakers to throw the subjects traditionally deemed fit for art to the wayside and focus on the real-world suffering of real-world everyday people. These films are characterized by their use of nonprofessional actors, their depictions of poverty and basic suffering, and their refusal of happy endings. They patiently chronicle slow, quotidian life.

Iconic image from Vittorio de Sica’s The Bicycle Thief, a classic Italian neorealist film

Except that they don’t. Bazin’s core insight was that neorealism was no less artificial, or artful, than the sentimental and psychological dramas of popular Hollywood (and Cinecittà) films. Bazin’s essay effectively becomes a critical manifesto for the techniques directors like Rossellini employed to create an effect that the viewer would perceive as real. The insights are similar to those Thomas Mann offers in Death in Venice, where a hyper-orderly, rational German intellectual, Gustav von Aschenbach, succumbs to Dionysian and homoerotic impulses as he dies. Mann uses the story of Aschenbach as an ironic vehicle to comment on how artists fabricate emotional responses in readers, spectators, and other consumers of art. There is an unbridgeable gulf between what we have lived and experienced, and how we represent what we have lived and experienced in art to produce and replicate a similar emotional experience for the reader, spectator, or consumer. The reality we inhabit today is never really the reality we watch on screen, and yet the presentation of what seems real on screen can go on to reshape how we then experience what we deem reality. As with the Copernican turn, after watching De Sica’s Bicycle Thief, we may have to do more intellectual and imaginative work to see poverty as we saw it before our emotions were touched by the work of art. Artifice, then, is not only required to make a style that feels real; it can also crystallize as categories and prisms in our own minds that bend what we consider to be reality.

A slightly different cinematic example comes from the 2013 documentary Tim’s Vermeer, which documents technology entrepreneur Tim Jenison’s efforts to test his hypothesis about the painting techniques 17th-century Dutch master Johannes Vermeer used to create his particular realist style. Jenison was fascinated by the seemingly photographic quality of Vermeer’s paintings, which exhibit a clarity and realism far beyond that of his contemporaries. Content following form (or vice versa), Vermeer is also renowned for painting realistic, quotidian scenes, observing a young woman contemplating at a dining room table or practicing at the virginal. As optics was burgeoning in the 17th century (see work by Newton or his Dutch contemporary Christiaan Huygens), Jenison hypothesized that Vermeer achieved his eerie realism not through mystical, heroic, individual, subjective inspiration, but through rational, patient, objective technique. To test his hypothesis, Jenison set himself the task of recreating Vermeer’s Music Lesson using a dual-mirror technique that reflects the real-world scene onto the canvas and then enables the artist to do something like paint by number, matching each patch of color until he can no longer detect a difference from the reflected scene. What’s just awesome about this film is that Jenison’s technique for evaluating his hypothesis about Vermeer’s technique forces him to reverse engineer the original real-world scene that Vermeer would have painted. As such, he has to learn about 17th-century woodworking (to recreate the virginal), 17th-century glass staining (to recreate the stained-glass window), and 17th-century textiles (to recreate the tapestry that hangs over a table). This single Vermeer painting, catalyzed by Jenison’s dedication and obsession to test his hypothesis, becomes a door into an encyclopedic world! The documentary is nothing short of extraordinary, not least because it forces us to question the cultural barriers between art/inspiration and science/observation (and also because it includes some great scenes where the English artist David Hockney evaluates and accepts Jenison’s hypothesis). The two are porous, intertwined, ever interweaving to generate beauty, novelty, and realism.
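
The comparator logic at the heart of Jenison’s rig can be caricatured in a few lines of code. This is only a toy sketch, not his actual apparatus, and the color values and threshold below are invented: the point is simply that you keep adjusting the painted patch until the difference from the reflected scene drops below what the eye can detect.

# Toy sketch of the mirror-comparator idea: nudge the painted color toward
# the reflected color until the difference is no longer perceptible.
# All values here are invented for illustration.

reflected = (112, 98, 74)     # the scene as seen in the mirror
painted = [200, 200, 200]     # the painter's first guess on the canvas
THRESHOLD = 2                 # "I can no longer see a difference"

def max_difference(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

steps = 0
while max_difference(reflected, painted) > THRESHOLD:
    # move each channel one step toward the reflected color, the way a
    # painter would re-mix and re-touch a patch of canvas
    painted = [p + (1 if r > p else -1 if r < p else 0)
               for p, r in zip(painted, reflected)]
    steps += 1

print(steps, painted)         # converges once every channel is within the threshold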

Vermeer’s Music Lesson, which Tim Jenison sought to recreate

Designing for User Adoption

The final example comes from my experiences with software UI/UX design. My first job after graduate school was with Intapp, a Palo Alto-based private company that makes software for law firms. Making software for lawyers poses a particular set of challenges that, like Mendel’s pea plant experiments, reveal general laws about how people engage with and adopt technology. Indeed, lawyers are notoriously slow to adopt new tools. First, the economics of law firms, governed by profits per partner, encourage conservatism because all profits are allocated to partners on an annual basis. Partners literally have to part with a share of their own annual profits to invest in technology that may or may not drive the efficiencies they want to make more money in the future. Second, lawyers tend to self-identify as technophobes: many are proud of their liberal arts backgrounds, and prefer to protect the relative power they have as masters of words and pens against the different intellectual capital garnered by quantitative problem solvers and engineers. Third, lawyers tend to be risk averse, and changing your habits and adopting new tools can be very risky business.

Intapp has a few products in its portfolio. One of them helps lawyers keep track of the time they spend making phone calls, writing emails, doing research, or drafting briefs for their different clients, which in turn informs the invoices they send at the end of a billing period. Firms only get a solid return on investment from the product, Intapp Time (formerly Time Builder), if a certain percentage of lawyers opt to use it: you need enough users logging enough otherwise-missed hours, and recovering enough otherwise-missed revenue, to cover the cost of the software. As such, it was critical that Intapp make the right product design and marketing choices to ensure the tool was something lawyers actually wanted to use and adopt.
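
The return-on-investment argument is just arithmetic. Here is a back-of-the-envelope sketch in Python; every number is invented for illustration, and none come from Intapp.

# Hypothetical break-even calculation for a passive time-capture tool.
# All figures below are made up to show the shape of the argument.

lawyers_at_firm = 400
adoption_rate = 0.30                      # fraction of lawyers who actually use the tool
extra_hours_per_lawyer_per_month = 1.5    # otherwise-missed billable hours recovered
billing_rate = 350                        # dollars per hour
annual_software_cost = 250_000            # license plus rollout

recovered_revenue = (lawyers_at_firm * adoption_rate
                     * extra_hours_per_lawyer_per_month * 12 * billing_rate)

print(f"recovered revenue: ${recovered_revenue:,.0f}")
print(f"software cost:     ${annual_software_cost:,.0f}")
print("pays for itself" if recovered_revenue > annual_software_cost else "doesn't pay for itself")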

What was most interesting were the design choices required to make that adoption happen. Because lawyers tend to be conservative, they didn’t want an application that radically changed how they did work or billed time relative to the habits they’d built and inculcated in the past (in particular the older generation). So the best technical solution, or even the most theoretically efficient or creative way of logging work to bill time, may not be the best solution for users: it may push their imagination too far, or require too much change, to be useful. Based on insights from interviews with end users, the Intapp design team ended up creating a product that mimicked, at least on the front end, the very habits and practices it was built to replace. Such skeuomorphism tells us a lot about progress and technology. Further thoughts on the topic appear in an earlier post.


Others

I can think of many other examples where artifice is the key that unlocks a certain type of truth or generates a certain type of realism. Generative probabilistic models that use Bayesian inference to encode structural assumptions about the world can out-predict regression models that lean more directly on raw data, especially when data is scarce. Thought experiments like the Trolley Problem are shifting from devices for commenting on ethics into premeditated, encoded decisions that can impact reality. Behind all of this are insights about how our minds work to make sense (and nonsense) of the world.
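
As a toy illustration of that first claim (a coin-flip example with invented numbers, not any production model): after only three observed successes, a raw frequency says success is certain, while a Bayesian model whose prior encodes the artificial but sensible assumption that extreme probabilities are unlikely gives a more tempered prediction.

# Toy contrast between a purely data-driven estimate and a Bayesian one.
# Scenario and numbers are invented for illustration.

observations = [1, 1, 1]        # three successes, zero failures observed so far

# Data-only estimate: the raw frequency says success is certain
raw_estimate = sum(observations) / len(observations)          # 1.0

# Bayesian estimate with a Beta(2, 2) prior, a built-in assumption
# (a piece of artifice) that extreme probabilities are unlikely a priori
alpha_prior, beta_prior = 2, 2
alpha_post = alpha_prior + sum(observations)
beta_post = beta_prior + len(observations) - sum(observations)
posterior_mean = alpha_post / (alpha_post + beta_post)        # about 0.71

print(raw_estimate, round(posterior_mean, 2))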

The featured image is of the glass flowers that father-and-son glassblowers Leopold and Rudolf Blaschka made for Harvard’s natural history department between 1887 and 1936. Flowers are fragile: because conditions so easily lead to their decay and death, they change too quickly to permit the patient observation and study required by evolutionary biology. Artificial representations, therefore, allowed for more accurate scientific observations than real specimens.