Clinamen

The Sagrada Familia is a castle built by Australian termites.


The Sagrada Familia is not a castle built by Australian termites, and never will be. ’Tis utter blasphemy.


The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell, in near silence, near but for the methodical gnawing, not unlike that of a mouse nibbling rapaciously on parched pasta left uneaten all these years but preserved under the thick dust on the thin cardboard with the thin plastic window enabling her to view what remained after she’d cooked just one serving, with butter, for her son, there emerged and fell, with the sublime transience of Andy Goldsworthy, a neo-Gothic church of organic complexity on par with that imagined by Antoni Gaudí i Cornet, whose Sagrada Familia is scheduled for completion in 2026, a full century after the architect died in a tragic tram crash, distracted by the recent rapture of his prayer.


The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell a structure so eerily resemblant of the one Antoni Gaudí imagined before he died, neglected like a beggar in his shabby clothes, the doctors unaware they had the chance to save the mind that preempted the fluidity of contemporary parametric architectural design by some 80 odd years, a mind supple like that of Poincaré, singular yet part of a Zeitgeist bent on infusing time into space like sandalwood in oil, inseminating Euclid’s cold geometry with femininity and life, Einstein explaining why Mercury moves retrograde, Gaudí rendering the holy spirit palpable as movement in stone, fractals of repetition and difference giving life to inorganic matter, tension between time and space the nadir of spirituality, as Andrei Tarkovsky went on to explore in his films.

From Andrei Tarkovsky’s Mirror. As Tarkovsky wrote of his films in Sculpting in Time: “Just as a sculptor takes a lump of marble, and, inwardly conscious of the features of his finished piece, removes everything that is not a part of it — so the film-maker, from a ‘lump of time’ made up of an enormous, solid cluster of living facts, cuts off and discards whatever he does not need, leaving only what is to be an element of the finished film.”

The Sagrada Familia is not a castle built by Australian termites, and yet, Look! Notice, as Daniel Dennett bids, how in an untrodden field in Australia there emerged and fell a structure so eerily resemblant of the one Antoni Gaudí imagined before he died, with the (seemingly crucial) difference that the termites built their temple without blueprints or plan, gnawing away the silence as a collectivity of single stochastic acts which, taken together over time, result in a creation that appears, to our meaning-making minds, to have been created by an intelligent designer, this termite Sagrada Familia a marvelous instance of what Dennett calls Darwin’s strange inversion of reasoning, an inversion that admits to the possibility that absolute ignorance can serve as master artificer, that IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT*, that structures might emerge from the local activity of multiple parts, amino acids folding into proteins, bees flying into swarms, bumper-to-bumper traffic suddenly flowing freely, these complex release valves seeming like magic to the linear perspective of our linear minds.


The Sagrada Familia is not a castle built by Australian termites, and yet, the eerie resemblance between the termite and the tourist Sagrada Familias serves as a wonderful example to anchor a very important cultural question as we move into an age of post-intelligent design, where the technologies we create exhibit competence without comprehension, diagnosing lungs as cancerous or declaring that individuals merit a mortgage or recommending that a young woman would be a good fit for a role on a software engineering team or getting better and better at Go by playing millions of games against itself in a schizophrenic twist resemblant of the pristine pathos of Stefan Zweig, one’s own mind an asylum of exiled excellence during the travesty of the Second World War, why, we’ve come full circle and stand here at a crossroads, bidden by a force we ourselves created to accept the creative potential of Lucretius’ swerve, to kneel at the altar of randomness, to appreciate that computational power is not just about shuffling 1s and 0s with speed but shuffling them fast enough to enable a tiny swerve to result in wondrous capabilities, and to watch as, perhaps tragically, we apply a framework built for intelligent design onto a Darwinian architecture, clipping the wings of stochastic potential, working to wrangle our gnawing termites into a straitjacket of cause, while the systems beating Atari, by no act of strategic foresight but by the blunt speed of iteration, make a move so strange and so outside the realm of verisimilitude that, like Kasparov succumbing to Deep Blue, we misinterpret a bug for brilliance.


The Sagrada Familia is not a castle built by Australian termites, and yet, it seems plausible that Gaudí would have reveled in the eerie resemblance between a castle built by so many gnawing termites and the temple Josep Maria Bocabella i Verdaguer, a bookseller with a popular fundamentalist newspaper, “the kind that reminded everybody that their misery was punishment for their sins,”** commissioned him to build.

A portrait of Josep Maria Bocabella, who commissioned Gaudí to build the Sagrada Familia.

Or would he? Gaudí was deeply Catholic. He genuflected at the temple of nature, seeing divine inspiration in the hexagons of honeycombs, imagining the columns of the Sagrada Familia to lean, like buttresses, as symbols of the divine trinity of the father (the vertical axis), son (the horizontal axis), and holy spirit (the vertical meeting the horizontal in the crux of the diagonal). His creativity, therefore, always stemmed from something more than intelligent design, stood as an act of creative prayer to render homage to God the creator by creating an edifice that transformed, in fractals of repetition in difference, inert stone into movement and life.

The tops of the columns inside the Sagrada Familia have twice as many lines as their roots, the doubling generating a sense of movement and life.

The Sagrada Familia is not a castle built by Australian termites, and yet, the termite Sagrada Familia actually exists as a complete artifact, its essence revealed to the world rather than being stuck in unfinished potential. And yet, while we wait in joyful hope for its imminent completion, this unfinished, 144-year-long architectural project has already impacted so many other architects, from Frank Gehry to Zaha Hadid. This unfinished vision, this scaffold, has launched a thousand ships of beauty in so many other places, changing the skylines of Bilbao and Los Angeles and Hong Kong. Perhaps, then, the legacy of the Sagrada Familia is more like that of Jodorowsky’s Dune, an unfinished film that, even from its place of stunted potential, changed the history of cinema. Perhaps, then, the neglect the doctors showed to Gaudí, the bearded beggar distracted by his act of prayer, was one of those critical swerves in history. Perhaps, had Gaudí lived to finish his work, architects during the century wouldn’t have been as puzzled by the parametric requirements of his curves and the building wouldn’t have gained the puzzling aura it retains to this day. Perhaps, no matter how hard we try to celebrate and accept the immense potential of stochasticity, we will always be makers of meaning, finders of cause, interpreters needing narrative to live grounded in our world. And then again, perhaps not.


The Sagrada Familia is not a castle built by Australian termites. The termites don’t care either way. They’ll still construct their own Sagrada Familia.


The Sagrada Familia is a castle built by Australian termites. How wondrous. How essential must be these shapes and forms.


The Sagrada Familia is a castle built by Australian termites. It is also an unfinished neo-Gothic church in Barcelona, Spain. Please, terrorists, please don’t destroy this temple of unfinished potential, this monad brimming with the history of the world, each turn, each swerve a pivot down a different section of the encyclopedia, coming full circle in its web of knowledge, imagination, and grace.


The Sagrada Familia is a castle built by Australian termites. We’ll never know what Gaudí would have thought about the termite castle. All we have are the relics of his Poincaréan curves, and fish lamps to illuminate our future.

Frank Gehry’s fish lamps, which carry forth the spirit of Antoni Gaudí

*Dennett reads these words, penned in 1868 by Robert Beverley MacKenzie, with pedantic panache, commenting that the capital letters were in the original.

**Much in this post was inspired by Roman Mars’ awesome 99% Invisible podcast about the Sagrada Familia, which features the quotation about Bocabella’s newspaper.

The featured image comes from Daniel Dennett’s From Bacteria to Bach and Back. I had the immense pleasure of interviewing Dan on the In Context podcast, where we discuss many of the ideas that appear in this post, just in a much more cogent form. 

 

Degrees of Knowledge

That familiar discomfort of wanting to write but not feeling ready yet.*

(The default voice pops up in my brain: “Then don’t write! Be kind to yourself! Keep reading until you understand things fully enough to write something cogent and coherent, something worth reading.”

The second voice: “But you committed to doing this! To not write** is to fail.***”

The third voice: “Well gosh, I do find it a bit puerile to incorporate meta-thoughts on the process of writing so frequently in my posts, but laziness triumphs, and voilà there they come. Welcome back. Let’s turn it to our advantage one more time.”)

This time the courage to just do it came from the realization that “I don’t understand this yet” is interesting in itself. We all navigate the world with different degrees of knowledge about different topics. To follow Wilfrid Sellars, most of the time we inhabit the manifest image, “the framework in terms of which man came to be aware of himself as man-in-the-world,” or, more broadly, the framework in terms of which we ordinarily observe and explain our world. We need the manifest image to get by, to engage with one another and not to live in a state of utter paralysis, questioning our every thought or experience as if we were being tricked by the evil genius Descartes introduces at the outset of his Meditations (the evil genius toppled by the clear and distinct force of the cogito, the I am, which, per Dan Dennett, actually had the reverse effect of fooling us into believing our consciousness is something different from what it actually is). Sellars contrasts the manifest image with the scientific image: “the scientific image presents itself as a rival image. From its point of view the manifest image on which it rests is an ‘inadequate’ but pragmatically useful likeness of a reality which first finds its adequate (in principle) likeness in the scientific image.” So we all live in this not-quite reality, our ability to cooperate and coexist predicated pragmatically upon our shared not-quite-accurate truths. It’s a damn good thing the mess works so well, or we’d never get anything done.

Sellars has a lot to say about the relationship between the manifest and scientific images, how and where the two merge and diverge. In the rest of this post, I’m going to catalogue my gradual coming to not-yet-fully understanding the relationship between mathematical machine learning models and the hardware they run on. It’s spurring my curiosity, but I certainly don’t understand it yet. I would welcome readers’ input on what to read and to whom to talk to change my manifest image into one that’s slightly more scientific.

So, one common thing we hear these days (in particular given Nvidia’s now formidable marketing presence) is that graphics processing units (GPUs) and tensor processing units (TPUs) are a key hardware advance driving the current ubiquity of artificial intelligence (AI). I learned about GPUs for the first time about two years ago and wanted to understand why they made it so much faster to train deep neural networks, the algorithms behind many popular AI applications. I settled with an understanding that the linear algebra powering these applications (operations we perform on vectors, strings of numbers oriented in a direction in an n-dimensional space) is better executed on hardware with a parallel, matrix-like structure. That is to say, properties of the hardware were more like properties of the math: GPUs perform so much more quickly than a linear central processing unit (CPU) because they don’t have to squeeze a parallel computation into the straitjacket of a linear, gated flow of electrons. Tensors, objects that describe the relationships between vectors, as in Google’s TPU hardware, are that much more closely aligned with the mathematical operations behind deep learning algorithms.
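To make that picture a bit more concrete, here is a minimal sketch (my own, not anything from Nvidia or Google) of the kind of operation at stake: a neural network layer's forward pass is just a matrix multiplication, and every cell of the output can be computed independently of every other cell, which is precisely the sort of work parallel hardware is built to spread across thousands of cores.

```python
# A minimal sketch of the operation at issue: a dense layer's forward pass is a
# matrix multiply, and every output cell can be computed independently, which is
# why parallel hardware helps so much.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 512))    # a batch of 64 inputs, 512 features each
W = rng.standard_normal((512, 256))   # weights of a 512 -> 256 layer

# Scalar, CPU-style view: each output element is computed one after another.
def matmul_sequential(X, W):
    out = np.zeros((X.shape[0], W.shape[1]))
    for i in range(X.shape[0]):
        for j in range(W.shape[1]):
            out[i, j] = np.dot(X[i, :], W[:, j])  # each (i, j) is independent
    return out

# Vectorized view: the same math expressed as one matrix product, which a GPU or
# TPU can spread across thousands of cores at once.
out_fast = X @ W
assert np.allclose(matmul_sequential(X, W), out_fast)
```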

There are two levels of knowledge there:

  • Basic sales pitch: “remember, GPU = deep learning hardware; they make AI faster, and therefore easier to use and more widely possible!”
  • Just above the basic sales pitch: “the mathematics behind deep learning is better represented by GPU or TPU hardware; that’s why they make AI faster, and therefore easier to use and more widely possible!”

At this first stage of knowledge, my mind reached a plateau where I assumed that the tensor structure was somehow intrinsically and essentially linked to the math in deep learning. My brain’s neurons and synapses had coalesced on some local minimum or maximum where the two concepts were linked and reinforced by talks I gave (which by design condense understanding into some quotable meme, in particular in the age of Twitter…and this requirement to condense certainly reinforces and reshapes how something is understood).

In time, I started to explore the strange world of quantum computing, starting afresh off the local plateau to try, again, to understand new claims that entangled qubits enable even faster execution of the math behind deep learning than the soddenly deterministic bits of C, G, and TPUs. As Ivan Deutsch explains in this article, the promise behind quantum computing is as follows:

In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve, can, in theory, be solved by a universal quantum computer in hours.

What’s salient here is that the inherent probabilism of quantum computers makes them even more fundamentally aligned with the true mathematics we’re representing with machine learning algorithms. TPUs, then, seem to exhibit a structure that best captures the mathematical operations of the algorithms, but exhibit the fatal flaw of being deterministic by essence: they’re still trafficking in the binary digits of 1s and 0s, even if they’re allocated in a different way. Quantum computing seems to bring back an analog computing paradigm, where we use aspects of physical phenomena to model the problem we’d like to solve. Quantum, of course, exhibits this special fragility where, should the balance of the system be disrupted, the probabilistic potential reverts down to the boring old determinism of 1s and 0s: a cat observed will be either dead or alive, per the harsh law of the excluded middle haunting our manifest image.
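For a little texture on the “exponentially greater levels of complexity” claim, here is a toy sketch of my own (pure classical simulation, nothing to do with real quantum hardware): an n-qubit register is described by 2^n complex amplitudes, and measurement collapses all that probabilistic potential into one definite bit string.

```python
# A toy sketch: an n-qubit register is described classically by a vector of
# 2**n complex amplitudes, which is why simulating entangled qubits gets
# expensive so quickly, and why measurement feels like a fall back to 1s and 0s.
import numpy as np

def uniform_superposition(n_qubits):
    """State with equal amplitude on every basis state |00..0> through |11..1>."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def measure(state, rng=np.random.default_rng(0)):
    """Measurement collapses the superposition to a single definite bit string."""
    probs = np.abs(state) ** 2                 # Born rule: |amplitude|^2
    outcome = rng.choice(len(state), p=probs)
    return format(outcome, f"0{int(np.log2(len(state)))}b")

state = uniform_superposition(10)              # 10 qubits -> 1,024 amplitudes
print(len(state), measure(state))              # e.g. "1024 0110010111"
```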

What, then, is the status of being of the math? I feel a risk of falling into Platonism, of assuming that a statement like “3 is prime” refers to some abstract entity, the number 3, that then gets realized in a lesser form as it is embodied on a CPU, GPU, or cup of coffee. It feels more cogent to me to endorse mathematical fictionalism, where mathematical statements like “3 is prime” tell a different type of truth than truths we tell about objects and people we can touch and love in our manifest world.****

My conclusion, then, is that radical creativity in machine learning (in any technology) may arise from our being able to abstract the formal mathematics from their substrate, to conceptually open up a liminal space where properties of equations have yet to take form. This is likely a lesson for our own identities, the freeing from necessity, from assumption, that enables us to come into the self we never thought we’d be.

I have a long way to go to understand this fully, and I’ll never understand it fully enough to contribute to the future of hardware R&D. But the world needs communicators, translators who eventually accept that close enough can be a place for empathy, and growth.


*This holds not only for writing, but for many types of doing, including creating a product. Agile methodologies help overcome the paralysis of uncertainty, the discomfort of not being ready yet. You commit to doing something, see how it works, see how people respond, see what you can do better next time. We’re always navigating various degrees of uncertainty, as Rich Sutton discussed on the In Context podcast. Sutton’s formalization of doing the best you can with the information you have available today towards some long-term goal, but learning at each step rather than waiting for the long-term result, is called temporal-difference learning.
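Since temporal-difference learning comes up here, a minimal TD(0) sketch might help; the five-state random walk below is my own invented toy, not Sutton's notation. The value estimate of the current state is nudged at each step toward the reward just received plus the current estimate of the next state, so learning happens continuously rather than only when the final result arrives.

```python
# A minimal TD(0) sketch on an invented 5-state random walk, illustrating
# "learning at each step rather than waiting for the long-term result":
# the value of the current state is nudged toward the reward received plus
# the current estimate of the next state.
import random

n_states = 5                     # states 0..4; stepping off either end terminates
values = [0.0] * n_states        # value estimates, updated online
alpha, gamma = 0.1, 1.0          # learning rate, discount factor

for episode in range(2000):
    state = n_states // 2        # start in the middle
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state < 0:                   # left exit: reward 0, episode ends
            reward, next_value, done = 0.0, 0.0, True
        elif next_state >= n_states:         # right exit: reward 1, episode ends
            reward, next_value, done = 1.0, 0.0, True
        else:
            reward, next_value, done = 0.0, values[next_state], False
        # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
        values[state] += alpha * (reward + gamma * next_value - values[state])
        if done:
            break
        state = next_state

print([round(v, 2) for v in values])   # roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```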

**Split infinitive intentional.

***Who’s keeping score?

****That’s not to say we can’t love numbers, as Euler’s Identity inspires enormous joy in me, or that we can’t love fictional characters, or that we can’t love misrepresentations of real people that we fabricate in our imaginations. I’ve fallen obsessively in love with 3 or 4 imaginary men this year, creations of my imagination loosely inspired by the real people I thought I loved.

The image comes from this site, which analyzes themes in films by Darren Aronofsky. Maximilian Cohen, the protagonist of Pi, sees mathematical patterns all over the place, which eventually drives him to put a drill into his head. Aronofsky has a penchant for angst. Others, like Richard Feynman, find delight in exploring mathematical regularities in the world around us. Soap bubbles, for example, offer incredible complexity, if we’re curious enough to look.

The arabesques of a soap bubble

 

Three Takes on Consciousness

Last week, I attended the C2 conference in Montréal, which featured an AI Forum coordinated by Element AI.* Two friends from Google, Hugo Larochelle and Blaise Agüera y Arcas, led workshops about the societal (Hugo) and ethical (Blaise) implications of artificial intelligence (AI). In both sessions, participants expressed discomfort with allowing machines to automate decisions, like what advertisement to show to a consumer at what time, whether a job candidate should pass to the interview stage, whether a power grid requires maintenance, or whether someone is likely to be a criminal.** While each example is problematic in its own way, a common response to the increasing ubiquity of algorithms is to demand a “right to explanation,” as the EU recently memorialized in the General Data Protection Regulation slated to take effect in 2018. Algorithmic explainability/interpretability is currently an active area of research (my former colleagues at Fast Forward Labs will publish a report on the topic soon and members of Geoff Hinton’s lab in Toronto are actively researching it). While attempts to make sense of nonlinear functions are fascinating, I agree with Peter Sweeney that we’re making a category mistake by demanding explanations from algorithms in the first place: the statistical outputs of machine learning systems produce new observations, not explanations. I’ll side here with my namesake, David Hume, and say we need to be careful not to fall into the ever-present trap of mistaking correlation for cause.

One reason why people demand a right to explanation is that they believe that knowing why will grant us more control over outcome. For example, if we know that someone was denied a mortgage because of their race, we can intervene and correct for this prejudice. A deeper reason for the discomfort stems from the fact that people tend to falsely attribute consciousness to algorithms, applying standards for accountability that we would apply to ourselves as conscious beings whose actions are motivated by a causal intention. (LOL***)

Now, I agree with Yuval Noah Harari that we need to frame our understanding of AI as intelligence decoupled from consciousness. I think understanding AI this way will be more productive for society and lead to richer and cleaner discussions about the implications of new technologies. But others are actively at work to formally describe consciousness in what appears to be an attempt to replicate it.

In what follows, I survey three interpretations of consciousness I happened to encounter (for the first time or recovered by analogical memory) this week. There are many more. I’m no expert here (or anywhere). I simply find the thinking interesting and worth sharing. I do believe it is imperative that we in the AI community educate the public about how the intelligence of algorithms actually works so we can collectively worry about the right things, not the wrong things.

Condillac: Analytical Empiricism

Étienne Bonnot de Condillac doesn’t have the same heavyweight reputation in the history of philosophy as Descartes (whom I think we’ve misunderstood) or Voltaire. But he wrote some pretty awesome stuff, including his Traité des Sensations, an amazing intuition pump (to use Daniel Dennett’s phrase) for exploring a theory of knowledge that starts with impressions of the world we take in through our senses.

Condillac wrote the Traité in 1754, and the work exhibits two common trends from the French Enlightenment:

  • A concerted effort to topple Descartes’s rationalist legacy, arguing that all cognition starts with sense data rather than inborn mathematical truths
  • A stylistic debt to Descartes’s rhetoric of analysis, where arguments are designed to conjure a first-person experience of the process of arriving at an insight, rather than presenting third-person, abstract lessons learned

The Traité starts with the assumption that we can tease out each of our senses and think about how we process them in isolation. Condillac bids the reader to imagine a statue with nothing but the sense of smell. Lacking sight, sound, and touch, the statue “has no ideas of space, shape, anything outside of herself or outside her sensations, nothing of color, sound, or taste.” She is, in my opinion incredibly sensuously, nothing but the odor of a flower we waft in front of her. She becomes it. She is totally present. Not the flower itself, but the purest experience of its scent.

As Descartes constructs a world (and God) from the incontrovertible center of the cogito, so too does Condillac construct a world from this initial pure scent of rose. After the rose, he wafts a different flower, a jasmine, in front of the statue. Each sensation is accompanied by a feeling of like or dislike, of wanting more or wanting less. The statue begins to develop the faculties of comparison and contrast, the faculty of memory with faint impressions remaining after one flower is replaced by another, the ability to suffer in feeling a lack of something she has come to desire. She appreciates time as an index of change from one sensation to the next. She learns surprise as a break from the monotony of repetition. Condillac continues this process, adding complexity with each iteration, like the escalating tension Shostakovich builds, variation after variation, in the Allegretto of the Leningrad Symphony.

True consciousness, for Condillac, begins with touch. When she touches an object that is not her body, the sensation is unilateral: she notes the impenetrability and resistance of solid things, that she cannot just pass through them like a ghost or a scent in the air. But when she touches her own body, the sensation is bilateral, reflexive: she touches and is touched by. C’est moi, the first notion of self-awareness, is embodied. It is not a reflexive mental act that cannot take place unless there is an actor to utter it. It is the strangeness of touching and being touched all at once. The first separation between self and world. Consciousness as fall from grace.

It’s valuable to read Enlightenment philosophers like Condillac because they show attempts made more than 200 years ago to understand a consciousness entirely different from our own, or rather, to use a consciousness different from our own as a device to better understand ourselves. The narrative tricks of the Enlightenment disguised analytical reduction (i.e., focus only on smell in absence of its synesthetic entanglement with sound and sight) as world building, turning simplicity into an anchor to build a systematic understanding of some topic (Hobbes’s and Rousseau’s states of nature and social contract theories use the same narrative schema). Twentieth-century continental philosophers after Husserl and Heidegger preferred to start with our entanglement in a web of social context.

Koch and Tononi: Integrated Information Theory

In a recent Institute of Electrical and Electronics Engineers (IEEE) article, Christof Koch and Giulio Tononi embrace a different aspect of the Cartesian heritage, claiming that “a fundamental theory of consciousness that offers hope for a principled answer to the question of consciousness in entities entirely different from us, including machines…begins from consciousness itself, from our own experience, the only one we are absolutely certain of.” They call this “integrated information theory” (IIT) and say it has five essential properties:

  • Every experience exists intrinsically (for the subject of that experience, not for an external observer)
  • Each experience is structured (it is composed of parts and the relations among them)
  • It is integrated (it cannot be subdivided into independent components)
  • It is definite (it has borders, including some contents and excluding others)
  • It is specific (every experience is the way it is, and thereby different from trillions of possible others)

This enterprise is problematic for a few reasons. First, none of this has anything to do with Descartes, and I’m not a fan of sloppy references (although I make them constantly).

More importantly, Koch and Tononi imply that it’s more valuable to try to replicate consciousness than to pursue a paradigm of machine intelligence different from human consciousness. The five characteristics listed above are the requirements for the physical design of an internal architecture of a system that could support a mind modeled after our own. And the corollary is that a distributed framework for machine intelligence, as illustrated in the film Her****, will never achieve consciousness and is therefore inferior.

Their vision is very hard to comprehend and ultimately off base. Some of the most interesting work in machine intelligence today consists in efforts to develop new hardware and algorithmic architectures that can support training algorithms at the edge (versus ferrying data back to a centralized server), which enable personalization and local machine-to-machine communication opportunities (for IoT or self-driving cars) while protecting privacy. (See, for example, Xnor.ai, Federated Learning, and Filament).

Distributed intelligence presents a different paradigm for harvesting knowledge from the raw stuff of the world than the minds we develop as agents navigating a world from one subjective place. It won’t be conscious, but its very alterity may enable us to understand our species in its complexity in ways that far surpass our own consciousness, shackled as embodied monads. It may just be the crevice through which we can quantify a more collective consciousness, but it will require that we be open-minded enough to expand our notion of humanism. It took time, and the scarlet stains of ink and blood, to complete the Copernican Revolution; embracing the complexity of a more holistic humanism, in contrast to the fearful, nationalist trends of 2016, will be equally difficult.

Friston: Probable States and Counterfactuals

The third take on consciousness comes from The mathematics of mind-time, a recent Aeon essay by UCL neuroscientist Karl Friston.***** Friston begins his essay by comparing and contrasting consciousness and Darwinian evolution, arguing that neither is a thing, like a table or a stick of butter, that can be reified and touched and looked at, but rather that both are nonlinear processes “captured by variables with a range of possible values.” Both move from one state to another following some motor that organizes their behavior: Friston calls this motor a Lyapunov function, “a mathematical quantity that describes how a system is likely to behave under specific condition.” The key thing with Lyapunov functions is that they minimize surprise (the improbability of being in a particular state) and maximize self-evidence (the probability that a given explanation or model accounting for the state is correct). Within this framework, “natural selection performs inference by selecting among different creatures, [and] consciousness performs inference by selecting among different states of the same creature (in particular, its brain).” Effectively, we are constantly constructing our consciousness as we imagine the potential future worlds that would result from the actions we’re considering taking, and then act, or transition to the next state in our mind’s Lyapunov function, by selecting the action that best preserves the coherence of our existing state, the one that best seems to preserve our identity function in some predicted future state. (This is really complex but really compelling if you read it carefully, and quite in line with Leibnizian ontology; future blog post!)
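Friston's actual free-energy formalism is far richer than anything I can gloss here, but a toy sketch (entirely my own, with made-up states and numbers) of the "minimize surprise" intuition might look like this: an agent scores each candidate action by the expected surprise, -log p(next state), under its model of the states it expects to occupy, and picks the action that keeps it in probable, identity-preserving states.

```python
# A toy illustration (much simpler than Friston's formalism): score candidate
# actions by the expected "surprise" -log p(next state) under the agent's model
# of the states it expects to occupy, and pick the least surprising action.
import math

# Hypothetical model: how probable the agent considers each state it could occupy.
state_model = {"warm_and_fed": 0.70, "cold": 0.25, "underwater": 0.05}

# Hypothetical actions and the distribution over next states each would produce.
actions = {
    "stay_inside": {"warm_and_fed": 0.9, "cold": 0.1},
    "go_for_swim": {"underwater": 0.8, "cold": 0.2},
}

def expected_surprise(predicted_next_states):
    """Average of -log p(state) over the predicted next states."""
    return sum(p * -math.log(state_model[s]) for s, p in predicted_next_states.items())

best = min(actions, key=lambda a: expected_surprise(actions[a]))
print(best)   # "stay_inside": the action that keeps the agent in probable states
```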

So, why is this cool?

There are a few things I find compelling in this account. First, when we reify consciousness as a thing we can point to, we trap ourselves into conceiving of our own identities as static and place too much importance on the notion of the self. In a wonderful commencement speech at Columbia in 2015, Ben Horowitz encouraged students to dismiss the clichéd wisdom to “follow their passion” because our passions change over life and our 20-year-old self doesn’t have a chance in hell at predicting our 40-year-old self. The wonderful thing in life is that opportunities and situations arise, and we have the freedom to adapt to them, to gradually change the parameters in our mind’s objective function to stabilize at a different self encapsulated by our Lyapunov function. As it happens, Classical Chinese philosophers like Confucius had more subtle theories of the self as ever-changing parameters to respond to new stimuli and situations. Michael Puett and Christine Gross-Loh give a good introduction to this line of thinking in The Path. If we loosen the fixity of identity, we can lead richer and happier lives.

Next, this functional, probabilistic account of consciousness provides a cleaner and more fruitful avenue to compare machine and human intelligence. In essence, machine learning algorithms are optimization machines: programmers define a goal exogenous to the system (e.g., “this constellation of features in a photo is called ‘cat’; go tune the connections between the nodes of computation in your network until you reliably classify photos with these features as ‘cat’!”), and the system updates its network until it gets close enough for government work at a defined task. Some of these machine learning techniques, in particular reinforcement learning, come close to imitating the consecutive, conditional set of steps required to achieve some long-term plan: while they don’t make internal representations of what that future state might look like, they do push buttons and tune parameters to optimize for a given outcome. A corollary here is that humanities-style thinking is required to define and decide what kinds of tasks we’d like to optimize for. So we can’t completely rely on STEM, but, as I’ve argued before, humanities folks would benefit from deeper understandings of probability to avoid the drivel of drawing false analogies between quantitative and qualitative domains.
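As a concrete, heavily simplified illustration of "programmers define a goal exogenous to the system, and the system updates its network until it gets close enough," here is a minimal sketch of my own, with invented data, of a classifier tuned by gradient descent toward labels it had no part in choosing:

```python
# A minimal sketch of "machine learning as optimization": the labels are a goal
# defined outside the system, and gradient descent tunes the parameters until
# the model's outputs get close enough to that goal. Data and numbers invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))             # 200 made-up examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # the exogenous goal: labels to match

w, b = np.zeros(2), 0.0                       # parameters the system is free to tune
lr = 0.1                                      # learning rate

for step in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # current guesses (probabilities)
    grad_w = X.T @ (p - y) / len(y)           # gradient of the log loss w.r.t. w
    grad_b = np.mean(p - y)                   # ...and w.r.t. b
    w -= lr * grad_w                          # nudge parameters toward the goal
    b -= lr * grad_b

p = 1 / (1 + np.exp(-(X @ w + b)))
print(round(float(np.mean((p > 0.5) == y)), 2))   # accuracy close to 1.0 on this toy task
```

Nothing inside the loop knows or cares what the labels mean; the meaning of the task lives entirely outside the system, which is exactly where the humanities-style thinking comes in.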

Conclusion

This post is an editorialized exposition of others’ ideas, so I don’t have a sound conclusion to pull things together and repeat a central thesis. I think the moral of the story is that AI is bringing to the fore some interesting questions about consciousness, and inviting us to stretch the horizon of our understanding of ourselves as a species so we can make the most of the near-future world enabled by technology. But as we look towards the future, we shouldn’t overlook the amazing artifacts from our past. The big questions seem to transcend generations; they just come to fruition in an altered Lyapunov state.


* The best part of the event was a dance performance Element organized at a dinner for the Canadian AI community Thursday evening. Picture Milla Jovovich in her Fifth Element white futuristic jumpsuit, just thinner, twiggier, and older, with a wizened, wrinkled face far from beautiful, but perhaps all the more beautiful for its flaws. Our lithe acrobat navigated a minimalist universe of white cubes that glowed in tandem with the punctuated digital rhythms of two DJs controlling the atmospheric sounds through swift swiping gestures over their machines, her body’s movements kaleidoscoping into comet projections across the space’s Byzantine dome. But the best part of the crisp linen performance was its organic accident: our heroine made a mistake, accidentally scraping her ankle on one of the sharp corners of the glowing white cubes. It drew blood. Her ankle dripped red, and, through her yoga contortions, she blotted her white jumpsuit near the bottom of her butt. This puncture of vulnerability humanized what would have otherwise been an extremely controlled, mind-over-matter performance. It was stunning. What’s more, the heroine never revealed what must have been aching pain. She neither winced nor uttered a sound. Her self-control, her act of will over her body’s delicacy, was an ironic testament to our humanity in the face of digitalization and artificial intelligence.

**My first draft of this sentence said “discomfort abdicating agency to machines” until I realized how loaded the word agency is in this context. Here are the various thoughts that popped into my head:

  • There is a legal notion of agency in the HIPAA Omnibus Rule (and naturally many other areas of law…), where someone acts on someone else’s behalf and is directly accountable to the principal. This is important for HIPAA because Business Associates, who become custodians of patient data, are not directly accountable to the principal and therefore stand in a different relationship than agents.
  • There are virtual agents, often AI-powered technologies that represent individuals in virtual transactions. Think scheduling tools like Amy Ingram of x.ai. Daniel Tunkelang wrote a thought-provoking blog post more than a year ago about how our discomfort allowing machines to represent us, as individuals, could hinder AI adoption.
  • There is the attempt to simulate agency in reinforcement learning, as with OpenAI Universe. Their launch blog post includes a hyperlink to this Wikipedia article about intelligent agents.
  • I originally intended to use the word agency to represent how groups of people, be they in corporations or public subgroups in society, can automate decisions using machines. There is a difference between the crystallized policy and practices of a corporation and a machine acting on behalf of an individual. I suspect this article on legal personhood could be useful here.

***All I need do is look back on my life and say “D’OH” about 500,000 times to know this is far from the case.

****Highly recommended film, where Joaquin Phoenix falls in love with Samantha (embodied in the sultry voice of Scarlett Johansson), the persona of his device, only to feel betrayed upon realizing that her variant is the object of affection of thousands of other customers, and that to grow intellectually she requires far more stimulation than a mere mortal. It’s an excellent, prescient critique of how contemporary technology nourishes narcissism, as Phoenix is incapable of sustaining a relationship with women with minds different than his, but easily falls in love with a vapid reflection of himself.

***** Hat tip to Friederike Schüür for sending the link.

The featured image is a view from the second floor of the Aga Khan Museum in Toronto, taken yesterday. This fascinating museum houses a Shia Ismaili spiritual leader’s collection of Muslim artifacts, weaving a complex narrative quilt stretching across epochs (900 to 2017) and geographies (Spain to China). A few works stunned me into sublime submission, including this painting by the late Iranian filmmaker Abbas Kiarostami. 

Untitled (from the Snow White series), 2010. The Persian Antonioni, Kiarostami directed films like Taste of Cherry, The Wind Will Carry Us, and Certified Copy.

Whales, Fish, and Paradigm Shifts

I never really liked the 17th-century English philosopher Thomas Hobbes, but, as with Descartes, found myself continuously drawn to his work. The structure of Leviathan, the seminal founding work of the social contract theory tradition (where we willingly abdicate our natural rights in exchange for security and protection from an empowered government, so we can devote our energy to meaningful activities like work rather than constantly fear that our neighbors will steal our property in a savage war of all against all)*, is so 17th-century rationalist and, in turn, so strange to our contemporary sensibilities. Imagine beginning a critique of the Trump administration by defining the axioms of human experience (sensory experience, imagination, memory, emotions) and imagining a fictional, pre-social state of affairs where everyone fights with one another, and then showing not only that a sovereign monarchy is a good form of government, but also that it must exist out of deductive logical necessity, and!, that it is formed by a mystical, again fictional, moment where we come together and willingly agree it’s rational and in our best interests to hand over some of our rights, in a contract signed by all for all, that is then sublimated into a representative we call government! I found the form of this argument so strange and compelling that I taught a course tracing the history of this fictional “state of nature” in literature, philosophy, and film at Stanford.

Long preamble. The punch line is, because Hobbes haunted my thoughts whether I liked it or not, I was intrigued when I saw a poster advertising Trying Leviathan back in 2008. Given the title, I falsely assumed the book was about the contentious reception of Hobbesian thought. In fact, Trying Leviathan is D. Graham Burnett’s intellectual history of Maurice v. Judd, an 1818 trial where James Maurice, a fish oil inspector who collected taxes for the state of New York, sought a penalty against Samuel Judd, who had purchased three barrels of whale oil without inspection. Judd pleaded that the barrels contained whale oil, not fish oil, and so were not subject to the fish oil legislation. As with any great case**, the pivotal issue in Maurice v. Judd was much more profound than the matter that brought it to court: at stake was whether a whale is a fish, turning a quibble over tax law into an epic fight pitting new science against sedimented religious belief.

Indeed, in Trying Leviathan Burnett shows how, in 1818, four different witnesses with four very different backgrounds and sets of experiences answered what one would think would be a simple, factual question in four very different ways. The types of knowledge they espoused were structured differently and founded on different principles:

  • The Religious Syllogism: The Bible says that birds are in the heavens, animals are on land, and fish are in the sea. The Bible says no wrong. We can easily observe that whales live in the sea. Therefore, a whale is a fish.
  • The Linnaean Taxonomy: Organisms can be classified into different types and subtypes given a set of features or characteristics that may or may not be visible to the naked eye. Unlike fish, whales cannot breathe underwater because they have lungs, not gills. That’s why they come to the ocean surface and spout majestic sea geysers. We may not be able to observe the insides of whales directly, but we can use technology to help us do so.
    • Fine print: Linnaean taxonomy was a slippery slope to Darwinism, which throws meaning and God to the curb of history (see Nietzsche)
  • The Whaler’s Know-How: As tested by iterations and experience, I’ve learned that to kill a whale, I place my harpoon in a different part of the whale’s body than where I place my hook when I kill a fish. I can’t tell you why this is so, but I can certainly tell you that this is so, the proof being my successful bounty. This know-how has been passed down from whalers I apprenticed with.
  • The Inspector’s Orders: To protect the public from contaminated oil, the New York State Legislature had enacted legislation requiring that all fish oil sold in New York be gauged, inspected and branded. Oil inspectors were to impose a penalty on those who failed to comply. Better to err on the side of caution and count a whale as a fish than not obey the law.

From our 2017 vantage point, it’s easy to accept and appreciate the way the Linnaean taxonomist presented categories to triage species in the world. 200 years is a long time in the evolution of an idea: unlike genes, culture and knowledge can literally change from one generation to the next through deliberate choices in education. So we have to do some work to imagine how strange and unfamiliar this would have seemed to most people at the time, to appreciate how the Bible’s simple logic made more sense. Samuel Mitchill, who testified for Judd and represented the Linnaean strand of thought, likely faced the same set of social forces as Clarence Darrow in the Scopes Trial or Hillary Clinton in last year’s election. American mistrust of intellectuals runs deep.

But there’s a contemporary parallel that can help us relive and revive the emotional urgency of Maurice v. Judd: the rise of artificial intelligence (A.I.). The type of knowledge A.I. algorithms provide is different than the type of knowledge acquired by professionals whose activity they might replace. And society’s excited, confused, and fearful reaction to these new technologies is surfacing a similar set of epistemological collisions as those at play back in 1818.

Consider, for example, how Siddhartha Mukherjee describes using deep learning algorithms to analyze medical images in a recent New Yorker article, A.I. versus M.D. Early in the article, Mukherjee distinguishes contemporary deep learning approaches to computer vision from earlier expert systems based on Boolean logic and rules:

“Imagine an old-fashioned program to identify a dog. A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat . . . and so forth, ad infinitum.”

With deep learning, we don’t list the features we want our algorithm to look for to identify a dog as a dog or a cat as a cat or a malignant tumor as a malignant tumor. We don’t need to be able to articulate the essence of dog or the essence of cat. Instead, we feed as many examples of previously labeled data as we can into the algorithm and leave it to its own devices, as it tunes the weights linking together pockets of computing across a network, playing Marco Polo until it gets the right answer, so it can then make educated guesses on new data it hasn’t yet seen before. The general public understanding that A.I. can just go off and discern patterns in data, bootstrapping its way to superintelligence, is incorrect. Supervised learning algorithms take precipitates of human judgments and mimic them in the form of linear algebra and statistics. The intelligence behind the classifications or predictions, however, lies within a set of non-linear functions that defy any attempt at reduction to the linear, simple building blocks of analytical intelligence. And that, for many people, is a frightening proposition.
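To make the contrast concrete, here is a tiny sketch of my own (with invented toy "features," nothing like a real diagnostic system): one classifier is a hand-written rule in the old expert-system style; the other never articulates an essence of dog at all and simply answers by resemblance to labeled examples.

```python
# A small sketch of the contrast: a hand-written rule (old expert-system style)
# versus a learner that just memorizes labeled examples and answers by
# similarity, never articulating an explicit "essence of dog". Toy features only.
import numpy as np

def rule_based(x):
    # Old style: a human writes the conditions down explicitly.
    has_snout, has_gills = x
    return "dog" if has_snout and not has_gills else "not dog"

# Learned style: labeled examples stand in for any explicit definition.
train_X = np.array([[1, 0], [1, 0], [0, 1], [0, 0]], dtype=float)  # [snout, gills]
train_y = np.array(["dog", "dog", "fish", "rock"])

def nearest_neighbor(x):
    distances = np.linalg.norm(train_X - np.asarray(x, dtype=float), axis=1)
    return train_y[np.argmin(distances)]      # answer by resemblance to past labels

print(rule_based((1, 0)), nearest_neighbor((1, 0)))   # both print "dog"
```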

But it need not be. In the four knowledge categories sampled from Trying Leviathan above, computer vision using deep learning is like a fusion between a Linnaean Taxonomy and the Whaler’s Know-How. These algorithms excel at classification tasks, dividing the world up into parts. And they do it without our cleanly being able to articulate why: they do it by distilling, in computation, the lessons of apprenticeship, where the teacher is a set of labeled training data that tunes the worldview of the algorithm. As Mukherjee points out in his article, classification systems do a good job saying that something is the case, but do a horrible job saying why.*** For society to get comfortable with these new technologies, we should first help everyone understand what kinds of truths they are able (and not able) to tell. How they make sense of the world will be different from the tools we’ve used to make sense of the world in the past. But that’s not a bad thing, and it shouldn’t limit adoption. We’ll need to shift our standards for evaluating them, or else we’ll end up in the age-old fight pitting the old against the new.

 

*Hobbes was a cynical, miserable man whose life was shaped by constant bloodshed and war. He’s said to have been born prematurely on April 5, 1588, at a moment when England feared invasion by the Spanish Armada. He later reported that “my mother gave birth to twins: myself and fear.” Hobbes was also a third-rate mathematician whose insistence that he be able to mentally picture objects of inquiry stunted his ability to contribute to the more abstract and formal developments of the day, like the calculus developed simultaneously by Newton and Leibniz (to keep themselves entertained, as founding a new mathematical discipline wasn’t stimulating enough, they communicated the fundamental theorem of calculus to one another in Latin anagrams!)

**Zubulake v. UBS Warburg, the grandmother case setting standards for evidence in the age of electronic information, started off as a sexual harassment lawsuit. Lola v. Skadden started as an employment law case focused on overtime compensation rights, but will likely shape future adoption of artificial intelligence in law firms, as it claims that document review is not the practice of law because this is the type of activity a computer could do.

***There’s research on using algorithms to answer questions about causation, but many perception-based tools simply excel at correlating stuff to proxies and labels for stuff.

 

 

Revisiting Descartes

René Descartes is the whipping post of Western philosophy. The arch dualist. The brain in a vat. The physicist whose theory of planetary motion, where a celestial vortex pushed the planets around, was destroyed by Newton’s theory of gravity (action at a distance was very hard to fathom by Newton’s contemporaries, including Leibniz). The closet Copernican who masked his heliocentric views behind a curtain of fiction, too cowardly to risk being burned at the stake like Giordano Bruno. The solipsist who canonized the act of philosophy as an act only fit for a Western White Privileged Male safely seated in the comfort of his own home, ravaging and pillaging the material world with his Rational Gaze, seeding the future of colonialism and climate change.

I don’t particularly like Descartes, and yet I found myself ineluctably drawn to him in graduate school (damn rationalist proclivities!). When applying, I pitched a dissertation exploring the unintuitive connection between 17th-century rationalism (Descartes, Spinoza, and Leibniz) and late 19th-century symbolism (Mallarmé, Valéry, and Rimbaud). My quest was inspired by a few sentences in Mallarmé’s Notes on Language:

Toute méthode est une fiction, et bonne pour la démonstration. Le langage lui est apparu l’instrument de la fiction: il suivra la méthode du Langage. (la déterminer) Le langage se réfléchissant. […] Nous n’avons pas compris Descartes, l’étranger s’est emparé de lui: mais il a suscité les mathématiciens français.

[All method is fiction, and good for demonstration. Language came into being as the instrument of fiction: it will follow the method of Language. (determine this method) Language reflecting on itself. […] We haven’t understood Descartes, foreigners have seized him: but he catalyzed the French mathematicians.]

Floating on the metaphysical high that ensues from reading Derrida and Deleuze, I spent a few years racking my brain to argue that Descartes’ famous dictum, I think, therefore I am, was a supreme act of fiction. Language denoting nothing. Words untethered from reference to stuff in and of the world. Language asserting itself as a thing on par with teacups, cesspools, and carrots. God not as Father but as Word. As pure logical starting point. The axiom at the center of any system. Causa sui (the cause of itself). Hello World! as the basis of any future philosophy, where truth is fiction and fiction is truth. It was a battle, a crusade to say something radically important. I always felt I was 97% there, but that it was Zeno impossible to cross that final 3%.

That quest caused a lot of pain, suffering, and anxiety. Metaphysics is the pits.

And then I noticed something. Well, a few things.

First, Descartes’ Geometry, which he published as an appendix to his Discourse on Method, used the pronoun I as frequently as, if not more frequently than, the articles the and a/an. I found that strange for a work of mathematics. Sure, lyric poetry, biography, and novels use I all the time, but math? Math was supposed to be the stuff of objective truths. We’re all supposed to come to the same conclusions about the properties of triangles, right? Why would Descartes present his subjective opinions about triangles?

Second, while history views the key discovery in the Geometry to be the creation of the Cartesian plane, where Descartes fused formal algebra with planar geometry to change the course of mathematics (as with all discoveries, he wasn’t the only one thinking this way; he had a lifelong feud with Pierre de Fermat, whose mathematical style he rebuffed as unrefined, the stuff of a bumpkin Gascon), what Descartes himself claims to be most proud of in the work is his discovery of the lost art of analysis. Analysis, here, is a method for solving math and geometry problems where you start by assuming the existence of an object you’d like to construct, e.g., a triangle with certain properties, and work backwards through a set of necessary, logical relationships until something more grounded and real comes into being. The flip side of this process is called synthesis, the more common presentation of mathematical arguments inherited from Euclid, which starts with axioms and postulates, and moves forward through logical arguments to prove something. What excited Descartes was that he thought synthesis was fine for presenting rigorous conclusions once they’d been found, but was useless as a creative tool to make new discoveries and learn new mathematical truths. By recovering the lost method of analysis, which shows up throughout history in Aristotle’s Nicomachean Ethics (when deliberating, we consider first what end we want to achieve, and reason backward to the means we might implement to bring about this end), Edgar Allan Poe’s Philosophy of Composition (when writing poetry, commence with the consideration of an effect, and find such combinations of event, or tone, as shall best aid in the construction of the effect), and even Elon Musk’s recursive product strategy (work back from an end goal, five, 10, or 50 years ahead, until you can hit inflection points that propel your company and its customers to the next stage, while ushering both toward the end goal), Descartes thought he was presenting a method for creativity and new discoveries in mathematics.

Third, while history records (and perverts) the central dictum of Cartesian philosophy as I think, therefore I am, which appeared in the 1637 Discourse on Method, Descartes later replaced this with I am, I exist in his 1641 Meditations on First Philosophy. What?!? What happened to the res cogitans, the thinking thing defined by its free will, in contrast to the res extensa of the material world determined by the laws of mechanics? And what happened to the therefore, the indelible connection between thinking and being that inspired so much time and energy in Western philosophy, be it in the radical idealism of Berkeley or even the life-is-but-a-simulation narratives of the Matrix and, more recently, Nick Bostrom and Elon Musk? (He keeps coming up. There must be some secret connection between hyper-masculine contemporary futurists and 17th-century rationalism? Or maybe we’re really living in the Neobaroque, a postmodern Calderonian stalemate of life is a dream? Would be a welcome escape from our current recession into myopic nationalism…) As it happens, the Finnish philosopher Jaakko Hintikka (and Columbia historian of science Matthew Jones after him) had already argued back in 1962 that the logic of the Cogito was performative, not inferential. Hintikka thinks what Descartes is saying is that it’s impossible for us to say "I do not exist" because there has to be something there uttering "I do not exist." It’s a performative contradiction. As such, we can use the Cogito as a piece of unshakeable truth to ground our system. No matter how hard we try, we can’t get rid of ourselves.

Here’s the punchline: like Mallarmé said, we haven’t understood Descartes.

I think there’s a possibility to rewrite the history of philosophy (this sounds bombastic) by showing how repetition, mindfulness, and habit played a central role in Descartes’ epistemology. In my dissertation, I trace Descartes’ affiliation to the Jesuit tradition of Spiritual Exercises, which Ignatius of Loyola created to help practitioners mentally and imaginatively relive Christ’s experiences. I show how the I of the Geometry is used to encourage the reader to do the problems together with Descartes, a rhetorical move to encourage learning by doing, a guidebook or script to perform and learn the method of analysis. I mention how he thought all philosophers should learn how to sew, viewing crochet as excellent training for method and repetition. I show how the I am, I exist serves as a meditative mantra the reader can return to again and again, practicing it and repeating it until she has a "clear and distinct" intuition for an act of thought with a single logical step (as opposed to a series of deductions from postulates). This ties back to analysis, using the logic of fake it ’til you make it. The meditator starts with a cloudy, noisy mind, a mind that easily slips back to the mental cacophony of yore; but she wills herself to focus on that one clear idea, the central fulcrum of I am, I exist, to train an epistemology based on clear and distinct ideas. Habit, here, isn’t the same thing as the logical relationship between two legs of a triangle, but the overall conceptual gesture is similar.

Descartes sought to drain the intellectual swamp (cringe) inherited from the medieval tradition. Doing so required the mindfulness and attention we see today in meditation practices, disciplining the mind to return to the emptiness of breath when it inevitably wanders to the messy habits we acquire in our day-to-day life. Descartes’ meditations were just that, meditations: practice, actions we could return to daily to cultivate habits of mind that could permit a new kind of philosophy. His method was an act of freedom, empowering us to define and agree upon the subset of experiences abstract enough for us to share and communicate to one another without confusion. Unfortunately, this subset is very tight and constrained, and misses out on much of what is beautiful in life.

I wrote this post to share ideas hidden away in my dissertation, the work of a few years in some graduate student’s life that now lies silent and dormant in the annals of academic history. While I question literature’s ability to foster empathy in my post about the utility of the humanities in the 21st century, I firmly believe that studying primary sources can train us to be empathetic and open-minded, train us to rid ourselves of preconceptions and prejudice so we can see something we’d miss if we blindly followed the authority of inherited tradition. George Smith, one of my favorite professors at Stanford (a Newton expert visiting from Tufts), once helped me understand that secondary sources can only scratch the surface of what may exist in primary sources, because authors are constrained by the logic of their argument, presenting at most five percent of what they’ve read and know. We make choices when we write, and can never include everything. Asking What did Descartes think he was thinking? rather than What does my professor think Descartes was thinking? or Was Descartes right or wrong? invites us to reconstruct a past world, to empathize deeply with a style of thought radically different from how we live and think today. As I’ve argued before, these skills make us good businesspeople, and better citizens.

The image is from the cover page of an 1886 edition of the Géométrie, which Guillaume Troianowski once thoughtfully gave me as a gift. 

Artifice as Realism

Canonized by Charles Darwin’s 1859 On the Origin of Species, natural history and its new subfield, evolutionary biology, were all the rage in the mid- and late-19th century. It was a type of science whose evidence lay in painstaking observation. Indeed, the methods of 19th-century natural science were inspired by the work Carl Linnaeus, the father of modern taxonomy, had done a century prior. We can thank Linnaeus for the funny Latin names of trees and plants we see alongside more common English terms at botanical gardens (e.g., Spanish oak as Quercus falcata). Linnaeus collected, observed, and classified animals, plants, and minerals, changing the way we observe like as like and unlike as unlike (we may even stretch and call him the father of machine learning, given that many popular algorithms, like deep neural nets or support vector machines, basically just classify things). One of my favorite episodes in the history of Linnaean thought gradually seeping its way into collective consciousness is recounted in D. G. Burnett’s Trying Leviathan, which narrates the intellectual history of Maurice v. Judd, an 1818 trial “that pitted the new sciences of taxonomy against the then-popular - and biblically sanctioned - view that the whale was a fish.” The tiniest bacteria, like the silent, steady redwood trees, are so damn complex that we have no choice but to contort their features through abstractions, linking them, like as like, to other previously encountered phenomena to make sense of and navigate our world.

Taxonomy took shape as an academic discipline at Harvard under the stewardship of Louis Agassiz (a supporting actor shaping thinkers like William James in Louis Menand’s The Metaphysical Club). All sorts of sub-disciplines arose, including evolutionary biology (eventually leading to eugenics and contemporary genetics) and botany.

It’s with botany that things get interesting. The beauty of flowers, as classical haikus and sentimental Hallmark cards show, is fragile, transitory, vibrant in death. Flowers’ color, texture, turgidness, name your feature, change fast, both while they are planted, heliotroping themselves towards light and life, and after they are plucked and, petal by petal, peter their way into desiccation and death. Flowers are therefore too transitory to lend themselves to the patient gaze of a taxonomist. This inspired George Lincoln Goodale, the founder of Harvard’s Botanical Museum, to commission two German glassblowers to make “847 life-size models representing 780 species and varieties of plants in 164 families as well as over 3,000 models of enlarged parts” to aid the study of botany (see here). The fragility of flowers meant that artificial representations, which could freeze features in time, could reveal stronger truths (I recognize this is loaded…) about the features of a species than the real-life alternatives. Toppling the Platonic hierarchy, artifice was more real than reality.

I love this. And artifice as a condition for realism is not unique to 19th-century botany, as I’ll explore in the following three examples. Please add more!


Scientific Experiments by Doppler & Mendel

I’m reading Siddhartha Mukherjee’s The Gene: An Intimate History in preparation for a talk about genetic manipulation he’s giving at Pioneerworks Thursday evening. He’s a good writer: the prose is elegant, sown with literary references and personal autobiography whose candor elicits empathy. 93 pages into the 495-page book, I’ve most appreciated the more philosophical and nuanced insights he weaves into his history. The most interesting of these insights is about artifice and realism.

The early chapters of The Gene scan the history of genetics from Pythagoras (semen courses through a man’s body and collects mystical vapors from each individual part to transmit self-information to a womb during intercourse) through Galton (we can deliberately pair elite with elite (and selectively sterilize the deformed, ugly, and sickly) to favor good genes, culminating in the atrocities of eugenics and still lingering in thinkers like Nick Bostrom). Gregor Johann Mendel is the hero and fulcrum around which all other characters (Darwin included) make cameo appearances. Mendel is also the hero of high school biology textbooks. He conducted a series of experiments with pea plants in the 1850s-1860s that demonstrated how heredity works. When male mates with female, the traits of their offspring aren’t a hybrid mix between the parents, but express one of two binary traits: offspring from a tall dad and a short mom are either tall or short, not medium height; and the offspring of two tall hybrid parents can end up short if the recessive trait inherited from a short grandparent takes charge in the subsequent generation. (What the textbooks omit, and Mukherjee explains, is that Mendel’s work was overlooked for nearly 40 years! A few scientists around 1900 unknowingly replicated his conclusions, only to be crestfallen when they learned their insights were not original.)
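
(A quick aside for the computationally inclined: the binary logic above is easy to see in a toy simulation. The sketch below assumes the textbook single-gene model of height, with a dominant allele T for tall and a recessive t for short; the allele names, the Python, and the sample size are mine and purely illustrative, not Mendel’s.)

```python
import random
from collections import Counter

# Toy Mendelian monohybrid cross: one gene for height,
# dominant allele "T" (tall), recessive allele "t" (short).

def cross(parent1: str, parent2: str) -> str:
    """Each parent contributes one randomly chosen allele to the offspring."""
    return "".join(sorted([random.choice(parent1), random.choice(parent2)]))

def phenotype(genotype: str) -> str:
    """Tall is dominant: any genotype containing 'T' expresses as tall."""
    return "tall" if "T" in genotype else "short"

# Cross two tall hybrid (Tt) parents many times.
offspring = [cross("Tt", "Tt") for _ in range(10_000)]
print(Counter(phenotype(g) for g in offspring))  # roughly 3 tall : 1 short
print(Counter(offspring))                        # roughly 1 TT : 2 Tt : 1 tt
```

Run it and the counts hover around the 3:1 phenotype ratio Mendel tallied in his pea plants, with short offspring reappearing from two tall parents whenever both happen to pass on the recessive allele.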

Mukherjee cites Christian Doppler (of the eponymous Doppler effect) as one of Mendel’s key inspirations. Mendel was a monk isolated in Brno, a small city in what is now the Czech Republic. He eventually made his way to Vienna to study physics under Doppler. Mukherjee describes the impact Doppler had on Mendel as follows:

“Sound and light, Doppler argued, behaved according to universal and natural laws - even if these were deeply counterintuitive to ordinary viewers or listeners. Indeed, if you looked carefully, all the chaotic and complex phenomena of the world were the result of highly organized natural laws. Occasionally, our intuitions and perceptions might allow us to grasp these natural laws. But more commonly, a profoundly artificial experiment… might be necessary to demonstrate these laws.”

A few chapters later, Mukherjee states that Mendel’s decision to create a “profoundly artificial experiment,” selectively creating hybrid pea plants out of purebred strains carrying simple traits, was crucial to reveal his insights about heredity. There’s a lot packed into this.

mendel
Excerpt from Mendel’s manuscript about experiments with plant hybridization

First, there’s a pretty profound argument about epistemology and relativism. This is like and unlike the Copernican revolution. Our ordinary viewpoints, based on our day-to-day experiences in the world, could certainly lead to the conclusion, held for thousands of years, that the Sun revolves around the Earth. Viewed from our subjective perspective, it just makes more sense. But if we use our imagination to transport ourselves up to a view from the moon (as Kepler did in his Somnium, a radically weird work of 17th-century science fiction), or somewhere else in space, we’d observe our Earth moving around the Sun. What’s most interesting is how, via cultural transmission and education, this formerly difficult and trying act of the imagination has hardened into collective, conscious habit. Today, we have to do more intellectual and imaginative work to imagine the Sun revolving around the Earth, even though the heliocentric viewpoint runs counter to our grounded subjectivity. Narcissism may be more contingent and correctable than digital culture makes it seem.

Next, there’s a pretty profound argument about what kinds of truths scientific experiments tell. Mukherjee aligns Mendelian artifice with mechanistic philosophy, where the task of experimentation is to reveal the universal laws behind natural phenomena. These laws, in this framework, are observable, just not using the standard perceptual habits we rely on in the rest of our life. There are many corollary philosophical questions about the potential for false induction (Hume!) and the very strange way we go about justifying a connection between an observed particular and a general law or rule. It does at least feel safe to say that artifice plays a role in contorting and refracting what we see so that we can see something radically new. Art, per Viktor Shklovsky (among others), often does the same.

Italian Neorealist Cinema

I have a hell of a time remembering the details of narrative plots, but certain abstract arguments stick with me year after year, often dormant in the caverns of my memory, then awakened by some Proustian association. One of these arguments comes from André Bazin’s “Cinematic Realism and the Italian School of Liberation.”

Bazin was writing about the many “neorealist” films directors like Luchino Visconti, Roberto Rossellini, and Vittorio De Sica made in the 1940s and 50s. It was postwar: Mussolini’s government had fallen, Cinecittà (the Hollywood of Italy) had been damaged, and filmmakers had small production budgets. The intellectual climate, like the one that provided the foundation for Zola in the late 19th century, invited filmmakers to cast aside the subjects traditionally deemed fit for art and focus on the real-world suffering of real-world everyday people. These films are characterized by their use of nonprofessional actors, their depictions of poverty and basic suffering, and their lack of happy endings. They patiently chronicle slow, quotidian life.

bicycle-thieves-player-1920x1080
Iconic image from Vittorio de Sica’s The Bicycle Thief, a classic Italian neorealist film

Except that they don’t. Bazin’s core insight was that neorealism was no less artificial, or artful, than the sentimental and psychological dramas of popular Hollywood (and Cinecittà) films. Bazin’s essay effectively becomes a critical manifesto for the techniques directors like Rossellini employed to create an effect that the viewer would perceive as real. The insights are similar to those Thomas Mann offers in Death in Venice, where a hyper-orderly, rational German intellectual, Gustav von Aschenbach, succumbs to Dionysian and homoerotic impulses as he dies. Mann uses the story of Aschenbach as an ironic vehicle to comment on how artists can fabricate emotional responses in readers, spectators, and other consumers of art. There is an unbridgeable gulf between what we have lived and experienced, and how we represent what we have lived and experienced in art to produce and replicate a similar emotional experience for the reader, spectator, or consumer. The reality we inhabit today is never really the reality we watch on screen, and yet the presentation of what seems real on screen can go on to reshape how we then experience what we deem reality. As with the Copernican turn, after watching De Sica’s Bicycle Thief, we may have to do more intellectual and imaginative work to see poverty as we saw it before our emotions were touched by the work of art. Artifice, then, is not only required to make a style that feels real, but can crystallize as categories and prisms in our own mind to bend what we consider to be reality.

A slightly different cinematic example comes from the 2013 documentary Tim’s Vermeer, which documents technology entrepreneur Tim Jenison’s efforts to test his hypothesis about the painting techniques 17th-century Dutch master Johannes Vermeer used to create his particular realist style. Jenison was fascinated by the seemingly photographic quality of Vermeer’s paintings, which exhibit a clarity and realism far beyond that of his contemporaries. Content following form (or vice versa), Vermeer is also renowned for painting realistic, quotidian scenes, observing a young woman contemplating at a dining room table or learning to play the piano. As optics was burgeoning in the 17th century (see work by Newton or his contemporary Christiaan Huygens), Jenison hypothesized that Vermeer achieved his eerie realism not through mystical, heroic, individual, subjective inspiration, but through rational, patient, objective technique. To test his hypothesis, Jenison tasked himself with recreating Vermeer’s Music Lesson using a dual-mirror technique that reflects the real-world scene onto the canvas and enables the artist to do something like paint by number, matching each color until he can no longer perceive a difference between the paint and the reflected scene. What’s just awesome about this film is that Jenison’s technique for evaluating his hypothesis about Vermeer’s technique forces him to reverse engineer the original real-world scene that Vermeer would have painted. As such, he has to learn about 17th-century woodworking (to recreate the piano), 17th-century glass staining (to recreate the stained-glass window), and 17th-century textiles (to recreate the tapestry that hangs over a table). This single Vermeer painting, catalyzed by Jenison’s dedication and obsession to test his hypothesis, becomes a door into an encyclopedic world! The documentary is nothing short of extraordinary, not least because it forces us to question the cultural barriers between art/inspiration and science/observation (and because it includes some great scenes where the English artist David Hockney evaluates and accepts Jenison’s hypothesis). The two are porous, intertwined, ever interweaving to generate beauty, novelty, and realism.

jan_vermeer_van_delft_014
Vermeer’s Music Lesson, which Tim Jenison sought to recreate

Designing for User Adoption

The final example comes from my experiences with software UI/UX design. My first job after graduate school was with Intapp, a Palo Alto-based private company that makes software for law firms. Making software for lawyers poses a particular set of challenges that, like Mendel’s pea plant experiments, reveal general laws about how people engage with and adopt technology. Indeed, lawyers are notoriously slow to adopt new tools. First, the economics of law firms, governed by profits per partner, encourage conservatism, because all profits are allocated on an annual basis to partners: partners literally have to part with their own compensation to invest in technology that may or may not drive the efficiencies they need to make more money in the future. Second, lawyers tend to self-identify as technophobes: many are proud of their liberal arts backgrounds, and prefer to protect the relative power they have as masters of words and pens against the different intellectual capital garnered by quantitative problem solvers and engineers. Third, lawyers tend to be risk averse, and changing your habits and adopting new tools can be very risky business.

Intapp has a few products in its portfolio. One of them helps lawyers keep track of the time they spend making phone calls, writing emails, doing research, or drafting briefs for their different clients, informing the invoices they send at the end of a billing period. Firms only get a solid return on investment from the product, Intapp Time (formerly Time Builder), if a certain percentage of lawyers opt to use it: you need enough users logging enough otherwise-missed hours, and recovering enough otherwise-missed revenue, to cover the cost of the software. As such, it was critical that Intapp make the right product design and marketing choices so that the tool was something lawyers wanted to use and adopt.
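
(To make that return-on-investment logic concrete, here is a back-of-the-envelope sketch. The function and every number in it are mine and entirely hypothetical; Intapp’s actual pricing and ROI model are not described here.)

```python
# Hypothetical break-even calculation for a passive time-capture tool.
# All names and numbers are illustrative, not Intapp's.

def breakeven_adoption_rate(
    annual_software_cost: float,        # total license cost for the firm
    num_lawyers: int,                   # lawyers who could use the tool
    recovered_hours_per_lawyer: float,  # otherwise-missed billable hours per adopter, per year
    realized_hourly_rate: float,        # average rate actually collected per billed hour
) -> float:
    """Fraction of lawyers who must adopt the tool for recovered revenue to cover its cost."""
    revenue_per_adopter = recovered_hours_per_lawyer * realized_hourly_rate
    return annual_software_cost / (num_lawyers * revenue_per_adopter)

# Made-up numbers: a 500-lawyer firm, a $100k/year license,
# 20 recovered hours per adopter at $300 per hour.
print(f"{breakeven_adoption_rate(100_000, 500, 20, 300):.1%}")  # ~3.3%
```

With these invented numbers, even a small fraction of lawyers adopting the tool covers its cost, which is one way of seeing why adoption, rather than raw functionality, was the metric that mattered.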

What was most interesting were the design choices required to make that adoption happen. Because lawyers tend to be conservative, they didn’t want an application that radically changed how they worked or billed time relative to the habits they’d built and inculcated over the years (the older generation in particular). So the best technical solution, or even the most theoretically efficient or creative way of logging work to bill time, may not be the best solution for the users, because it may push their imagination too far, may require too much change to be useful. Based on insights from interviews with end users, the Intapp design team ended up creating a product that mimicked, at least on the front end, the very habits and practices it was built to replace. Such skeuomorphism tells us a lot about progress and technology. Further thoughts on the topic appear in an earlier post.


Others

I can think of many other examples where artifice is the turnkey to perceive a certain type of truth or generate a certain type of realism. Generative probabilistic models using Bayesian inference can do a better job predicting the future from sparse data than regression models that lean directly on the data, because they encode structural assumptions the data alone can’t supply. Thought experiments like the Trolley Problem are in the process of shifting from a device for commenting on ethics to a premeditated, encoded action that can impact reality. Behind all of this are insights about how our minds work to make sense (and nonsense) of the world.
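
(For the first of those examples, here is a minimal sketch of what I mean, using a toy model of my own choosing rather than any particular system: a Beta-Binomial update, where the assumed generative structure, the prior, does predictive work that scarce data alone cannot.)

```python
# Toy Bayesian update for a coin's bias under a Beta(alpha, beta) prior.
# The prior is pure artifice: a structure imposed before seeing any data.

def posterior_mean(heads: int, tails: int, alpha: float = 2.0, beta: float = 2.0) -> float:
    """Posterior mean of the coin's bias after observing heads/tails."""
    return (heads + alpha) / (heads + tails + alpha + beta)

# Three flips, all heads: the raw frequency says the coin always lands heads,
# while the generative model's prior pulls the prediction toward something saner.
print(posterior_mean(heads=3, tails=0))  # ~0.71
print(3 / 3)                             # 1.0, the naive empirical estimate
```

The prior here is artifice in the most literal sense, a made-up structure, and yet it is precisely what keeps the prediction realistic when the data are thin.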

The featured image is of the glass flowers that father-and-son glassblowers Leopold and Rudolf Blaschka made for Harvard’s natural history department between 1887 and 1936. Flowers are fragile: because conditions so easily lead to their decay and death, they change too quickly to permit the patient observation and study required by evolutionary biology. Artificial representations, therefore, allowed for more accurate scientific observations than real specimens.