Coffee Cup Computers, or Degrees of Knowledge

That familiar discomfort of wanting to write but not feeling ready yet.*

(The default voice pops up in my brain: “Then don’t write! Be kind to yourself! Keep reading until you understand things fully enough to write something cogent and coherent, something worth reading.”

The second voice: “But you committed to doing this! To not write** is to fail.***”

The third voice: “Well gosh, I do find it a bit puerile to incorporate meta-thoughts on the process of writing so frequently in my posts, but laziness triumphs, and voilà there they come. Welcome back. Let’s turn it to our advantage one more time.”)

This time the courage to just do it came from the realization that “I don’t understand this yet” is interesting in itself. We all navigate the world with different degrees of knowledge about different topics. To follow Wilfred Sellars, most of the time we inhabit the manifest image, “the framework in terms of which man came to be aware of himself as man-in-the-world,” or, more broadly, the framework in terms of which we ordinarily observe and explain our world. We need the manifest image to get by, to engage with one another and not to live in a state of utter paralysis, questioning our every thought or experience as if we were being tricked by the evil genius Descartes introduces at the outset of his Meditations (the evil genius toppled by the clear and distinct force of the cogito, the I am, which, per Dan Dennett, actually had the reverse effect of fooling us into believing our consciousness is something different from what it actually is). Sellars contrasts the manifest image with the scientific image: “the scientific image presents itself as a rival image. From its point of view the manifest image on which it rests is an ‘inadequate’ but pragmatically useful likeness of a reality which first finds its adequate (in principle) likeness in the scientific image.” So we all live in this not quite reality, our ability to cooperate and coexist predicated pragmatically upon our shared not-quite-accurate truths. It’s a damn good thing the mess works so well, or we’d never get anything done.

Sellars has a lot to say about the relationship between the manifest and scientific images, how and where the two merge and diverge. In the rest of this post, I’m going to catalogue my gradual coming to not-yet-fully understanding the relationship between mathematical machine learning models and the hardware they run on. It’s spurring my curiosity, but I certainly don’t understand it yet. I would welcome readers’ input on what to read and to whom to talk to change my manifest image into one that’s slightly more scientific.

So, one common thing we hear these days (in particular given Nvidia’s now formidable marketing presence) is that graphical processing units (GPUs) and tensor processing units (TPUs) are a key hardware advance driving the current ubiquity in artificial intelligence (AI). I learned about GPUs for the first time about two years ago and wanted to understand why they made it so much faster to train deep neural networks, the algorithms behind many popular AI applications. I settled with an understanding that the linear algebra–operations we perform on vectors, strings of numbers oriented in a direction in an n-dimensional space–powering these applications is better executed on hardware of a parallel, matrix-like structure. That is to say, properties of the hardware were more like properties of the math: they performed so much more quickly than a linear central processing unit (CPU) because they didn’t have to squeeze a parallel computation into the straitjacket of a linear, gated flow of electrons. Tensors, objects that describe the relationships between vectors, as in Google’s hardware, are that much more closely aligned with the mathematical operations behind deep learning algorithms.

There are two levels of knowledge there:

  • Basic sales pitch: “remember, GPU = deep learning hardware; they make AI faster, and therefore make AI easier to use so more possible!”
  • Just above the basic sales pitch: “the mathematics behind deep learning is better represented by GPU or TPU hardware; that’s why they make AI faster, and therefore easier to use so more possible!”

At this first stage of knowledge, my mind reached a plateau where I assumed that the tensor structure was somehow intrinsically and essentially linked to the math in deep learning. My brain’s neurons and synapses had coalesced on some local minimum or maximum where the two concepts were linked and reinforced by talks I gave (which by design condense understanding into some quotable meme, in particular in the age of Twitter…and this requirement to condense certainly reinforces and reshapes how something is understood).

In time, I started to explore the strange world of quantum computing, starting afresh off the local plateau to try, again, to understand new claims that entangled qubits enable even faster execution of the math behind deep learning than the soddenly deterministic bits of C, G, and TPUs. As Ivan Deutsch explains in this article, the promise behind quantum computing is as follows:

In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve, can, in theory, be solved by a universal quantum computer in hours.
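Deutsch’s “exponentially greater” claim can be made concrete with a toy calculation (my own sketch, not from his article): describing n entangled qubits classically takes 2ⁿ complex amplitudes, and a simple entangled pair already behaves unlike any classical bit.

```python
import math

# Toy illustration (my own, not from Deutsch's article): the classical
# description of n entangled qubits requires 2**n complex amplitudes,
# which is where the exponential complexity comes from.
def amplitudes_needed(n_qubits):
    return 2 ** n_qubits

# A Bell state: two qubits entangled so that measuring one fixes the other.
# Amplitudes for the basis states |00>, |01>, |10>, |11>:
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

# Born rule: the probability of observing a basis state is |amplitude|**2,
# so this state lands on 00 or 11 with equal odds, never on 01 or 10.
probabilities = [a * a for a in bell]
```

Fifty qubits already demand more amplitudes than any classical memory comfortably holds, which is the intuition behind the “age of the universe” comparison above.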

For me what’s salient here is that the inherent probabilism of quantum computers makes them even more fundamentally aligned with the true mathematics we’re representing with machine learning algorithms. TPUs, then, seem to exhibit a structure that best captures the mathematical operations of the algorithms, but exhibit the fatal flaw of being deterministic by essence: they’re still trafficking in the binary digits of 1s and 0s, even if they’re allocated in a different way. Quantum computing seems to bring back an analog computing paradigm, where we use aspects of physical phenomena to model the problem we’d like to solve. Quantum, of course, exhibits this special fragility where, should the balance of the system be disrupted, the probabilistic potential reverts down to the boring old determinism of 1s and 0s: a cat observed will be either dead or alive, per the harsh law of the excluded middle haunting our manifest image.

Once I opened Pandora’s box, I realized all sorts of things can be computers! One I find particularly interesting is a liquid state machine (LSM), which uses the ever-changing properties of a perturbed liquid–like a cup of coffee you just put sugar into–as a means to compute a time series!

A diagram from Maass et al.’s paper on using liquid to make a real-time recurrent neural network

We often marvel at how the cloud has enabled the startup economy as we know it, reducing the cost of starting a business by significantly lowering the capital investment required to get started with code. But imagine what it would be like if cups of coffee were real-time deep learning computers (granted we’d need to hook up something to keep track of the changing liquid states).

There’s an elemental beauty here: the flux of the world around us can be harnessed for computation. The world is breathing, beating, beating in randomness, and we can harness that randomness to do stuff.

I know close to nothing about analog computing. About liquid computing. All I know is it feels enormously exciting to shatter my assumption that digital computers are a given for machine learning. It’s just math, so why not find other places to observe it, rather than stick with the assumptions of the universal Turing machine?

And here’s what interests me most: what, then, is the status of being of the math? I feel a risk of falling into Platonism, of assuming that a statement like “3 is prime” refers to some abstract entity, the number 3, that then gets realized in a lesser form as it is embodied on a CPU, GPU, or cup of coffee. It feels more cogent to me to endorse mathematical fictionalism, where mathematical statements like “3 is prime” tell a different type of truth than truths we tell about objects and people we can touch and love in our manifest world.****

My conclusion, then, is that radical creativity in machine learning–in any technology–may arise from our being able to abstract the formal mathematics from their substrate, to conceptually open up a liminal space where properties of equations have yet to take form. This is likely a lesson for our own identities, the freeing from necessity, from assumption, that enables us to come into the self we never thought we’d be.

I have a long way to go to understand this fully, and I’ll never understand it fully enough to contribute to the future of hardware R&D. But the world needs communicators, translators who eventually accept that close enough can be a place for empathy, and growth.


*This holds not only for writing, but for many types of doing, including creating a product. Agile methodologies help overcome the paralysis of uncertainty, the discomfort of not being ready yet. You commit to doing something, see how it works, see how people respond, see what you can do better next time. We’re always navigating various degrees of uncertainty, as Rich Sutton discussed on the In Context podcast. Sutton’s formalization of doing the best you can with the information you have available today towards some long-term goal, while basing your learning and updates not on the long-term goal way out there but rather on the next best guess, is called temporal-difference learning.
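The “next best guess” idea can be sketched in a few lines (a toy illustration of my own, not Sutton’s code): each update nudges today’s estimate toward the reward plus a discounted version of tomorrow’s estimate, never toward the far-off goal directly.

```python
# A minimal TD(0)-style update: move the current value estimate toward
# the "next best guess" (reward plus discounted next-state value).
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    target = reward + gamma * next_value   # the next best guess
    return value + alpha * (target - value)

# Repeated small corrections creep toward the long-run value
# (here the fixed point is 1 / (1 - 0.9) = 10) without ever
# consulting that long-run value directly.
v = 0.0
for _ in range(100):
    v = td_update(v, reward=1.0, next_value=v)
```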

**Split infinitive intentional.

***Who’s keeping score?

****That’s not to say we can’t love numbers, as Euler’s Identity inspires enormous joy in me, or that we can’t love fictional characters, or that we can’t love misrepresentations of real people that we fabricate in our imaginations. I’ve fallen obsessively in love with 3 or 4 imaginary men this year, creations of my imagination loosely inspired by the real people I thought I loved.

The image comes from this site, which analyzes themes in films by Darren Aronofsky. Maximilian Cohen, the protagonist of Pi, sees mathematical patterns all over the place, which eventually drives him to put a drill into his head. Aronofsky has a penchant for angst. Others, like Richard Feynman, find delight in exploring mathematical regularities in the world around us. Soap bubbles, for example, offer incredible complexity, if we’re curious enough to look.

The arabesques of a soap bubble

 

The Secret Miracle

….And God made him die during the course of a hundred years and then He revived him and said: “How long have you been here?” “A day, or part of a day,” he replied.  – The Koran, II 261

The embryo of this post has gestated between my prefrontal cortex and limbic system for one year and eight months. It’s time.*

There seem to be two opposite axes from which we typically consider and evaluate character. Character as traits, Eigenschaften (see Musil), the markers of personality, virtue, and vice.

One extreme is to say that character is formed and reinforced through our daily actions and habits.** We are the actions we tend towards, the self not noun but verb, a precipitate we shape using the mysterious organ philosophers have historically called free will. Thoughts rise up and compete for attention,*** drawing and calling us to identify as a me, a me reinforced as our wrists rotate ever more naturally to wash morning coffee cups, a me shocked into being by an acute feeling of disgust, coiling and recoiling from some exogenous stimulus that drives home the need for a barrier between self and other, a me we can imagine looking back on from an imagined future-perfect perch to ask, like Ivan Ilyich, if we have indeed lived a life worth living. Character as daily habit. Character, as my grandfather used to say, as our ability to decide if today will be a good or a bad day when we first put our feet on the ground in the morning (Naturally, despite all the negative feelings and challenges, he always chose to make today a good day).

The other extreme is to say that true character is revealed in the fox hole. That traits aren’t revealed until they are tested. That, given our innate social nature, it’s relatively easy to seem one way when we float on, with, and in the waves of relative goodness embodied in a local culture (a family, a team, a company, a neighborhood, a community, perhaps a nation, hopefully a world, imagine a universe!), but that some truer nature will be shamelessly revealed when the going gets tough. This notion of character is the stuff of war movies. We like the hero who irrationally goes back to save one sheep at the expense of the flock when the napalm shit hits the fan. It seems we need these moments and myths to keep the tissue of social bonds intact. They support us with tears nudged and nourished by the sentimental cadences of John Williams soundtracks.

How my grandfather died convinced me that these two extremes are one.

On the evening of January 14, 2016, David William Hume (Bill, although it’s awesome to be part of a family with multiple David Humes!) was taken to a hospital near Pittsburgh. He’d suffered from heart issues for more than ten years and on that day the blood simply stopped pumping into his legs. He was rushed behind the doors of the emergency operating room, while my aunts, uncles, and grandmother waited in the silence and agony one comes to know in the limbo state upon hearing that a loved one has just had a heart attack, has just been shot, has just had a stroke, has just had something happen where time dilates to a standstill and, phenomenologically, the principles of physics linking time and space are halted in the pinnacle of love, of love towards another, of all else in the world put on hold until we learn whether the loved one will survive. (It may be that this experience of love’s directionality, of love at any distance, of our sense of self entangled in the existence and well-being of another, is the clearest experiential metaphor available to build our intuitions of quantum entanglement.****) My grandfather survived the operation.
And the first thing he did was to call my grandmother and exclaim, with the glee and energy of a young boy, that he was alive, that he was delighted to be alive, and that he couldn’t have lived without her beside him, through 60 years of children crying and making pierogis and washing the floor and making sure my father didn’t squander his life at the hobby shop in Beaver Meadows, Pennsylvania and learning that Katie, me, here, writing, the first grandchild was born, my eyebrows already thick and black as they’ll remain my whole life until they start to grey and singing Sinatra off key and loving the Red Sox and being a role model of what it means to live a good life, what it means to be a patriarch for our family, yes he called her and said he did it, that he was so scared but that he survived and it was just the same as getting out of bed every morning and making a choice to be happy and have a good day.

She smiled, relieved.

A few minutes later, he died.

It’s like a swan song. His character distilled to its essence. I think about this moment often. It’s so perfectly representative of the man I knew and loved.

And when I first heard about my grandfather’s death, I couldn’t help but think of Borges’s masterful (but what by Borges is not masterful?) short story The Secret Miracle. Instead of explaining why, I bid you, reader, to find out for yourself.


 * Mark my words: in 50 years time, we will cherish the novels of Jessie Ferguson, perhaps the most talented novelist of our time. Jessie was in my cohort in the comparative literature department at Stanford. The depth of her intelligence, sensitivity, and imagination eclipsed us all. I stand in awe of her talents as Jinny to Rhoda in Virginia Woolf’s The Waves. At her wedding, she asked me to read aloud Paul Celan’s Corona. I could barely do it without crying, given how immensely beautiful this poem is. Tucked away in the Berkeley Hills, her wedding remains the most beautiful ceremony I’ve ever attended.

**My ex-boyfriends, those privileged few who’ve observed (with a mixture of loving acceptance and tepid horror) my sacrosanct morning routine, certainly know how deeply this resonates with me.

***Thomas Metzinger shares some wonderful thoughts about consciousness and self-consciousness in his interview with Sam Harris on the Waking Up podcast. My favorite part of this episode is Metzinger’s very cogent conclusion that, should an AI ever suffer like we humans do (which Joanna Bryson compellingly argues will not and should not occur), the most rational action it would then take would be to self-annihilate. Pace Bostrom and Musk, I find the idea that a truly intelligent being would choose non-existence over existence to be quite compelling, if only because I have first-hand experience with the acute need to allay acute suffering like anxiety immediately, whereas boredom, loneliness, and even sadness are emotional states within which I more comfortably abide.

****Many thanks to Yanbo Xue at D-Wave for first suggesting that metaphor. Jean-Luc Marion explores the subjective phenomenon of love in Le Phénomène Erotique; I don’t recall his mentioning quantum physics, although it’s been years since I read the book, but, based on conversations I had with him years ago at the University of Chicago, I predict this would be a parallel he’d be intrigued to explore.

My last dance with my grandfather, the late David William Hume. Snuff, as we lovingly called him, was never more at home than on the dance floor, even though he couldn’t sing and couldn’t dance. He used to do this cute knees-back-and-forth dance. He loved jazz standards, and would send me mix CDs he burned when I lived in Leipzig, Germany. In his 80s, he embarrassed the hell out of my grandmother, his wife of 60 years, by joining the local Dancing with the Stars chapter and taking Zumba lessons. He lived. He lived fully and with great integrity. 

When Writing Fails

This post is for writers.

I take that back.

This post shares my experience as a writer to empathize with anyone working to create something from nothing, to break down the density of an intuition into a communicable sequence of words and thoughts, to digitize, which Daniel Dennett eloquently defines as “obliging continuous phenomena to sort themselves out into discontinuous, all-or-nothing phenomena” (I’m reading and very much enjoying From Bacteria to Bach and Back: The Evolution of Minds), to perform an act of judgment that eliminates other possibilities, foreclosing other forms to create its own form, Shiva and Vishnu forever linked in cycles of destruction, creation, and stability. That is to say, this post shares my experience as a writer as metonymy for our human experience as finite beings living finite lives.

The Nataraja, Shiva in his form as the cosmic ecstatic dancer, inspires trusting calm in me.

Earlier this morning, I started a post entitled Competence without Comprehension. I’ll publish it eventually, hopefully next week. It will feature a critique of explainable artificial intelligence (AI), efforts in the computer science and policy communities to develop AI systems that make sense for human users. I have tons to say here. I think it’s ok for systems to be competent without being comprehensible (my language is inspired by Dan Dennett, who thinks consciousness is an illusion) because I think there are a lot of cognitive competencies we exhibit without comprehension (ranging from ways of transforming our habits or even becoming believers in some religious system by going through the motions, as I wrote about in my dissertation, to training students in operations like addition and subtraction before they learn the theoretical underpinnings of abstract algebra – which many people never even learn!). I think the word why is a complex word that we use in different ways: Aristotle thought there were four types of causes and, again following Dennett, we can distinguish between why as “how come” (what input data created this output result?) and why as “what for” (what action will be taken from this output result?). Aristotle’s causal theory was largely toppled during the scientific revolution and then again by Sartre in Existentialism Is a Humanism (where he shows we humans exist in a very different way from paper knives, which are an outdated technology!), but I think there’s value in resurrecting his categories to think about machine learning pipelines and explainable AI. I think there are different ethical implications for using AI in different settings, and I think there’s something crucial about social norms – how we expect humans to behave towards other humans – that is driving widespread interest in this topic and that, when analyzed, can help us understand what may (or may not!) be unique about the technology in its use in society.

In short, my blog post was a mess. I was trying to do too much at once; there were multiple lines of synthetic thought that needed to be teased out to make sense to anyone, including myself. I will understand my position better once I devote the time and patience to exploring it, formalizing it, unpacking ideas that currently sit inchoate like bile. What I started today contains at least five different blog posts’ worth of material, on topics that many other people are thinking about, so could have some impact in the social circles that are meaningful for me and my identity. This is crucial: I care about getting this one right, because I can imagine the potential readers, or at least the hoped-for readers. That said, upon writing this, I can also step back and remember that the approval I think I’m seeking rarely matters in the end. I always feel immense gratitude when anyone — a perfect stranger — reads my work, and the most gratitude when someone feels inspired to write or grow herself.

So I allowed myself to pivot from seeking approval to instilling inspiration. To manifesting the courage to publish whatever – whatever came out from the primordial sludge of my being, the stream of consciousness that is the dribble of expression, ideas without form, but ideas nonetheless, the raw me sitting here trying my best on a Sunday afternoon in August, imagining the negative response of anyone who would bother to read this, but also knowing the charity I hold within my own heart for consistency, habit, effort, exposure, courage to display what’s weakest and most vulnerable to the public eye.

I see my experience this morning as metonymy for our experience as finite beings living finite lives because of the anxiety of choice. Each word written conditions the space of possibility of what can, reasonably, come next (Skip-Thought vectors rely on this to function). The best writing is not about everything but is about something, just as many of the happiest and most successful people become that way by accepting the focus required to create and achieve, focus that shuts doors — or at least Japanese screens — on unrealized selves. I find the burden of identity terrific. My being resists the violence of definition and prefers to flit from self to self in the affordance of friendships, histories, and contexts. It causes anxiety, confusion, false starts, but also a richness I’m loath to part with. It’s the give and take between creation and destruction, Shiva dancing joyfully in the heavens, his smile peering ironic around the corners of our hearts like the aura of the eclipse.

The featured image represents Tim Jenison’s recreation of Vermeer’s The Music Lesson. Tim’s Vermeer is a fantastic documentary about Jenison’s quest to confirm his theory of Vermeer’s optical painting technique, which worked somewhat similarly to a camera (refracting light to create a paint-by-number-like format for the artist). It’s a wonderful film that makes us question our assumptions about artistic genius and creativity. I firmly believe creativity stems from constraint, and that Romantic ideas of genius miss the mark in shaping cultural understandings of creativity. This morning, I lacked the constraints required to write. 

The Unreasonable Effectiveness of Proxies*

Imagine it’s December 26. You’re right smack in the midst of your Boxing Day hangover, feeling bloated and headachy and emotionally off from the holiday season’s interminable festivities. You forced yourself to eat Aunt Mary’s insipid green bean casserole out of politeness and put one too many shots of dark rum in your eggnog. The chastising power of the prefrontal cortex superego is in full swing: you start pondering New Year’s Resolutions.

Lose weight! Don’t drink red wine for a year! Stop eating gluten, dairy, sugar, processed foods, high-fructose corn syrup–just stop eating everything except kale, kefir, and kimchi! Meditate daily! Go be a free spirit in Kerala! Take up kickboxing! Drink kombucha and vinegar! Eat only purple foods!

Right. Check.

(5:30 pm comes along. Dad’s offering single malt scotch. Sure, sure, just a bit…neat, please…)**

We’re all familiar with how hard it is to set and stick to resolutions. That’s because our brains have little instant gratification monkeys flitting around on dopamine highs in constant guerrilla warfare against the Rational Decision Maker in the prefrontal cortex (Tim Urban’s TED talk on procrastination is a complete joy). It’s no use beating ourselves up over a physiological fact. The error of Western culture, inherited from Catholicism, is to stigmatize physiology as guilt, transubstantiating chemical processes into vehicles of self-deprecation with the same miraculous power used to transform just-about-cardboard wafers into the living body of Christ. Eastern mindsets, like those proselytized by the Buddha, are much more empowering and pragmatic: if we understand our thoughts and emotions to be senses like sight, hearing, touch, taste, and smell, we can then dissociate self from thoughts. Our feelings become nothing but indices of a situation, organs to sense a misalignment between our values–etched into our brains as a set of habitual synaptic pathways–and the present situation around us. We can watch them come in, let them sit there and fester, and let them gradually fade before we do something we regret. Like waiting out the internal agony until the baby in front of you in 27G on your overseas flight to Sydney stops crying.

Resolutions are so hard to keep because we frame them the wrong way. We often set big goals, things like, “in 2017 I’ll lose 30 pounds” or “in 2017 I’ll write a book.” But a little tweak to the framework can promote radically higher chances for success. We have to transform a long-term, big, hard-to-achieve goal into a short-term, tiny, easy-to-achieve action that is correlated with that big goal. So “lose weight” becomes “eat an egg rather than cereal for breakfast.” “Write a book” becomes “sit down and write for 30 minutes each day.” “Master Mandarin Chinese” becomes “practice your characters for 15 minutes after you get home from work.” The big, scary, hard-to-achieve goal that plagues our consciousness becomes a small, friendly, easy-to-achieve action that provides us with a little burst of accomplishment and satisfaction. One day we wake up and notice we’ve transformed.

It’s doubtful that the art of finding a proxy for something that is hard to achieve or know is the secret of the universe. But it may well be the secret to adapting the universe to our measly human capabilities, both at the individual (transform me!) and collective (transform my business!) level. And the power extends beyond self-help: it’s present in the history of mathematics, contemporary machine learning, and contemporary marketing techniques known as growth hacking.

Ut unum ad unum, sic omnia ad omnia: Archimedes, Cavalieri, and Calculus

Many people are scared of math. Symbols are scary: they’re a type of language and it takes time and effort to learn what they mean. But most of the time people struggle with math because they were badly taught. There’s no clearer example of this than calculus, where kids memorize equations stating that something is so instead of conceptually grasping why it is so.

The core technique behind calculus–and I admit this just scratches the surface–is to reduce something that’s hard to know down to something that’s easy to know. Slope is something we learn in grade school: change in y divided by change in x, how steep a line is. Taking the derivative is doing this same process but on a twisting, turning, meandering curve rather than just a line. This becomes hard because we add another dimension to the problem: with a line, the slope is the same no matter what x we put in; with a curve, the slope changes with our x input value, like a mountain range undulating from mesa to vertical extreme cliff. What we do in differential calculus is find a way to make a line serve as a proxy for a curve, to turn something we don’t know how to do into something we do know how to do. So we take magnifying glasses with ever increasing potency and zoom in until our topsy-turvy meandering curve becomes nothing but a straight line; we find the slope; and then we string those little slopes together all the way across our curve. The big conceptual breakthrough Newton and Leibniz made in the 17th century was to turn this proxy process into something continuous and infinite: to cross a conceptual chasm between a very, very small number and a number so small that it was effectively zero. Substituting close-enough-for-government-work-zero with honest-to-goodness-zero did not go without strong criticism from the likes of George Berkeley, a prominent philosopher of the period who argued that it’s impossible for us to say anything about the real world because we can only know how our minds filter the real world. But its pragmatic power to articulate the mechanics of the celestial motions overcame such conceptual trifles.***
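The zooming-in can be replayed in a few lines (a toy sketch of my own; the curve and the point are arbitrary): the slope of an ever-shorter chord stands proxy for the slope of the curve itself.

```python
# The "proxy" move of differential calculus: approximate the slope of a
# curve at a point with the slope of a shorter and shorter chord.
def slope_of_chord(f, x, h):
    return (f(x + h) - f(x)) / h

curve = lambda x: x ** 2   # the true slope at x = 3 is 6

# As the magnifying glass gets stronger (h shrinks), the chord's slope
# settles toward the curve's slope.
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, slope_of_chord(curve, 3.0, h))
```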

Riemann sums use the same proxy method to find the area under a curve: one replaces that hard task with the easier task of summing the areas of rectangles that approximate the region under the curve.

This type of thinking, however, did not start in the 17th century. Greek mathematicians like Archimedes (famous for screaming Eureka! (I’ve found it!) and running around naked like a madman when he noticed that the water level in the bathtub rose in proportion to the volume of his submerged body) used its predecessor, the method of exhaustion, to find the area of a shape like a circle or a blob by inscribing within it a series of easier-to-measure shapes like polygons or squares, getting an approximation of the area by proxy to the polygon.
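Archimedes’ polygon trick is easy to replay in a few lines (a sketch of my own; he famously stopped at a 96-sided polygon):

```python
import math

# The method of exhaustion in miniature: the area of a regular n-gon
# inscribed in a unit circle approaches the circle's area (pi) as n grows.
def inscribed_polygon_area(n_sides):
    return (n_sides / 2) * math.sin(2 * math.pi / n_sides)

# A hexagon is a crude proxy; 96 sides was good enough for Archimedes;
# ten thousand sides is indistinguishable from the circle by eye.
for n in (6, 24, 96, 10_000):
    print(n, inscribed_polygon_area(n))
```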

exhaustion
The method of exhaustion in ancient Greek math.

It’s challenging for us today to reimagine what Greek geometry was like because we’re steeped in a post-Cartesian mindset, where there’s an equivalence between algebraic expressions and geometric shapes. The Greeks thought about shapes as shapes. The math was tactile, physical, tangible. This mindset leads to interesting work in the Renaissance like Bonaventura Cavalieri’s method of indivisibles, which showed that the areas of two shapes were equivalent (often a hard thing to show) by cutting the shapes into parts and showing that each of the parts was equivalent (an easier thing to show). He turns the problem of finding equivalence into an analogy, ut unum ad unum, sic omnia ad omnia–as the one is to the one, so all are to all–substituting the part for the whole to turn this into a tractable problem. His work paved the way for what would eventually become the calculus.****

Supervised Machine Learning for Dummies

My dear friend Moises Goldszmidt, currently Principal Research Scientist at Apple and a badass Jazz musician, once helped me understand that supervised machine learning is quite similar.

Again, at an admittedly simplified level, machine learning can be divided into two camps. Unsupervised machine learning is using computers to find patterns in data and sort different data points into clusters. When most people hear the words machine learning, they think about unsupervised learning: computers automagically finding patterns, “actionable insights,” in data that would evade detection by measly human minds. In fact, unsupervised learning remains an active research area in the upper echelons of the machine learning community. It can be valuable for exploratory data analysis, but it only infrequently powers the products making news headlines. The real hero of the present day is supervised learning.

I like to think about supervised learning as learning by example: given many pairs of inputs and known outputs, find a function that serves as a proxy mapping the one to the other.

Let’s take a simple example. We’re moving, and want to know how much to put our house on the market for. We’re not real estate brokers, so we’re not great at estimating prices. But we do have a tape measure, so we are great at measuring the square footage of our house. Let’s say we look through a few years of real estate records and find a bunch of data points about how much houses go for and what their square footage is. We also have data about location, amenities like an in-house washer and dryer, and whether the house has a big back yard. We notice a lot of variation in prices for houses with different-sized back yards, but a pretty consistent correlation between square footage and price. Eureka! we say, and run around the neighbourhood naked, horrifying our neighbours! We can just plot the various data points of square footage against price, measure our own square footage (we do have our handy tape measure), and put that into a function that outputs a reasonable price!

This technique is called linear regression. And it’s the basis for many data science and machine learning techniques.
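To make this concrete, here’s a minimal ordinary-least-squares fit in plain Python. The housing numbers below are made up for illustration; the point is just that a straight line learned from (square footage, price) pairs becomes our pricing proxy.

```python
# Simple linear regression, fit by ordinary least squares.
# Illustrative data: (square footage, sale price) pairs.

data = [(800, 160_000), (1_000, 205_000), (1_200, 240_000),
        (1_500, 305_000), (2_000, 398_000)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict_price(square_feet):
    """The fitted line: our easy proxy for the hard pricing problem."""
    return slope * square_feet + intercept

print(predict_price(1_300))  # estimate for a 1,300 sq ft house
```

On this toy data the fitted slope is roughly $200 per square foot: measure the house, multiply, and out comes a defensible asking price.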


The big breakthroughs in deep learning over the past couple of years (note: these algorithms have existed for a while, but they now work thanks to more plentiful and cheaper data, faster hardware, and some very smart algorithmic tweaks) are extensions of this core principle, but they add the following two capabilities (which are significant):

  • Instead of humans hand-selecting a few simple features (like square footage or having a washer/dryer), computers transform rich data into a vector of numbers and find all sorts of features that might evade our measly human minds
  • Instead of only being able to model phenomena using simple straight lines, deep learning neural networks can model phenomena using topsy-turvy-twisty functions, which means they can capture richer phenomena like the environment around a self-driving car
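A toy illustration of that second point, with hand-picked weights (nothing here is learned, and real networks are vastly larger): a two-layer network with a ReLU nonlinearity computes XOR, a function no single straight line can represent.

```python
# A hand-built two-layer network computing XOR. The ReLU nonlinearity is
# what lets stacked layers bend a straight line into a "topsy-turvy"
# function; a purely linear model cannot separate XOR's outputs.

def relu(v):
    return max(0.0, v)

def tiny_network(x1, x2):
    h1 = relu(x1 + x2)       # hidden unit 1
    h2 = relu(x1 + x2 - 1)   # hidden unit 2
    return h1 - 2 * h2       # output layer

for a in (0, 1):
    for b in (0, 1):
        print(a, b, tiny_network(a, b))
```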

At its root, however, even deep learning is about using mathematics to identify a good proxy to represent a more complex phenomenon. What’s interesting is that this teaches us something about the representational power of language: we barter in proxies at every moment of every day, crystallizing the complexities of the world into little tokens, words, that we use to exchange our experience with others. These tokens mingle and merge to create new tokens, new levels of abstraction, adding form from the dust from which we’ve come and to which we will return. Our castles in the sky. The quixotic figures of our imagination. The characters we fall in love with in books, not giving a damn that they never existed and never will. And yet, children learn that dogs are dogs and cats are cats after only seeing a few examples; computers, at least today, need 50,000 pictures of dogs to identify the right combinations of features that serve as a decent proxy for the real thing. Reducing that quantity is an active area of research.

Growth Hacking: 10 Friends in 14 Days

I’ve spent the last month in my new role at integrate.ai talking with CEOs and innovation leaders at large B2C businesses across North America. We’re in that miraculously fun, pre product-market fit phase of startup life where we have to make sure we are building a product that will actually solve a real, impactful, valuable business problem. The possibilities are broad and we’re managing more unknown unknowns than found in a Donald Rumsfeld speech (hat tip to Keith Palumbo of Cylance for the phrase). But we’re starting to see a pattern:

  • B2C businesses have traditionally focused on products, not customers. Analytics have been geared towards counting how many widgets were sold. They can track how something moves across a supply chain, but cannot track who their customers are, where they show up, and when. They can no longer compete on just product. They want to become customer centric.
  • All businesses are sustained by having great customers. Great means loyalty, alignment with the brand, and a high lifetime value. They buy, they buy more, they don’t stop buying, and there’s a positive association when they refer the brand to others, particularly others who behave like them.
  • Wanting great customers is not a good technical analytics problem. It’s too fuzzy. So we have to find a way to transform a big objective into a small proxy, and focus energy and efforts on doing stuff in that small proxy window. Not losing weight, but eating an egg instead of pancakes for breakfast every morning.

Silicon Valley giants like Facebook call this type of thinking growth hacking: finding some local action you can optimize for that is a leading indicator of a long-term, larger strategic goal. The classic example from Facebook (which some rumour to be apocryphal, but it’s awesome as an example) was when the growth team realized that the best way to achieve their large, hard-to-achieve metric of having as many daily active users as possible was to reduce it to a smaller, easy-to-achieve metric of getting new users up to 10 friends in their first 14 days. 10 was the threshold for people’s ability to appreciate the social value of the site, a quantity of connections sufficient to drive the dopamine hits that keep users coming back.***** These techniques are rampant across Silicon Valley, with Netflix optimizing site layout and communications when new users join given correlations with potential churn rates down the line, and Eventbrite making small product tweaks to help users understand they can use the tool to organize as well as attend events. The real power they unlock is similar to that of compound interest in finance: a small investment in your twenties can lead to massive returns after retirement.

Our goal at integrate.ai is to bring this thinking into traditional enterprises via a SaaS platform, not a consulting services solution. And to make that happen, we’re also scouting small, local wins that we believe will be proxies for our long-term success.

Conclusion

The spirit of this post is somewhat similar to a previous post about artifice as realism. There, I surveyed examples of situations where artifice leads to a deeper appreciation of some real phenomenon, like when Mendel created artificial constraints to illuminate the underlying laws of genetics. Proxies aren’t artifice, they’re parts that substitute for wholes, but enable us to understand (and manipulate) wholes in ways that would otherwise be impossible. Doorways into potential. A shift in how we view problems that makes them tractable for us, and can lead to absolutely transformative results. This takes humility. The humility of analysis. The practice of accepting the unreasonable effectiveness of the simple.


*Shout out to the amazing Andrej Karpathy, who authored The Unreasonable Effectiveness of Recurrent Neural Networks and Deep Reinforcement Learning: Pong from Pixels, two of the best blogs about AI available.

**There’s no dearth of self-help books about resolutions and self-transformation, but most of them are too cloying to be palatable. Nudge by Cass Sunstein and Richard Thaler is a rational exception.

***The philosopher Thomas Hobbes was very resistant to some of the formal developments in 17th-century mathematics. He insisted that we be able to visualize geometric objects in our minds. He was relegated to the dustbins of mathematical history, but did cleverly apply Euclidean logic to the Leviathan.

****Leibniz and Newton were rivals in discovering the calculus. One of my favourite anecdotes (potentially apocryphal?) about the two geniuses is that they communicated their nearly simultaneous discovery of the Fundamental Theorem of Calculus–which links derivatives to integrals–in Latin anagrams! Jesus!

*****Nir Eyal is the most prominent writer I know of on behavioural design and habit in products. And he’s a great guy!

The featured image is from the Archimedes Palimpsest, one of the most exciting and beautiful books in the world. It is a Byzantine prayerbook–or euchologion–written on parchment that originally contained mathematical treatises by the Greek mathematician Archimedes. A palimpsest, for reference, is a manuscript or piece of writing material on which the original writing has been effaced to make room for later writing but of which traces remain. As portions of Archimedes’ original text are very hard to read, researchers recently took the palimpsest to the SLAC National Accelerator Laboratory and threw all sorts of particles at it really fast to see if they might shine light on hard-to-decipher passages. What they found had the potential to change our understanding of the history of math and the development of calculus!

Objet Trouvé

A la pointe de la découverte, de l’instant où pour les premiers navigateurs une nouvelle terre fut en vue à celui où ils mirent le pied sur la côte, de l’instant où tel savant put se convaincre qu’il venait d’être témoin d’un phénomène jusqu’à lui inconnu à celui où il commença à mesurer la portée de son observation, tout sentiment de durée aboli dans l’enivrement de la chance, un très fin pinceau de feu dégage ou parfait comme rien autre le sens de la vie. – André Breton, 1934

(At the point of discovery — from the moment when a new land comes into the field of vision for a group of explorers to that when their feet first touch the shore — from the moment when a certain savant convinces herself that she’s observed a previously unknown phenomenon to that when she begins to measure her observation’s significance — the intoxication of luck abolishing all notions of time, a very thin paintbrush* unlocks, or perfects, like nothing else, the meaning of life.)

I have a few blog post ideas brewing but had lost my weekly writing momentum in the process of moving from New York City to Toronto for my new role at integrate.ai. It’s incredible how quickly a habit atrophies: the little monkey procrastinator** in my mind has found many reasons to dissuade me from writing these past two weeks. I already feel my mind intaking the world differently, without the same synthetic gumption. Anxiety creeps in. Enter Act of Will stage left, sauntering or skipping or prancing or curtseying or however you’d like to imagine her. A bias towards action, yes, yes indeed, and all those little procrastination monkeys will dissipate like tomorrow’s bug bites, smeared with pink calamine lotion bought on sale at Shoppers Drug Mart.

But what to write about? That is (always) the question.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. A swan stretches her neck to bite little nearby ducks as the lady with her ragged curly hair — your hair at 60, dear Kathryn — chuckles in delight, arms akimbo and crumbs askance, by the docks on the shore. The Asian rowers don rainbow windbreakers, lined up in a row like the refracted waves of a prism (seriously!). What do I write about? Am I ready to write about quantum computing and Georg Cantor (god not yet!), about why so many people reject consequentialist ethics for AI (closer, and Weber must be able to help), about the talk I recently gave defining AI, ML, Deep Learning, and NLP (I could do this today but the little monkey is still too powerful at the moment), about the pesky health issues I’m struggling with at the moment (too personal for prime time, and I’ll simply never be that kind of blogger)? About the move? About the massive changes in my life? About how emotionally charged it can be to start again, to start again how many times, to reinvent myself again, in this lifestyle I can’t help but adopt as I can’t help but be the self I reinforce through my choices, year after year, choices, I hope, oriented to further the exploration into the meaning of life?

Associative Memory got a bit sidetracked by the ever loquacious Stream of Consciousness. Please do return!

Take 2.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. Searching for something to write about. Well, what about drawing upon the objet trouvé technique the ever-inspiring Barbara Maria Stafford taught us in Art History 101 at the University of Chicago? According to Wikipedia, objet trouvé refers to “art created from undisguised, but often modified, objects or products that are not normally considered materials from which art is made, often because they already have a non-art function.”*** Think Marcel Duchamp’s ready-made objects, which I featured in a previous post and will feature again here.

One of Marcel Duchamp’s ready-made artworks.

But that’s not how I remember it. Stafford presented the objet trouvé as a psychological technique to open our attention to the world around us, helping our minds cast wide, porous, technicolor nets to catch impressions we’d otherwise miss when the wardens of the pre-frontal cortex confine our mental energy into the prison cells of goals and tasks, confine our handmaidens under the iron-clad chastity belt of action. (Enter Laertes stage left, peeking through only to be quickly pulled back by Estragon’s cane.)

You see, moving to a new place, having all these hours alone, opens the world anew to meaning. We become explorers having just discovered a new land and wait suspended in the moment before our feet graze the unknown shore. The meaning of connections left behind simmers poignantly to tears, tears shed alone, settling into gratitude for time past and time to come. Forever Young coming on the radio surreptitiously in the car. Grandpa reading it like a poem in his 80s, his wisdom fierce and bold in his unrelenting kindness. His buoyancy. His optimism. His example.

Take 3.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. What do I see? What does the opened net of my consciousness catch? This.

Mon objet trouvé

It was more a sound than a sight. The repetition of the moving tide, always already**** there, Earth’s steady heartbeat in its quantum entanglement with the moon. The water rising and falling, lapping the shores with grace and ease under the foggy morning sky. Stammering, after all, being the native eloquence of fog people. The sodden sultriness of Egdon Heath alive in every passing wave, Eustacia’s imagination too large and bold for this world, a destroyer of men like Nataraja in her eternal dance.

Next, my mind saw this (as featured above):

vide

And, coincidentally, the woman on the France Culture podcast I was listening to as I ran uttered the phrase épuisée par le vide. 

Exhausted by nothingness. The timing could not have been more perfect.

It’s in these moments of loneliness and transition that very thin paintbrushes unlock the meaning of life. Our attention freed from the shackles of associations and time, left alone to wander labyrinths of impressions, passive, vulnerable, seeking. The only goals to be as kind as possible to others, to accept without judgment, to watch as the story unfolds.


* I don’t know how to translate pinceau de feu, so decided to go with just paintbrush. Welcome a more accurate translation!

** Hat tip to Tim Urban’s hilarious TED talk. And also, etymology lovers will love that cras means tomorrow in Latin, so procrastinate is the act of deferring to tomorrow. And also, hat tip to David Foster Wallace (somewhat followed by Michael Chabon, just to a much lesser degree) for inspiring me to put random thoughts that interrupt me mid sentence into blog post footnotes.

*** Hyperlinks in the quotation are the original.

**** If you haven’t read Heidegger and his followers, this phrase won’t be as familiar and potentially annoying to you as it is to me. Continental philosophers use it to refer to what Sellars would call the “myth of the given,” the phenomenological context we cannot help but be born into, because we use language that our parents and those around us have used before and this language shapes how we see what’s around us and we have to do a lot of work to get out of it and eat the world raw.

Commonplaces

My standard stamina stunted, I offer only a collection of the most beautiful and striking encounters I had this week. To elevate the stature of what would otherwise just be a list (newsletters are indeed merely curation, indexing valuable only because the web is too vast), I’ll compare what follows to an early-modern commonplace book, the then popular practice of collecting quotations and sayings useful for future writing or speeches. True commonplaces, loci communes, were affiliated with general rules of thumb or tokens of wisdom; they played a philosophical role to illustrate the morals of stories in classical rhetoric. The likes of Francis Bacon and John Milton kept commonplace books. The most interesting contemporary manifestation of the practice is Maria Popova’s always delightful Brain Pickings. Popova, moreover, inspires the first selection in today’s list.

What delights me the most in compiling this list is that I can’t help but do so. There is much change afoot, and I wanted to grant myself the luxury of taking a weekend off. But I couldn’t. My mind will remain restless until I write. It’s a wonderful sign, these handcuffs of habit.

Without further ado, I present a collection of things that were meaningful to me this week:

Euclid alone has looked on beauty bare

Monday evening, my dear friend Alfred Lee and I walked 45 minutes to Pioneerworks in Red Hook to attend The Universe in Verse. It was packed: the line curved around the corner and slithered down Van Brunt street towards the water and, lemmings, we rushed to get two slices of pizza to stave off our hunger before the show. It was a momentous gathering, so touching to see over 800 people gathered to listen to people read poetry about science! Maria Popova introduced each reader and spoke like she writes, eloquence unparalleled and harkening the encyclopedic knowledge of former days. It was a celebration of feminism, of the will to knowledge against the shackles of tyranny, of minds inquisitive, uniting in the observation of nature always ineffable yet craftily crystallized under the constraints of form.

A portrait of Edna St. Vincent Millay

My favorite poems were those by Adrienne Rich and this sonnet by the very beautiful Edna St. Vincent Millay.

Euclid alone has looked on Beauty bare.
Let all who prate of Beauty hold their peace,
And lay them prone upon the earth and cease
To ponder on themselves, the while they stare
At nothing, intricately drawn nowhere
In shapes of shifting lineage; let geese
Gabble and hiss, but heroes seek release
From dusty bondage into luminous air.
O blinding hour, O holy, terrible day,
When first the shaft into his vision shone
Of light anatomized! Euclid alone
Has looked on Beauty bare. Fortunate they
Who, though once only and then but far away,
Have heard her massive sandal set on stone.

A glutton for abstraction and the traps of immutability and stasis, I found this poem gripping. I cannot help but imagine a sandal etched in white marble at the end, the toes of Minerva immutable, inexorable, ineluctable in the hallways of the Louvre, the memories of a younger self thirsting to understand our world. The nostalgia ever present and awaiting. Euclid declaring with such force that for him, σημεῖον sēmeion, a sign or mark, meant a point, that which has no parts. And from this point he built a world of beauty bare.

Nutshell

I’m reading McEwan’s latest, Nutshell. It’s marvelous. A contemporary retelling of Hamlet, where the doubting antihero is an unborn baby observing Gertrude and Claudius’s (Trudy and Claude, in the retelling) murderous plot from his mother’s womb.

There are breathtaking moments:

“But here’s life’s most limiting truth–it’s always now, always here, never then and there.”

“There was a poem you recited then, too good for one of yours, I think you’d be the first to concede. Short, dense, bitter to the point of resignation, difficult to understand. The sort that hits you, hurts you, before you’ve followed exactly what was said…The person the poem addressed I think of as the world I’m about to meet. Already, I love it too hard. I don’t know what it will make of me, whether it will care or even notice me…Only the brave would send their imaginations inside the final moments.”

I have a post arguing against immortality brewing, to respond to Konrad Pabianczyk and continue the relentless fight against the Silicon Valley Futurists. It’s not possible to love the world too hard if you never die. There’s something right about the Freudian death drive, the lyricism of the brink of decay. Gracq harnesses it to create the ecstatic psychology of Au Chateau d’Argol. Borges describes how the nature of choice, the value we ascribe to our experiences–the beauty of coincidence, the feeling of wonder that two minds might somehow connect so deeply that, as for the angel made man in Wenders’s Wings of Desire, the voices finally stop, where the loneliness halts temporarily to usher in aloneness in peace, true aloneness in the company of another, another like you, with you deeply and fully–would disappear if we knew that the probability of experiencing everything and the possibility of doing everything would go up if we could indeed live forever in this continual eternal return. And even way back when in Mesopotamia, in the days of the great Gilgamesh, the gods do grant Utnapishtim immortality, but on the condition of a life of loneliness, a life lived “in the distance, at the mouth of the rivers.”

Style is an exercise in applied psychology

On Thursday morning, I listened to Steven Pinker (coincidentally, or perhaps not so coincidentally, in dialogue with Ian McEwan, McEwan with his deep voice, the English accent a paradigm of steadied wisdom worth attending to) talk about good writing on an Intelligence Squared podcast recorded in 2014. He basically described how bad writing, in particular bad academic writing, results from the psychological malady of having to preemptively qualify and defend every statement you make against the pillories of peers and critics. His talk reminded me of David Foster Wallace’s essay Authority and American Usage, what with collapsing the distinction between descriptivist and prescriptivist linguistics and exposing the unseemly truth that style, diction, and language index social class. The gem I took away was Pinker’s claim that style is an exercise in applied psychology, that we must consider who our readers are, what they’ve read, how they speak and think, and adapt what we present to meet them there without friction or rejection.

Foster Wallace’s brilliant essay reviews a dictionary, and in doing so, critiques all the horrendous faux pas we make using the English language.

What’s freeing about this blog is that, unlike most of my other writing, I forget about the audience. There is no applied psychology. It’s just a mind revealed and revealing.

Music

Coda by Aaron Martin/Christoph Berg caught my attention yesterday evening as I walked under the bridge from the Lorimer station and waited, reading, in front of a bar in Williamsburg.

I had this Proustian madeleine experience last Sunday when The Beatitudes, by Vladimir Martynov, showed up on my Spotify Discover Weekly list. The Kronos Quartet version is featured in Paolo Sorrentino’s The Great Beauty. Hearing the music transported me back to a wintry Sunday morning in Chicago, up at the Music Box theater to see that film with the man I lived with and loved at the time. I relived this love, deeply. It was so touching, and yet another type of experience I just don’t think would be as powerful and impactful if I weren’t mortal, if there weren’t this knowledge that it’s no longer, but somehow always is, a commonplace as old as Greece, tucked away like shy toes under the sandal strap of Minerva’s marble shoe, cold, material, inside me, deeply, until I die, to be unlocked and unearthed by surprise, as if it were again present.

The image is of John Locke’s 1705 “New method of a commonplace book.” Looks like Locke wanted to add some structure to the Renaissance mess of just jotting things down to ease future retrieval. This is housed in the Beinecke rare book library at Yale. 

Homage to Derek Walcott

I remember it like it was yesterday. Like it was right now.

It was late June, 2007. My dear friend Andrew Gradman, whom I cherish so deeply, who shares my birthday and, unfortunately, some of the struggles of my temperament, was visiting me in Germany. We left the little studio apartment I lived in for a year in Sachsenhausen, a neighborhood in Frankfurt, stopped at Documenta in Kassel (which I appreciated far more than Andrew), trained our way to Berlin. I don’t remember how we spent most of our time in Berlin, but clearly recall the evening where we wandered towards the east side, following the traces of the former wall, and, without purpose or plan, arrived at the KulturBrauerei, a former brewery now converted into an arts and events hall. It just so happened they were hosting the opening night of an international Poesiefestival there that evening. Andrew was kind enough to indulge my desire to buy tickets and check it out.

The room was full but not stuffy or crowded. Late arrivers, we sat very close to the first row. Andrew was antsy, as he didn’t speak German and wasn’t jazzed about the prospect of having German poetry wash over him for an hour or two. Empathic to a fault, I sensed and lived his alienation, but selfishly tolerated his discomfort as I was excited about the event. The experience was in keeping with those I’d had at the many Literaturhäuser (literally, literature houses) across Germany, small cultural centers that house live novel and poetry readings. While living in Frankfurt, I caught the last three readings of the last chapters in the last volume of Marcel Proust’s In Search of Lost Time: a man named Peter Heusch had taken 13 years (!!) to read the entire work aloud, in one-hour installments on Fridays at 6 or 7 pm.

The room hushed. The festival started. The MC said she was delighted and honored to introduce the first poet, a man named Derek Walcott, to the stage.

I’d never heard of Walcott, but quickly understood he was a big deal. Nobel prize and such.

He walked slowly to the podium and paused. Breathed in and out a few times. His demeanor, his entire being, exuded the same epic energy we read in his verse. Like the storyteller in Wenders’ Wings of Desire, his gestures and eyes relay the dead poets that gird his heart and his mind, Homer and Baudelaire and Yeats and everyone, just everyone, processed anew under the heavy anchor of honesty, of experience. He was old. He moved slowly with the poise of an oracle. He promised wisdom.

And then he read. He read the first book of The Prodigal, a long-form poem he first published in 2004. I don’t know how to adequately capture the emotions that swept over me listening to him read. All I can say is that, with each passing moment, my heart opened more. The alienation and discomfort passed. It was pure presence. This man, with his old skin and his oracular voice, relived the experiences he’d had as a young, erudite Caribbean man–or even as an old, erudite Caribbean man, the self of today telescoped through the self of yesterday–living in Boston, a Brahmin in nature and soul, just living in a body that looked different from everyone around him. A young man riding on the train up the northeastern corridor, watching the herons and grasses graze the shore, as past voices, echoes of Emerson and Thoreau, rose again into the thirst of his curiosity. I heard and saw myself, and reveled at the deep kinship that existed between me, a 23-year-old white girl, and him, a 77-year-old black man. I felt fused with him. I felt love. I felt such deep wonder and gratitude that chance had brought us there that evening. I don’t remember The Prodigal in detail, but a few scenes have been grafted into my mind. By now, I’ve given it as a Christmas gift to my mom and many dear friends. What I recall acutely is the depth, intensity, purity of the emotions I felt when he read. Derek Walcott gave me art.

He died old and, it appears, lived well. He was a magnificent poet. I heard Love After Love on NPR this morning, and, as always with Walcott, broke into tears. He captures the cadence of a self grateful for being, a self finally settling into love. A beauty always available to us all.

Love After Love

The time will come
when, with elation
you will greet yourself arriving
at your own door, in your own mirror
and each will smile at the other’s welcome,

and say, sit here. Eat.
You will love again the stranger who was your self.
Give wine. Give bread. Give back your heart
to itself, to the stranger who has loved you

all your life, whom you ignored
for another, who knows you by heart.
Take down the love letters from the bookshelf,

the photographs, the desperate notes,
peel your own image from the mirror.
Sit. Feast on your life.

The image is Rembrandt’s Prodigal Son, which I also fortuitously found in the Hermitage in Saint Petersburg. As with Walcott, I didn’t know this painting existed before it found me. I didn’t seek it out as a coveted must-see tourist experience. It wasn’t the Mona Lisa at the Louvre. It was a freezing February morning, I wandered the Hermitage all alone, and, upon turning a corner, saw this painting. I froze in my tracks and started to cry. Never had I seen contrition and forgiveness and unconditional love so delicately represented, the one foot without a shoe, the curling toes, the father’s ease after all those years of worry and fear, and the jealous, resentful gaze of the good son standing tall and ominous, watching.

One Feminine Identity

On January 21, women marched on Washington. They marched on New York City. On Seattle. On San Jose. On Antarctica. On Auckland. On Kolkata. 4,956,422 women and men marched in 673 locations. The energy across social media was contagious and powerful, at least inside the liberal and progressive continent of the internet I inhabit.

On January 22, I drafted a blog post articulating what I think it means to champion women’s rights. To amplify rhetorical impact, I used general, normative statements: “Women should have reproductive rights. They should have the right to use birth control and have a legal and safe abortion. They should…” I didn’t have time to finish the post, but showed the draft to my partner. His primary critique was that I – falsely and perhaps offensively – made it seem like my particular experience holds – or, even worse, should hold – for all women, that my way of being in the world was the way all women should be in the world.

That’s the opposite of what I wanted to communicate. Indeed, for me, to champion women’s rights is to champion human rights. It is to champion tolerance. To champion freedom to be oneself and express oneself, the freedom to think, question, create, and criticize. To champion the mental and spiritual work required to deeply accept alterity and difference, to endorse and cultivate the skills of citizenship and democracy. And not just in the realm of discourse, but in the realm of intimacy and action. To go to places that create discomfort and abide in those places, normalizing the difference, truly absorbing a worldview that, a termite, nibbles away at the foundations and security of what we once believed. This, for me, is the essence of what millions of people marched for on January 21. Maybe it’s called empathy with a generous dose of reflection.

So much has happened since then. Just yesterday, I introduced a brilliant, capable, female Muslim data scientist to an entrepreneur in Toronto. President Trump’s Executive Order has pushed her out of the United States. I hope she will prosper in a country more accepting of her talents.

I find it difficult to write in these turbulent times because it feels immoral to write about anything but the most pressing issue of the day. Should I write about immigrant rights? Should I write about the Yemeni bodega strike I experienced Thursday night on my way home from work? Or should I insulate my writing from politics and allow myself to explore humanistic aspects of artificial intelligence, my current professional focus? The imperative to write about social and political issues stems in great part from the ostracism I imagine I’d experience if I were to write about something technical and abstract at this critical moment in history. And this very imperative, as it happens, is at the heart of one branch of the ethics of artificial intelligence. I’ll write an in-depth post on this soon, but for now suffice it to say that I agree with Joanna Bryson that “pain, suffering, and concern for social status are things essential to a social species, and as such they are integral to our intelligence.” No matter how intelligent they get, AIs will not suffer from social exclusion like we do. There’s a lot to that.

Circling back to women’s rights, I would like to sketch one particular take on feminine identity. It is my take, shaped by my experiences. It will resonate with some and offend others. I accept that. It’s all I can give.

A woman, I should have reproductive rights. I should have the right to use birth control and have a legal and safe abortion. This is important so that motherhood is an active choice, so that I am in a position to raise my children fittingly, responsibly, joyfully, and to the best of my abilities. As such, this is more my child’s right to the best life possible than my own right to use my body as I like.

A woman, I should have the right to excel in my career. I work in business and strive to be CEO of a tech company in the future. Were I in politics, I would want the right to be president (naturally only after meriting the role by busting my ass for years and years to understand the astounding complexity of domestic and international affairs). Were I a stay-at-home mom, I would want the right to be a mom, to take pride in raising my children to become awesome people and citizens, like the Abigail Adamses of the early American Republic.

A woman, I should have the right to be treated and seen without gender in some social contexts and with gender in other social contexts. Negotiating with men in suits at hedge funds and Wall Street banks, I would like to be seen as a man in a suit; not even as a man, but as a rational vessel executing a function. A business brain in a vat. In other contexts, I would like the right to embody my femininity, to feel myself as beautiful, to know the timbre in my voice, to deliberately craft elegance in my gestures, and to have this unashamedly be part of who I am. I do not believe there need be blanket systematicity to gender or feminism. As with many other aspects of modern, secular identity, gender itself can be latent or activated according to context.

A woman, I should have the right to embrace my professional identity in sales and marketing without the branded (as in scarlet letter) shame that these are roles more often occupied by women. Most “women in tech” energy is devoted to technical roles (think Grace Hopper). I think that’s great, and certainly lament how few women I see at highly technical security or engineering conferences. What I don’t think is great is constantly being treated as a second-class citizen just because I do not spend my days coding. I love math. I love geeking out on the details of how software works. I love statistical models, and love how satisfying it is to learn how much more there is to learn every day. But I also love leading a life of action and interaction. Meeting people every day, encountering curious folks and territorial folks, listening to them, asking them questions, and finding a way to make technology valuable and interesting for both their company and their personal ambitions within their company. I should have the right to see value in the task of building bridges between technical and non-technical communities, in ushering technology from academic experiment to impactful commercial product. As I explored in my dissertation, there is a difference between l’esprit géométrique (slow, logical-deductive thinking in math proofs) and l’esprit de finesse (quick, synthetic judgments in response to unanticipated information in social contact). Doing sales and marketing well requires both. Left-brain, right-brain stuff. I’m tired of getting flak for allowing myself to feel alive.

A woman, I should have the right to understand and accept the lifestyles and practices of other women who live radically differently than I. Saba Mahmood’s Politics of Piety helped me appreciate how, counter to our Western Liberal Mindset, female Islamic dress and practices can actually be lived as empowering expressions of self-worth and piety, not enslaving repression. My dear friend Gillian Power recently became a woman after spending 40 years trapped in her male body. Post-transition, she and her wife continue to raise their two cherubic daughters. I will never exactly appreciate the fear Gillian felt when she wrote her coming out letter to the management of her conservative law firm. But I read the drafts, and, having myself suffered from alienation and fear of judgment, felt deep joy in being part of the circle that enabled her to come more fully into herself.

There are other rights women should have. Expressing any of this is delicate, as it exposes deeper aspects of self than those I normally reveal in the safe, abstract space of math and technology. As with everything else on this blog, it’s as near as may be.


What it means to write regularly

2:09 pm on January 2, 2017 is as good a time as any to finally start blogging.

And I do mean blogging, not writing. I created this website August 28, 2016, having discovered that the New York venture capital power couple Fred and Joanne Wilson had mustered the discipline to blog daily on their AVC and Gotham Gal sites. I found it liberating that their posts aren’t always (or even often) groundbreaking. Sometimes they just post pictures of and brief commentary on shows they’ve seen (Joanne recently posted about Manifesto, which features Cate Blanchett in 20 avatars reading historic manifestos) or links to interesting tech videos (Fred is investing in blockchain companies and posts about the technology frequently). I thought to myself, “hell, I could do that!” What’s most interesting about their blogs is the fact that they approach writing as a daily practice. Velcro to the quotidian. Blogging akin to but different from journal writing: akin in that what’s written pulses our (my) daily thoughts, discoveries, experiences, writing a space to actively process what we (I) live and thereby transform what we live, sedimenting the seen, heard, read, and watched into a different area of our (my) consciousness than that available to passive experience (writing a documentation very different from the flash activity of selfies and social media posts, which mediate present experiences and change our methods of recollection); different in that what I (we?) write for a public audience won’t, given who I am, be as raw and direct as what I write for myself and for the men I have loved. Now, I share far more private thoughts with friends and strangers in face-to-face conversations than I think is appropriate, polite, or beneficial. One of my New Year’s resolutions is to cultivate interiority. But the spoken word is, at least partially, transient (I qualify this because the older I get, the more I appreciate, and often regret, the consequences of what I’ve said and shown to family and friends). Transient, directed, and selectively shared. 
What’s written stays there, exposed, available to whoever decides to look. This fear of judgment is what makes public writing – and art, and so many endeavors – so hard. It’s also hard not to write to someone, as Coetzee discussed in Elizabeth Costello. I’ve always found that the blank image of an open audience folds in on itself and lands at the iron gates of the superego. That is to say, to write for everyone is to write for no one is to write for oneself. My fear, finally, is much more palpable with some subjects than others. I prefer to hide behind the conventions of business, math, and philosophy, to write in a way that a man would never deride as feminine. In this blog, I am going to try to expose more personal thoughts. I do think some of the prose will be worth reading.

I’m a sucker for daily practices (and understand that daily means almost daily). So much so that I think the strongest daily practice I have is my habit of setting new daily practices. Write a blog every day. Study at least 20 minutes of Mandarin every day. Meditate at least 15 minutes every day. Refresh my other foreign languages. Exercise. Find a way to practice gratitude. Keep a journal. Play violin. The point is it’s silly. The point is it’s a way to avoid the responsibility and prioritization required to actually build a new skill, to define oneself in a particular way. Instead of making hard choices, and picking something at the expense of other things, I often wear myself thin trying to do lots of things, trying to inhabit the liminal space of potential as opposed to closing the door on a different possible world. That’s ok: I believe there’s a space for generalists in this world, even though I’ve always envied those who truly excel at one art or skill. But what’s healing about doing something regularly is that there’s space to mess up. It doesn’t always have to be great. You just have to sit down and do it. I love how routine can remove the agony of choice, although I do agree with Ruth Chang that it’s helpful to view hard choices as life presenting us with opportunities to exercise normative agency. But habits are the fruit of lots of little choices, the how of our life, which define us so concretely that they can challenge projections of who we think we are.

This post did not turn out to be what I thought it would be. I originally entitled it “2017 resolutions,” and realized midway into the first paragraph that wasn’t going to be the right title. What I perhaps love most about writing (how wonderful that in the course of this post I may have mustered the courage to write) is giving ourselves over to insights, to the truth that cannot help but be told if we allow ourselves to follow where the writing takes us. I’ve found that writing helps me crystallize my understanding of technical concepts and excavate the caverns of my emotions. I’m surprised by how personal I’ve allowed this to become. My New Year’s resolutions include intentions to cultivate interiority, honesty, bravery, and self-love. This feels like a step in the right direction.

The image is Mikhail Vrubel’s Demon Seated, painted in 1890. I find Vrubel so wonderfully lyrical, this image betokening androgyny, loneliness, strength, and introspection, and thereby a fitting representation for my personal experience of writing.