Of Thread and Mermen

I bought a dress Tuesday evening. It’s silk and it billows, the cut loose, elegant, harkening flappers and 1930s France. The print seems sampled directly from a Wes Anderson film. Featured in the image above, it has a Jerry Garcia merman with sunglasses and a ping pong paddle rippling regularly across the silk. The design house, La Prestic Ouiston*, hails from a family that maintains a traditional oyster farm in Brittany. The brand’s manifesto celebrates “craft, tradition, virtuosity, always [seeking] to work and to hightlight** the craftsmanship of artisans by producing unique pieces such as garments with embroideries hand-made in Rwanda or clogs made in Brittany and painted in Paris by an artist.” It’s a small brand that proclaims the local, that “dedicates itself to the return of slow fashion.” The silk is infused with mists from the nearby salt marshes in Guérande, it billows brine gusts and ocean raked into flat white squares. Eyes closed, I imagine walking in merman silk barefoot, bare legged, over the desiccated marshes, shaved salt embedded flake by flake into foot arches, falling flake by flake back to a new place of rest, ever migrant with the opalescent tides. Oysters turgid under rugged shells, their taste reminiscent of our common ancestry as ocean. Sweat and blood betokening a past too old to be remembered but by cortisol and heartbreak.***

The salt marshes in Guérande, just south of Brittany, where you can find the La Maison Mer oyster farm affiliated with La Prestic Ouiston. I visited the marshes in the summer of 2009, learning that the fleur de sel is the crystal crust that forms atop the rectangular marshes. They still use rakes to shave off the salt, collecting it into piles like those in the photo.

I purchased the dress at GASPARD, my favourite clothing boutique in Toronto. The owners are attentive and visionary; they comb the world to find designers with beautiful clothing backed by stories and introduce their unique clothing to Toronto. The first time I visited, I immediately felt the ease and grace of a new relationship. I told Ayalah, who was working at the boutique when I bought the dress, that I speak in public frequently and was excited to wear such a rad dress on a panel the next day. She invited me to send photos in the dress to Richard, one of GASPARD’s owners, as it was currently his favourite. And then she asked if work ever paid for my wardrobe, given my public-facing role. I laughed the idea off as absurd, as I work for a small startup and can only imagine what our controller would think if I included a line item for a merman dress on my January expense report.

But her suggestion sparked an idea. How awesome would it be to collaborate with a design house like La Prestic Ouiston on a wardrobe for talks and public appearances, to design an identity either tailored to or able to challenge an audience, in the same way that, as speaker, I shift my approach, content, and tone depending on whether I’m addressing a super technical artificial intelligence research audience, a super practical business audience who need just enough technical detail to feel empowered but not so much as to feel alienated, a passionate and righteous sociology and critical theory audience who want to unpack the social implications of new technologies and do something to fix them, or a muted, constrained policy audience fascinated by the potential of a new conceptual framework to think about what it might mean to regulate AI but trapped within the confines of legal precedent and the broad strokes of the electorate?

What I imagine isn’t sponsorship à la Tiger Woods or pick your favourite athlete. It isn’t trendsetting or luxury branding à la pick your favourite actress wearing Alexander McQueen or Dior or Armani or Gucci or Carolina Herrera on the red carpet at the Oscars. It’s more like Bowie or Lady Gaga or Madonna, Protean shapeshifters whose songs and performances embody a temporary persona that vanishes into something new in the next project. I imagine a collaboration with an artist or designer. Couture not as fitting a dress to individual proportions but as context, each performance exposing its roots, not just measuring bust and waistlines but identity and persona, my providing constraints and parameters and abandoning myself to the materials, shapes, patterns, folds, twists, buttons, sleeves, lengths, tones, textures the designer felt appropriate for a given performance. Not unlike the dance between authorship and abandonment Kyle McDonald experiences in algorithmic art, where the coder sets the initial parameters of the algorithm and experiences what results. Design is a mode of creation girding both fashion and product marketing, both ethnographies of what exists today, techniques to tweeze out mental models that guide behaviour and experience and emotion, but that always go beyond observation, that infuse empiricism with the intuition of what could be possible, of how today’s behaviours could be improved, changed, optimized to create something new.

Kyle McDonald has been doing a lot of work with algorithmically generated music of late, and featured this image in a recent post about using neural nets for music.

I knew from the outset the idea would be polarizing. Fashion and brand sponsorship are at home in sports because athletes are more than athletes; they are cultural icons. They’re at home in entertainment, where physical appearance and beauty are part and parcel of stardom, whether we like it or not. But they’re not at home in math, quantitative fields in academia, or technology. Which is why the topic is thorny, uncomfortable, interesting.

I was concerned about the potential negative reaction to the very post I’m writing (you’re reading), so I shopped the idea with a few people to tally reactions.****

Those in fashion were unfazed: “Fashion x public figures is as old as bread, it’s just a question of finding someone up for a collaboration.”

The way I engage with my younger, technical, male colleagues inspired me to present the idea to them as an act of badass empowerment. They saw and heard what they normally see and hear from me. I could have been talking about research. I could have been talking about speaking on cybersecurity to a bunch of generals. They didn’t hear me speak about fashion. They heard the persona I embody when I work with them, one where I am at once trusted mentor and role model for the leadership positions they want to occupy someday. My being a woman in amazing clothes on stage was a means of embodying something empowering for them, perhaps even masculine.

My ambitious female colleague, passionate about diversity and inclusion and also interested in clothing and style, said, “Gosh, can I do that too?” She and I inhabit our positions as strong women in technology differently. A jack of all trades, she owned branding efforts early on and got excited about the prospect of our having bright pink business cards. I was appalled, as I couldn’t imagine myself giving a bright pink business card to the scientists and executives I typically engage with at conferences. At the time, I felt it was important to deliberately embody androgyny, but elegant androgyny, to wear A-lines and black and neutral professional clothing, but nonetheless extremely feminine clothing, this subtle dance that both erases and underlines gender, but that is so different from the direct statement of hot pink. Grappling with the difference teases out the subtleties here.

Friends who openly eschew gender essentialism commented on the thorniness of the issue, likely engaging with my own hesitation, which muted the brazen excitement I embody with my younger colleagues. Here conversations waxed consequentialist, focusing on the fact that, whether intended or not, deliberately collaborating with a designer would reinforce stereotypes aligning women with clothing, while brogrammers perform nonchalance in, well, standard brogrammer garb and icons like Steve Jobs perform asceticism that indexes the life of the mind by donning plain black sweater uniforms. I worried.

Some admonished me for pursuing the project, commenting on my responsibility to the brand identity of the various organizations with which I am associated professionally. This harked back to the split ethical imperatives I explored in Censorship and the Liberal Arts. For indeed, as professionals we sign a social contract where we trade unadulterated free speech and expression for the benefits of collaborating with others to build something and do something we’d be unable to accomplish ourselves. But the line between personal and professional brand is anything but clear, and varies greatly between companies and contexts. As evidenced by his world-class out-of-office emails*****, my partner John Frankel at ffVC falls a few standard deviations from the norm, while also insisting on rigour and consistency in the firm’s positions on investment theses. Friends in government rarely express their personal opinions, ever beholden to their duties as representatives of a public body. This forces the question of how much the integrate.ai brand, for example, stands for personal expression. The nuances here are as delicate as those related to feminine identity: it’s our responsibility to embody the brand that supports our business goals, but I’ve always found that success emerges from the breath of fresh air promoted by authenticity.

What do I think?

I doubt the collaboration will come to be, at least not anytime soon. I spent a few days inhabiting an imaginary potential, thinking about how fun it would be to co-create outfits for different performances, one day a boxy Yamamoto, the next a flowery Dior, the next a Katharine Hepburn-inspired pants suit to index a potential future in politics. I remembered all the articles about Marissa Mayer’s style back in 2013, the fact that her having style was news for the tech industry. I reread Susan Fowler’s post about her disgusting experience at Uber and found another very touching post she wrote about what it feels like to be someone who “wants to know it all,” who lacks a singular destiny. I imagined peppering this post with myriad quotations from Ellen Ullman, my new hero, whose Life in Code I devoured with the attention and curiosity spurred by feeling prose so much in line with my own, by reading a vision of what I’d like to write and become.******  I thought about the responsibilities I have right now as a pseudo-visible woman in technology, as a pseudo-visible woman in venture, as a woman who doesn’t write code (yet!!) but serves as translator between so many different domains, who struggles with her identity but wouldn’t have it any other way, who wants to do what’s right for the thousands and thousands of young women out there watching, dreaming, yearning, ready to do amazing things in the world. I just want them to be themselves and not to fear and to create and to be free to become. To have a voice to shape the world. And to fucking wear beautiful clothing if that makes them happy, and alive.

I wore my merman dress on Wednesday on a panel with my friend Steve Woods and the CEO of Wysdom.AI. The audience comprised mostly men; I felt they paid attention to what I said, not what I wore. On Friday, another strong female leader in the Toronto AI community told me she admires my style, and asked where I buy my clothes. I referred her to GASPARD, delighted to support local entrepreneurs making the world more beautiful.


* It took some digging to find the primary designer behind La Prestic Ouiston. Her name is Laurence Mahéo. She looks unabashedly at the camera in the photos various media outlets have posted about her and her spectacular, singular existence. Her head often tilts slightly to the side. She doesn’t smile widely.

**Typo in the original (English translation from the original French).

***Isak Dinesen understood our oceanic roots, as in one of my favourite quotations: “The cure for anything is salt water: sweat, tears or the sea.” I remember hikes up Windy Hill in 2009 and 2010, mourning the loss of my first real love, tears, and sweat, and sea all needed to get back on my feet, love that broke me, that altered my course in life, that changed my emotions ever forward, instilling both negative patterns I still struggle with eight years later and positive patterns, widening my heart and permitting expressiveness I hadn’t known possible there prior. Memories fixed solid in my synapses, of such heightened emotional importance I will carry them with me intact until the day I die. He always knew that the self he saw and enlivened wasn’t the current me but the me he saw I might one day become, knew I was helplessly addicted to this promised self, as I knew he was helplessly addicted to the child I recovered in him, personhood long silenced, but for which he desperately yearned and was grateful to remember existed as a kernel of possibility.

****I had a hell of a time writing One Feminine Identity exactly one year ago today (curious how those things work; my father had a heart attack exactly one year after his father died, as I commemorate in this post). I was dating an ardent feminist at the time, who criticized me for the lack of rigour and systematicity in my approach to female empowerment. His critique lodged itself in my superego and bastardized my writing. I hedged so as not to offend anyone with what I assumed were offensive positions. Then, two other friends read the piece and criticized the hedging! I learned something.

*****This week, John’s out-of-office email featured this poem, which I sent to two colleagues as I felt they’d appreciate it:

Life is like a grain of sand;
it can slip through your fingers
at any time and be lost forever.
We must enjoy every minute
while we have it
in case that too
slips through our fingers.
Love is a fleeting thing
that passes all too quickly through our lives
unless we grasp it tightly
never letting it go.
Our lives are like a grain of sand
and will slip through our fingers
before we get to enjoy it thoroughly.
A Grain of Sand by David Harris

******Here is Ullman giving a talk at Google. https://www.youtube.com/watch?v=bCcVyuq9aRE

I took a photo of the featured image last Tuesday evening with my iPhone. The shadows arise from the ill-fitting black plastic cover that partially covers the lens, tailored for a previous iPhone release. The tag on the dress indicates that the merman’s name is Seb le Poisson. Seb is in the closet, awaiting his next appearance. I write in my pyjamas.

Exploration-Exploitation and Life

There was another life that I might have had, but I am having this one. – Kazuo Ishiguro

On April 18, 2016*, I attended an NYAI Meetup** featuring a talk by Columbia Computer Science Professor Dan Hsu on interactive learning. The talk was incredibly clear and informative, and the slides are worth reviewing in their entirety. But one slide in particular caught my attention (fortunately it summarizes many of the subsequent examples):

From Dan Hsu’s excellent talk on interactive machine learning

It’s worth stepping back to understand why this is interesting.

Much of the recent headline-grabbing progress in artificial intelligence (AI) comes from the field of supervised learning. As I explained in a recent HBR article, I find it helpful to think of supervised learning as the inverse of high school algebra:

Think back to high school math — I promise this will be brief — when you first learned the equation for a straight line: y = mx + b. Algebraic equations like this represent the relationship between two variables, x and y. In high school algebra, you’d be told what m and b are, be given an input value for x, and then be asked to plug them into the equation to solve for y. In this case, you start with the equation and then calculate particular values.

Supervised learning reverses this process, solving for m and b, given a set of x’s and y’s. In supervised learning, you start with many particulars — the data — and infer the general equation. And the learning part means you can update the equation as you see more x’s and y’s, changing the slope of the line to better fit the data. The equation almost never identifies the relationship between each x and y with 100% accuracy, but the generalization is powerful because later on you can use it to do algebra on new data. Once you’ve found a slope that captures a relationship between x and y reliably, if you are given a new x value, you can make an educated guess about the corresponding value of y.
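
To make that inversion concrete, here is a minimal sketch in plain numpy (the toy data and variable names are mine, not from the HBR piece): hand the program many particular x, y pairs and let it solve for m and b.

```python
import numpy as np

# Toy data: the "many particulars" we start from. The true relationship
# is y = 2x + 1 plus some noise, but the learner doesn't know that.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=100)

# Supervised learning in miniature: solve for m and b given the x's and y's.
m, b = np.polyfit(x, y, deg=1)
print(f"learned m = {m:.2f}, b = {b:.2f}")  # should land close to 2 and 1

# With m and b in hand, we're back to high school algebra:
# an educated guess about y for an x we've never seen.
x_new = 4.2
print("predicted y:", m * x_new + b)
```

The “learning” part is just that the same solve can be rerun as new pairs arrive, nudging the slope to better fit the data.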

Supervised learning works well for classification problems (spam or not spam? relevant or not for my lawsuit? cat or dog?) because of how the functions generalize. Effectively, the “training labels” humans provide in supervised learning assign categories, tokens we affiliate to abstractions from the glorious particularities of the world that enable us to perceive two things to be similar. Because our language is relatively stable (stable does not mean normative, as Canadian Inuit perceive snow differently from New Yorkers because they have more categories to work with), generalities and abstractions are useful, enabling the learned system to act correctly in situations not present in the training set (e.g., it takes a hell of a long time for golden retrievers to evolve to be distinguishable from their great-great-great-great-great-grandfathers, so knowing what one looks like on April 18, 2016 will be a good predictor of what one looks like on December 2, 2017). But, as Rich Sutton*** and Andrew Barto eloquently point out in their textbook on reinforcement learning,

This is an important kind of learning, but alone it is not adequate for learning from interaction. In interactive problems it is often impractical to obtain examples of desired behavior that are both correct and representative of all the situations in which the agent has to act. In uncharted territory—where one would expect learning to be most beneficial—an agent must be able to learn from its own experience.

In his NYAI talk, Dan Hsu also mentioned a common practical limitation of supervised learning, namely that many companies often lack good labeled training data and it can be expensive, even in the age of Mechanical Turk, to take the time to provide labels.**** The core thing to recognize is that learning from generalization requires that future situations look like past situations; learning from interaction with the environment helps develop a policy for action that can be applied even when future situations do not look exactly like past situations. The maxim “if you don’t have anything nice to say, don’t say anything at all” holds both in a situation where you want to gossip about a colleague and in a situation where you want to criticize a crappy waiter at a restaurant.

In a supervised learning paradigm, there are certainly traps that lead to faulty generalizations from the available training data. One classic problem is called “overfitting”, where a model seems to do a great job on a training data set but fails to generalize well to new data. But the critical difference Hsu points out in his talk is that, while with supervised learning the data available to the learner is exogenous to the system, with interactive machine learning approaches, the learner’s performance is based on the learner’s decisions and the data available to the learner depends on the learner’s decisions.

Think about that. Think about what that means for gauging the consequences of decisions. Effectively, these learners cannot evaluate counterfactuals: they cannot use data or evidence to judge what would have happened if they had taken a different action. An ideal optimization scenario, by contrast, would be one where we could observe the possible outcomes of any and all potential decisions, and select the action with the best outcome across all these potential scenarios (this is closer, but not identical, to the spirit of variational inference, but that is a complex topic for another post).

To share one of Hsu’s***** concrete examples, let’s say a website operator has a goal to personalize website content to entice a consumer to buy a pair of shoes. Before the user shows up at the site, our operator has some information about her profile and browsing history, so can use past actions to guess what might be interesting bait to get a click (and eventually a purchase). So, at the moment of truth, the operator says “Let’s show the beige Cole Haan high heels!”, displays the content, and observes the reaction. We’ll give the operator the benefit of the doubt and assume the user clicks, or even goes on to purchase. Score! Positive signal! Do that again in the future! But was it really the best choice? What would have happened if the operator had shown the manipulatable consumer the red Jimmy Choo high heels, which cost $750 per pair rather than a more modest $200 per pair? Would the manipulatable consumer have clicked? Was this really the best action?

The learner will never know. It can only observe the outcome of the action it took, not the action it didn’t take.
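
A small simulation makes the asymmetry concrete (the products, click rates, and the epsilon-greedy rule below are my own illustrative choices, not Hsu’s): the operator only ever observes the outcome for the pair of shoes it chose to show, so unless it occasionally gambles on the other pair, it never learns what it is missing.

```python
import random

# "True" click rates for the two pieces of content the operator could show.
# These numbers are invented for illustration and are hidden from the learner.
true_click_rate = {"beige heels": 0.05, "red heels": 0.12}

counts = {arm: 0 for arm in true_click_rate}
estimates = {arm: 0.0 for arm in true_click_rate}
epsilon = 0.1  # fraction of visitors on whom we explore rather than exploit

random.seed(0)
for visitor in range(10_000):
    # Exploit the current best guess most of the time; explore occasionally.
    if random.random() < epsilon:
        shown = random.choice(list(true_click_rate))
    else:
        shown = max(estimates, key=estimates.get)

    # Bandit feedback: we observe a click (or not) ONLY for the shoe we showed.
    clicked = random.random() < true_click_rate[shown]

    # Update a running average of the click rate for the shown arm alone;
    # the counterfactual ("what if we'd shown the other pair?") stays unobserved.
    counts[shown] += 1
    estimates[shown] += (clicked - estimates[shown]) / counts[shown]

print(estimates)  # estimates only converge for arms we actually tried
```

With epsilon set to zero, the learner can lock onto whichever pair happened to earn the first lucky click and remain confidently ignorant of what the alternative would have earned.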

The literature refers to this dilemma as the trade-off between exploration and exploitation. To again cite Sutton and Barto:

One of the challenges that arise in reinforcement learning, and not in other kinds of learning, is the trade-off between exploration and exploitation. To obtain a lot of reward, a reinforcement learning agent must prefer actions that it has tried in the past and found to be effective in producing reward. But to discover such actions, it has to try actions that it has not selected before. The agent has to exploit what it already knows in order to obtain reward, but it also has to explore in order to make better action selections in the future. The dilemma is that neither exploration nor exploitation can be pursued exclusively without failing at the task. The agent must try a variety of actions and progressively favor those that appear to be best. On a stochastic task, each action must be tried many times to gain a reliable estimate of its expected reward.

There’s a lot to say about the exploration-exploitation tradeoff in machine learning (I recommend starting with the Sutton/Barto textbook). Now that I’ve introduced the concept, I’d like to pivot to consider where and why this is relevant in honest-to-goodness-real-life.

The nice thing about being an interactive machine learning algorithm as opposed to a human is that algorithms are executors, not designers or managers. They’re given a task (“optimize revenues for our shoe store!”) and get to try stuff and make mistakes and learn from feedback, but never have to go through the soul-searching agony of deciding what goal is worth achieving. Human designer overlords take care of that for them. And even the domain and range of possible data to learn from is constrained by technical conditions: designers make sure that it’s not all the data out there in the world that’s used to optimize performance on some task, but a tiny little baby subset (even if that tiny little baby entails 500 million examples) confined within a sphere of relevance.

Being a human is unfathomably more complicated.

Many choices we make benefit from the luxury of triviality and frequency. “Where should we go for dinner and what should we eat when we get there?” Exploitation can be a safe choice, in particular for creatures of habit. “Well, sweetgreen is around the corner, it’s fast and reliable. We could take the time to review other restaurants (which could lead to the most amazing culinary experience of our entire lives!) or we could not bother to make the effort, stick with what we know, and guarantee a good meal with our standard kale caesar salad, that parmesan crisp thing they put on the salad is really quite tasty…” It’s not a big deal if we make the wrong choice because, lo and behold, tomorrow is another day with another dinner! And if we explore something new, it’s possible the food will be just terrible and sometimes we’re really not up for the risk, or worse, the discomfort or shame of having to send something we don’t like back. And sometimes it’s fine to take the risk and we come to learn we really do love sweetbreads, not sweetgreens, and perhaps our whole diet shifts to some decadent 19th-century French paleo practice in the style of des Esseintes.

Arthur Zaidenberg’s depiction of des Esseintes, decadent hero extraordinaire, who embeds gems into a tortoise shell and has a perfume organ.

Other choices have higher stakes (or at least feel like they do) and easily lead to paralysis in the face of uncertainty. Working at a startup strengthens this muscle every day. Early on, founders are plagued by an unknown amount of unknown unknowns. We’d love to have a magic crystal ball that enables us to consider the future outcomes of a range of possible decisions, and always act in the way that guarantees future success. But the crystal balls don’t exist, and even if they did, we sometimes have so few prior assumptions to prime the pump that the crystal ball could only output an #ERROR message to indicate there’s just not enough there to forecast. As such, the only option available is to act and to learn from the data provided as a result of that action. To jumpstart empiricism, staking some claim and getting as comfortable as possible with the knowledge that the counterfactual will never be explored, and that each action taken shifts the playing field of possibility and probability and certainty slightly, calming minds and hearts. The core challenge startup leaders face is to enable the team to execute as if these conditions of uncertainty weren’t present, to provide a safe space for execution under the umbrella of risk and experiment. What’s fortunate, however, is that the goals of the enterprise are, if not entirely well-defined, at least circumscribed. Businesses exist to turn profits and that serves as a useful, if not always moral, constraint.

Big personal life decisions exhibit further variability because we but rarely know what to optimize for, and it can be incredibly counter-productive and harmful to either constrain ourselves too early or suffer from the psychological malaise of assuming there’s something wrong with us if we don’t have some master five-year plan.

This human condition is strange because we do need to set goals–it’s beneficial for us to consider second- and third-tier consequences, e.g., if our goal is to be healthy and fit, we should overcome the first-tier consequence of receiving pleasure when we drown our sorrows in a gallon of salted caramel ice cream–and yet it’s simply impossible for us to imagine the future accurately because, well, we overfit to our present and our past.

I’ll give a concrete example from my own experience. As I touched upon in a recent post about transitioning from academia to business, one reason why it’s so difficult to make a career change is that, while we never actually predict the future accurately, it’s easier to fear loss from a known predicament than to imagine gain from a foreign predicament.****** Concretely, when I was deciding whether to pursue a career in academia or the private sector in my fifth year of graduate school, I erroneously assumed that I was making a strict binary choice, that going into business meant forsaking a career teaching or publishing. As I was evaluating my decision, I never in my wildest dreams imagined that, a mere two years later, I would be invited to be an adjunct professor at the University of Calgary Faculty of Law, teaching about how new technologies were impacting traditional professional ethics. And I also never imagined that, as I gave more and more talks, I would subsequently be invited to deliver guest lectures at numerous business schools in North America. This path is not necessarily the right path for everyone, but it was and is the right path for me. In retrospect, I wish I’d constructed my decision differently, shifting my energy from fearing an unknown and unknowable future to paying attention to what energized me and made me happy and working to maximize the likelihood of such energizing moments occurring in my life. I still struggle to live this way, still fetishize what I think I should be wanting to do, still live with an undercurrent of anxiety that a choice, a foreclosure of possibility, may send me down an irreconcilably wrong path. It’s a shitty way to be, and something I’m actively working to overcome.

So what should our policy be? How can we reconcile this terrific trade-off between exploration and exploitation, between exposing ourselves to something radically new and honing a given skill, between learning from a stranger and spending more time with a loved one, between opening our mind to some new field and developing niche knowledge in a given domain, between jumping to a new company with new people and problems, and exercising our resilience and loyalty to a given team?

There is no right answer. We’re all wired differently. We all respond to challenges differently. We’re all motivated by different things.

Perhaps death is the best constraint we have to provide some guidance, some policy to choose between choice A and choice B. For we can project ourselves forward to our imagined deathbed, where we lie, alone, staring into the silent mirror of our hearts, and ask ourselves “Was my life meaningful?” But this imagined scene is not actually a future state: it is a present policy. It is a principle we can use to evaluate decisions, a principle that is useful because it abstracts us from the mire of emotions overly indexed towards near-term goals and provides us with perspective.

And what’s perhaps most miraculous is that, at every present, we can sit there and stare into the silent mirror of our hearts and look back on the choices we’ve made and say, “That is me.” It’s so hard going forward, and so easy going backward. The proportion of what may come wanes ever smaller than the portion of what has been, never quite converging until it’s too late, and we are complete.


*Thank you, internet, for enabling me to recall the date with such exacting precision! Using my memory, I would have deduced the approximate date by 1) remembering that Robert Colpitts, my boyfriend at the time (Godspeed to him today, as he participates in a sit-a-thon fundraiser for the Interdependence Project in New York City, a worthy cause), attended with me, recalling how fresh our relationship was (it had to have been really fresh because the frequency with which we attended professional events together subsequently declined), and working backwards from the start to find the date; 2) remembering what I wore! (crazy!!), namely a sheer pink sleeveless shirt, a pair of wide-legged white pants that landed just slightly above the ankle and therefore looked great with the pair of beige, heeled sandals with leather so stiff it gave me horrific blisters that made running less than pleasant for the rest of the week. So I’d recently purchased those when my brother and his girlfriend visited, which was in late February (or early March?) 2016; 3) remembering that afterwards we went to some fast food Indian joint nearby in the Flatiron district; the food was decent but not good enough to inspire me to return. So that would put us in the March-April 2016 range, which is close but not exactly April 18. That’s one week after my birthday (April 11); I remember Robert and I had a wonderful celebration on my birthday. I felt more deeply cared for than I had in any past birthdays. But I don’t remember this talk relative to the birthday celebration (I do remember sending the marketing email to announce the Fast Forward Labs report on text summarization on my birthday, when I worked for a half day and then met Robert at the nearby sweetgreen, where he ordered, as always (Robert is a creature of exploitation), the kale caesar salad, after which we walked together across the Brooklyn Bridge to my house, we loved walking together, we took many, many walks together, often at night after work at the Promenade, often in the morning, before work, at the Promenade, when there were so few people around, so few people awake). I must say, I find the process of reconstructing when an event took place using temporal landmarks much more rewarding than searching for “Dan Hsu Interactive Learning NYAI” on Google to find the exact date. But the search terms themselves reveal something equally interesting about our heuristic mnemonics, just as when we reconstruct some theme or topic to retrieve a former conversation on Slack.

**Crazy that WeWork recently bought Meetup, although interesting to think about how the two business models enable what I am slowly coming to see as the most important creative force in the universe, the combinatory potential of minds meeting productively, where productively means that each mind is not coming as a blank slate but as engaged in a project, an endeavor, where these endeavors can productively overlap and, guided by a Smithian invisible hand, create something new. The most interesting model we hope to work on soon at integrate.ai is one that optimizes groups in a multiplayer game experience (which we lovingly call the polyamorous online dating algorithm), so mapping personality and playing style affinities to dynamically allocate the best next player to an alliance. Social compatibility is a fascinating thing to optimize for, in particular when it goes beyond just assembling a pleasant cocktail party to pairing minds, skills, and temperaments to optimize the likelihood of creating something beautiful and new.

***Sutton has one of the most beautiful minds in the field and he is kind. He is a person to celebrate. I am grateful our paths have crossed and thoroughly enjoyed our conversation on the In Context podcast.

****Maura Grossman and Gordon Cormack have written countless articles about the benefits of using active learning for technology assisted review (TAR), or classifying documents for their relevance to a lawsuit. The tradeoffs they weigh relate to system performance (gauged by precision and recall on a document set) versus time, cost, and effort to achieve that performance.

*****Hsu did not mention Haan or Choo. I added some more color.

******Note this same dynamic occurs in our current fears about the future economy. We worry a hell of a lot more about the losses we will incur if artificial intelligence systems automate existing jobs than we celebrate the possibilities of new jobs and work that might become possible once these systems are in place. This is also due to the fact that the future we imagine tends to be an adaptation of what we know today, as delightfully illustrated in Jean-Marc Côté’s anachronistic cartoons of the year 2000. The cartoons show what happens when our imagination only changes one variable as opposed to a set of holistically interconnected variables.

19th-century cartoons show how we imagine technological innovations in isolation. That said, a hipster barber shop in Portland or Brooklyn could feature such a palimpsestic combination.

 

The featured image is a photograph I took of the sidewalk on State Street between Court and Clinton Streets in Brooklyn Heights. I presume a bird walked on wet concrete. Is that how those kinds of footprints are created? I may see those footprints again in the future, but not nearly as soon as I’d be able to were I not to have decided to move to Toronto in May. Now that I’ve thought about them, I may intentionally make the trip to Brooklyn next time I’m in New York (certainly before January 11, unless I die between now and then). I’ll have to seek out similar footprints in Toronto, or perhaps the snows of Alberta. 


On Mentorship

On Tuesday, together with four fellow eloquent and inspiring women, I addressed an audience of a hundred and fifty-odd (I think?) young women about becoming a woman leader in technology.

I recently passed a crucial threshold in my life. I am no longer primarily a seeker of mentors and role models, but primarily a mentor and role model for others. I will always have mentors. Forever. Wherever. In whatever guise they appear. I have a long way to go in my career, much to work on in my character. Three female mentors who currently inspire me are Maura Grossman (a kickass computer science professor at Waterloo who, as a former partner at Wachtell, effectively founded the practice of using machine learning to find relevant documents in lawsuits); Janet Bannister (a kickass venture capital partner at Real Ventures who has led multiple businesses and retains a kind, open energy); and Venerable Pannavati (a kickass Buddhist monk and former Christian pastor who infuses Metta Meditation with the slave spirit of Billie Holiday, man it’s incredible, and who practices a stance of radical compassion and forgiveness, to the point of transforming all victimhood–including rape–into grounded self-reliance).

I’m in my early thirties. I have no children, no little ones whose minds and emotions are shaped by my example. I hope someday I will. I live every day with the possibility that I may not. The point is, I’m not practiced in the art of living where every action matters, of living with the awareness that I’m impacting and affecting others, others looking to me for guidance, inspiration, example. And here, suddenly, I find myself in a position where others look up to me for inspiration every day. How should I act? How can I lead by example? How might I inspire? How must I fuel ambition, passion, curiosity, kindness?

What a marvelous gift. What a grave responsibility.

I ask myself, should I project strength, should I perform the traits we want all women to believe they can and should have, or should I expose vulnerability, expose all the suffering and doubts and questions and pain and anxiety I’ve dealt with–and continue to deal with, just tempered–on this meandering path to this current version of me?

There is an art to exposing vulnerability to inspire good. Acting from a place of insecurity or anxiety leads to nothing but chaos. I’ve done it a zillion times; it’s hurt a zillion and one. Having a little temper tantrum, gossiping, losing one’s cool in a way that poisons a mood, enforcing territory, displaying sham superiority, all this stuff sucks. Being aware of weaknesses and asking for help to compensate for them; relaying anecdotes or examples of lessons learned; apologizing; regretting; accepting a mess of a mind for the moment and trying one’s damnedest not to act on it out of awareness of the damage it may cause, all this stuff is great.

I believe in the healing power of identification and of embracing our humanity. Being a strong woman leader in tech need not only be about projecting strength and awesomeness. It can be about sharing what lies under the covers, sharing what hurt, sharing the doubts. Finding strength in the place of radical acceptance so we can all say, “Nevertheless, she persisted.”

This is me saying something at Tuesday’s event.

Many of the audience members reached out over LinkedIn after the event. Here is the message that touched me deepest.

It was great to meet you and hear you speak last night. Thanks for taking the time to share your experience. It is comforting to know that other women, especially ones as accomplished as those on the panel, have doubts about their capabilities too.

As sharing doubts can provide comfort and even inspiration, I figured I’d share some more. As I sat meditating this morning, I was suddenly overcome by the sense that I had a truth worth sharing. Not a propositional truth, but an emotional truth. Perhaps we call that wisdom. Here’s the story.

I had a very hard time in the last two years of my PhD. So hard, in fact, that I decided to leave Stanford for a bit and spend time at home with my family in Boston. It was a dark time. My mind was rattled, lost, unshackled, unfettered, unable. My mother had recommended for a while that I start volunteering, that I use the brute and basic reality of doing work for others as a starter, as yeast for my daily bread, to reset my neurons and work my way back to stability. Finally, I acquiesced. It was a way to pass the time. Like housekeeping.

I started working every day at the Women’s Lunch Place, a women’s-only soup kitchen located in the basement of an old church at the corner of Boylston and Arlington streets in Boston. Homeless and practically homeless women came there as a sanctuary from the streets, as a safe space after a night staving off unwanted sexual advances at a shelter, as a space for community or a space to be left alone in peace. Some were social: they painted and laughed together. Some were introverted, watching from the shadows. Some were sober. Some were drunk. I treated the Women’s Lunch Place like my job, coming in every morning to start at 7:00 am. The guests didn’t know I needed the kitchen as much as they did.

Except for one. Her name was Anne. When I asked her where she was from, she told me she was from the world.

Anne was one of the quiet, solitary guests at the kitchen. I’d never noticed her, as she hung out in a corner to the left of the kitchen, a friend of the shadows. One afternoon towards the end of my shift she approached me, touching my shoulder. I was startled.

The first thing Anne did was to thank me. She told me she’d been watching me for the better part of a month and was impressed by my diligence and leadership skills. She watched me chop onions, noticing how I gradually honed my knife skills, transferring the motions to a more graceful wrist and turning the knife upside down to scrape the chopped pieces into the huge soup pots without dulling the blade. She watched how new volunteers naturally flocked to me for directions on what to do next, watched how I fell into a place of leadership without knowing it, just as my mother had done before me. She watched how I cared, how deeply I cared for the guests and how I executed my work with integrity. I think she may have known I needed this more than they.

Then, out of the blue, without knowing anything about my history and my experiences beyond the actions she’d observed, she told me a story.

“Once upon a time,” started Anne from the World, “there was a medieval knight. Like all medieval knights, he was sent on a quest to pass through the forbidden castle and save the beautiful princess captured by the dragon. He set out, intrepid and brave. He arrived at the castle and found the central door all legends had instructed him to pass through to reach the dragon’s den, where lay captured the beautiful princess. He reached the door and went to turn the knob. It was locked. He pulled and pushed harder, without any luck. He tried and struggled for hours, for days, bloodying his hands, bruising his legs, wearing himself down to nothing. Eventually he gave up in despair, sunk with the awareness of his failure. He turned back for home, readying his emotions for shame. But after starting out, something inspired him to turn around and scan the castle one more time. His removed vantage point afforded a broader perspective of the castle, not just the local view of the door. And then he noticed something. The castle had more than just the central door, there were two others at the flanks. Crestfallen and doubting, he nevertheless mustered the courage to try another door, just in case. He approached, turned the knob, and the door opened, effortlessly.”

This wonderful gift I’ve been given to serve as a role model for other women did not come easily. It was not a clear path, not the stuff of trodden legends. It was a path filled with struggles and doubts, filled with moments of grueling uncertainty where I knew not what the future might hold, for the path I was tracing for myself was not one commonly traced before.

I’ve been fortunate to have had many people open doors for me, turning knobs on my behalf. My deepest wisdom to date is that we can’t know the future. All we can do is try our best, always, and trust that opportunities we’ve never considered will unfold. When I struggled hopelessly at the end of graduate school, I never imagined the life that has since unfolded. I was so scared of failing that I couldn’t embrace what it might mean to succeed. Finally, with the patient support of many friends and lovers, I gained the ability to step back and find a door that I could open with less effort and more joy.

Since I earned my PhD in 2012, I’ve spoken to many audiences about my experiences transitioning from literature to technology. I frequently start my talks with this story, with this gift from Anne from the World. God only knows why Anne knew it was the right story to tell. But she did. And her meme evolves, here as elsewhere. She is one of the most important mentors I’ve ever had, my Athena waiting in the shadows, a giver of wisdom and grace. I will forever be grateful I took the time to listen and look.

I can’t figure out where the featured image comes from, but it’s the most beautiful image of Telemachus, Odysseus’ son, on the web. The style looks like a fusion between Fragonard and Blake. I love the color palette and the forlorn look on the character’s face. A seemingly humble and unimportant man, Mentor was actually the goddess Athena, wisdom donning a surprising habit, showing up where we least expect it, if only we are open to attend. 

Degrees of Knowledge

That familiar discomfort of wanting to write but not feeling ready yet.*

(The default voice pops up in my brain: “Then don’t write! Be kind to yourself! Keep reading until you understand things fully enough to write something cogent and coherent, something worth reading.”

The second voice: “But you committed to doing this! To not write** is to fail.***”

The third voice: “Well gosh, I do find it a bit puerile to incorporate meta-thoughts on the process of writing so frequently in my posts, but laziness triumphs, and voilà there they come. Welcome back. Let’s turn it to our advantage one more time.”)

This time the courage to just do it came from the realization that “I don’t understand this yet” is interesting in itself. We all navigate the world with different degrees of knowledge about different topics. To follow Wilfred Sellars, most of the time we inhabit the manifest image, “the framework in terms of which man came to be aware of himself as man-in-the-world,” or, more broadly, the framework in terms of which we ordinarily observe and explain our world. We need the manifest image to get by, to engage with one another and not to live in a state of utter paralysis, questioning our every thought or experience as if we were being tricked by the evil genius Descartes introduces at the outset of his Meditations (the evil genius toppled by the clear and distinct force of the cogito, the I am, which, per Dan Dennett, actually had the reverse effect of fooling us into believing our consciousness is something different from what it actually is). Sellars contrasts the manifest image with the scientific image: “the scientific image presents itself as a rival image. From its point of view the manifest image on which it rests is an ‘inadequate’ but pragmatically useful likeness of a reality which first finds its adequate (in principle) likeness in the scientific image.” So we all live in this not quite reality, our ability to cooperate and coexist predicated pragmatically upon our shared not-quite-accurate truths. It’s a damn good thing the mess works so well, or we’d never get anything done.

Sellars has a lot to say about the relationship between the manifest and scientific images, how and where the two merge and diverge. In the rest of this post, I’m going to catalogue my gradual coming to not-yet-fully understanding the relationship between mathematical machine learning models and the hardware they run on. It’s spurring my curiosity, but I certainly don’t understand it yet. I would welcome readers’ input on what to read and to whom to talk to change my manifest image into one that’s slightly more scientific.

So, one common thing we hear these days (in particular given Nvidia’s now formidable marketing presence) is that graphics processing units (GPUs) and tensor processing units (TPUs) are a key hardware advance driving the current ubiquity in artificial intelligence (AI). I learned about GPUs for the first time about two years ago and wanted to understand why they made it so much faster to train deep neural networks, the algorithms behind many popular AI applications. I settled on an understanding that the linear algebra–operations we perform on vectors, strings of numbers oriented in a direction in an n-dimensional space–powering these applications is better executed on hardware of a parallel, matrix-like structure. That is to say, properties of the hardware were more like properties of the math: they performed so much more quickly than a linear central processing unit (CPU) because they didn’t have to squeeze a parallel computation into the straitjacket of a linear, gated flow of electrons. Tensors, objects that describe the relationships between vectors, as in Google’s hardware, are that much more closely aligned with the mathematical operations behind deep learning algorithms.

There are two levels of knowledge there:

  • Basic sales pitch: “remember, GPU = deep learning hardware; they make AI faster, and therefore make AI easier to use so more possible!”
  • Just above the basic sales pitch: “the mathematics behind deep learning is better represented by GPU or TPU hardware; that’s why they make AI faster, and therefore easier to use so more possible!”
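
A rough way to feel the second of these claims without touching a GPU (a toy comparison of my own, run on a CPU, not anyone’s benchmark): the same matrix multiplication written as an explicit triple loop, one multiply-accumulate at a time, versus as a single vectorized operation handed to optimized, parallel-friendly routines. GPUs and TPUs push the same principle much further, because the math behind deep learning is mostly enormous piles of independent multiply-accumulates.

```python
import time
import numpy as np

n = 120
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def matmul_loops(A, B):
    """One multiply-accumulate at a time, the way a single scalar processor 'thinks'."""
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

start = time.time()
C_slow = matmul_loops(A, B)
print("explicit loops:", round(time.time() - start, 3), "seconds")

start = time.time()
C_fast = A @ B  # the whole operation expressed at once
print("vectorized:    ", round(time.time() - start, 5), "seconds")

print("same answer:", np.allclose(C_slow, C_fast))
```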

At this first stage of knowledge, my mind reached a plateau where I assumed that the tensor structure was somehow intrinsically and essentially linked to the math in deep learning. My brain’s neurons and synapses had coalesced on some local minimum or maximum where the two concepts were linked and reinforced by talks I gave (which by design condense understanding into some quotable meme, in particular in the age of Twitter…and this requirement to condense certainly reinforces and reshapes how something is understood).

In time, I started to explore the strange world of quantum computing, starting afresh off the local plateau to try, again, to understand new claims that entangled qubits enable even faster execution of the math behind deep learning than the soddenly deterministic bits of C, G, and TPUs. As Ivan Deutsch explains in this article, the promise behind quantum computing is as follows:

In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve, can, in theory, be solved by a universal quantum computer in hours.

What’s salient here is that the inherent probabilism of quantum computers makes them even more fundamentally aligned with the true mathematics we’re representing with machine learning algorithms. TPUs, then, seem to exhibit a structure that best captures the mathematical operations of the algorithms, but exhibit the fatal flaw of being deterministic by essence: they’re still trafficking in the binary digits of 1s and 0s, even if they’re allocated in a different way. Quantum computing seems to bring back an analog computing paradigm, where we use aspects of physical phenomena to model the problem we’d like to solve. Quantum, of course, exhibits this special fragility where, should the balance of the system be disrupted, the probabilistic potential reverts down to the boring old determinism of 1s and 0s: a cat observed will be either dead or alive, per the harsh law of the excluded middle haunting our manifest image.

What, then, is the status of being of the math? I feel a risk of falling into Platonism, of assuming that a statement like “3 is prime” refers to some abstract entity, the number 3, that then gets realized in a lesser form as it is embodied on a CPU, GPU, or cup of coffee. It feels more cogent to me to endorse mathematical fictionalism, where mathematical statements like “3 is prime” tell a different type of truth than truths we tell about objects and people we can touch and love in our manifest world.****

My conclusion, then, is that radical creativity in machine learning–in any technology–may arise from our being able to abstract the formal mathematics from its substrate, to conceptually open up a liminal space where properties of equations have yet to take form. This is likely a lesson for our own identities, the freeing from necessity, from assumption, that enables us to come into the self we never thought we’d be.

I have a long way to go to understand this fully, and I’ll never understand it fully enough to contribute to the future of hardware R&D. But the world needs communicators, translators who eventually accept that close enough can be a place for empathy, and growth.


*This holds not only for writing, but for many types of doing, including creating a product. Agile methodologies help overcome the paralysis of uncertainty, the discomfort of not being ready yet. You commit to doing something, see how it works, see how people respond, see what you can do better next time. We’re always navigating various degrees of uncertainty, as Rich Sutton discussed on the In Context podcast. Sutton’s formalization of doing the best you can with the information you have available today towards some long-term goal, but learning at each step rather than waiting for the long-term result, is called temporal-difference learning.
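
In its simplest form, TD(0), the update can be written in one line (standard notation, not anything specific to the podcast): V(s) ← V(s) + α[r + γV(s′) - V(s)]. That is, nudge today’s estimate of a state’s value toward the reward just observed plus the current estimate of the next state’s value, rather than waiting for the final outcome.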

**Split infinitive intentional.

***Who’s keeping score?

****That’s not to say we can’t love numbers, as Euler’s Identity inspires enormous joy in me, or that we can’t love fictional characters, or that we can’t love misrepresentations of real people that we fabricate in our imaginations. I’ve fallen obsessively in love with 3 or 4 imaginary men this year, creations of my imagination loosely inspired by the real people I thought I loved.

The image comes from this site, which analyzes themes in films by Darren Aronofsky. Maximilian Cohen, the protagonist of Pi, sees mathematical patterns all over the place, which eventually drives him to put a drill into his head. Aronofsky has a penchant for angst. Others, like Richard Feynman, find delight in exploring mathematical regularities in the world around us. Soap bubbles, for example, offer incredible complexity, if we’re curious enough to look.

The arabesques of a soap bubble

 

The Secret Miracle

….And God made him die during the course of a hundred years and then He revived him and said: “How long have you been here?” “A day, or part of a day,” he replied.  – The Koran, II 261

The embryo of this post has gestated between my prefrontal cortex and limbic system for one year and eight months. It’s time.*

There seem to be two opposite axes from which we typically consider and evaluate character. Character as traits, Eigenschaften (see Musil), the markers of personality, virtue, and vice.

One extreme is to say that character is formed and reinforced through our daily actions and habits.** We are the actions we tend towards, the self not noun but verb, a precipitate we shape using the mysterious organ philosophers have historically called free will. Thoughts rise up and compete for attention,*** drawing and calling us to identify as a me, a me reinforced as our wrists rotate ever more naturally to wash morning coffee cups, a me shocked into being by an acute feeling of disgust, coiling and recoiling from some exogenous stimulus that drives home the need for a barrier between self and other, a me we can imagine looking back on from an imagined future-perfect perch to ask, like Ivan Ilyich, if we have indeed lived a life worth living. Character as daily habit. Character, as my grandfather used to say, as our ability to decide if today will be a good or a bad day when we first put our feet on the ground in the morning (Naturally, despite all the negative feelings and challenges, he always chose to make today a good day).

The other extreme is to say that true character is revealed in the foxhole. That traits aren’t revealed until they are tested. That, given our innate social nature, it’s relatively easy to seem one way when we float on, with, and in the waves of relative goodness embodied in a local culture (a family, a team, a company, a neighborhood, a community, perhaps a nation, hopefully a world, imagine a universe!), but that some truer nature will be shamelessly revealed when the going gets tough. This notion of character is the stuff of war movies. We like the hero who irrationally goes back to save one sheep at the expense of the flock when the napalm shit hits the fan. It seems we need these moments and myths to keep the tissue of social bonds intact. They support us with tears nudged and nourished by the sentimental cadences of John Williams soundtracks.

How my grandfather died convinced me that these two extremes are one.

On the evening of January 14, 2016, David William Hume (Bill, although it’s awesome to be part of a family with multiple David Humes!) was taken to a hospital near Pittsburgh. He’d suffered from heart issues for more than ten years and on that day the blood simply stopped pumping into his legs. He was rushed behind the doors of the emergency operating room, while my aunts, uncles, and grandmother waited in the silence and agony one comes to know in the limbo state upon hearing that a loved one has just had a heart attack, has just been shot, has just had a stroke, has just had something happen where time dilates to a standstill and, phenomenologically, the principles of physics linking time and space are halted in the pinnacle of love, of love towards another, of all else in the world put on hold until we learn whether the loved one will survive. (It may be that this experience of love’s directionality, of love at any distance, of our sense of self entangled in the existence and well-being of another, is the clearest experiential metaphor available to build our intuitions of quantum entanglement.****) My grandfather survived the operation. And the first thing he did was to call my grandmother and exclaim, with the glee and energy of a young boy, that he was alive, that he was delighted to be alive, and that he couldn’t have lived without her beside him, through 60 years of children crying and making pierogis and washing the floor and making sure my father didn’t squander his life at the hobby shop in Beaver Meadows, Pennsylvania and learning that Katie, me, here, writing, the first grandchild was born, my eyebrows already thick and black as they’ll remain my whole life until they start to grey and singing Sinatra off key and loving the Red Sox and being a role model of what it means to live a good life, what it means to be a patriarch for our family, yes he called her and said he did it, that he was so scared but that he survived and it was just the same as getting out of bed every morning and making a choice to be happy and have a good day.

She smiled, relieved.

A few minutes later, he died.

It’s like a swan song. His character distilled to its essence. I think about this moment often. It’s so perfectly representative of the man I knew and loved.

And when I first heard about my grandfather’s death, I couldn’t help but think of Borges’s masterful (but what by Borges is not masterful?) short story The Secret Miracle. Instead of explaining why, I bid you, reader, to find out for yourself.


 * Mark my words: in 50 years time, we will cherish the novels of Jessie Ferguson, perhaps the most talented novelist of our time. Jessie was in my cohort in the comparative literature department at Stanford. The depth of her intelligence, sensitivity, and imagination eclipsed us all. I stand in awe of her talents as Jinny to Rhoda in Virginia Woolf’s The Waves. At her wedding, she asked me to read aloud Paul Celan’s Corona. I could barely do it without crying, given how immensely beautiful this poem is. Tucked away in the Berkeley Hills, her wedding remains the most beautiful ceremony I’ve ever attended.

**My ex-boyfriends, those privileged few who’ve observed (with a mixture of loving acceptance and tepid horror) my sacrosanct morning routine, certainly know how deeply this resonates with me.

***Thomas Metzinger shares some wonderful thoughts about consciousness and self-consciousness in his interview with Sam Harris on the Waking Up podcast. My favorite part of this episode is Metzinger’s very cogent conclusion that, should an AI ever suffer like we humans do (which Joanna Bryson compellingly argues will not and should not occur), the most rational action it would then take would be to self-annihilate. Pace Bostrom and Musk, I find the idea that a truly intelligent being would choose non-existence over existence to be quite compelling, if only because I have first-hand experience with the need to allay acute suffering like anxiety immediately, whereas boredom, loneliness, and even sadness are emotional states within which I more comfortably abide.

****Many thanks to Yanbo Xue at D-Wave for first suggesting that metaphor. Jean-Luc Marion explores the subjective phenomenon of love in Le Phénomène Erotique; I don’t recall his mentioning quantum physics, although it’s been years since I read the book, but, based on conversations I had with him years ago at the University of Chicago, I predict this would be a parallel he’d be intrigued to explore.

My last dance with my grandfather, the late David William Hume. Snuff, as we lovingly called him, was never more at home than on the dance floor, even though he couldn’t sing and couldn’t dance. He used to do this cute knees-back-and-forth dance. He loved jazz standards, and would send me mix CDs he burned when I lived in Leipzig, Germany. In his 80s, he embarrassed the hell out of my grandmother, his wife of 60 years, by joining the local Dancing with the Stars chapter and taking Zumba lessons. He lived. He lived fully and with great integrity. 

When Writing Fails

This post is for writers.

I take that back.

This post shares my experience as a writer to empathize with anyone working to create something from nothing, to break down the density of an intuition into a communicable sequence of words and thoughts, to digitize, which Daniel Dennett eloquently defines as “obliging continuous phenomena to sort themselves out into discontinuous, all-or-nothing phenomena” (I’m reading and very much enjoying From Bacteria to Bach and Back: The Evolution of Minds), to perform an act of judgment that eliminates other possibilities, foreclosing other forms to create its own form, Shiva and Vishnu forever linked in cycles of destruction, creation, and stability. That is to say, this post shares my experience as a writer as metonymy for our human experience as finite beings living finite lives.

shiva
The Nataraja, Shiva in his form as the cosmic ecstatic dancer, inspires trusting calm in me.

Earlier this morning, I started a post entitled Competence without Comprehension. I’ll publish it eventually, hopefully next week. It will feature a critique of explainable artificial intelligence (AI), efforts in the computer science and policy communities to develop AI systems that make sense for human users. I have tons to say here. I think it’s ok for systems to be competent without being comprehensible (my language is inspired by Dan Dennett, who thinks consciousness is an illusion) because I think there are a lot of cognitive competencies we exhibit without comprehension (ranging from ways of transforming our habits or even becoming believers in some religious system by going through the motions, as I wrote about in my dissertation, to training students in operations like addition and subtraction before they learn the theoretical underpinnings of abstract algebra – which many people never even learn!). I think the word why is a complex word that we use in different ways: Aristotle thought there were four types of causes and, again following Dennett, we can distinguish between why as “how come” (what input data created this output result?) and why as “what for” (what action will be taken from this output result?). Aristotle’s causal theory was largely toppled during the scientific revolution and then again by Sartre in Existentialism Is a Humanism (where he shows we humans exist in a very different way from paper knives, which are an outdated technology!), but I think there’s value in resurrecting his categories to think about machine learning pipelines and explainable AI. I think there are different ethical implications for using AI in different settings, and I think there’s something crucial about social norms – how we expect humans to behave towards other humans – that is driving widespread interest in this topic and that, when analyzed, can help us understand what may (or may not!) be unique about the technology in its use in society.

In short, my blog post was a mess. I was trying to do too much at once; there were multiple lines of synthetic thought that needed to be teased out to make sense to anyone, including myself. I will understand my position better once I devote the time and patience to exploring it, formalizing it, unpacking ideas that currently sit inchoate like bile. What I started today contains at least five different blog posts’ worth of material, on topics that many other people are thinking about, so it could have some impact in the social circles that are meaningful for me and my identity. This is crucial: I care about getting this one right, because I can imagine the potential readers, or at least the hoped-for readers. That said, upon writing this, I can also step back and remember that the approval I think I’m seeking rarely matters in the end. I always feel immense gratitude when anyone — a perfect stranger — reads my work, and the most gratitude when someone feels inspired to write or grow herself.

So I allowed myself to pivot from seeking approval to instilling inspiration. To manifesting the courage to publish whatever – whatever came out from the primordial sludge of my being, the stream of consciousness that is the dribble of expression, ideas without form, but ideas nonetheless, the raw me sitting here trying my best on a Sunday afternoon in August, imagining the negative response of anyone who would bother to read this, but also knowing the charity I hold within my own heart for consistency, habit, effort, exposure, courage to display what’s weakest and most vulnerable to the public eye.

I see my experience this morning as metonymy for our experience as finite beings living finite lives because of the anxiety of choice. Each word written conditions the space of possibility of what can, reasonably, come next (Skip-Thought vectors rely on this to function). The best writing is not about everything but is about something, just as many of the happiest and most successful people become that way by accepting the focus required to create and achieve, focus that shuts doors — or at least Japanese screens — on unrealized selves. I find the burden of identity terrific. My being resists the violence of definition and prefers to flit from self to self in the affordance of friendships, histories, and contexts. It causes anxiety, confusion, false starts, but also a richness I’m loath to part with. It’s the give and take between creation and destruction, Shiva dancing joyfully in the heavens, her smile peering ironic around the corners of our hearts like the aura of the eclipse.

The featured image represents Tim Jenison’s recreation of Vermeer’s The Music Lesson. Tim’s Vermeer is a fantastic documentary about Jenison’s quest to confirm his theory of Vermeer’s optical painting technique, which worked somewhat similarly to a camera (refracting light to create a paint-by-number-like format for the artist). It’s a wonderful film that makes us question our assumptions about artistic genius and creativity. I firmly believe creativity stems from constraint, and that Romantic ideas of genius miss the mark in shaping cultural understandings of creativity. This morning, I lacked the constraints required to write. 

The Unreasonable Effectiveness of Proxies*

Imagine it’s December 26. You’re right smack in the midst of your Boxing Day hangover, feeling bloated and headachy and emotionally off from the holiday season’s interminable festivities. You forced yourself to eat Aunt Mary’s insipid green bean casserole out of politeness and put one too many shots of dark rum in your eggnog. The chastising power of the prefrontal cortex superego is in full swing: you start pondering New Year’s Resolutions.

Lose weight! Don’t drink red wine for a year! Stop eating gluten, dairy, sugar, processed foods, high-fructose corn syrup–just stop eating everything except kale, kefir, and kimchi! Meditate daily! Go be a free spirit in Kerala! Take up kickboxing! Drink kombucha and vinegar! Eat only purple foods!

Right. Check.

(5:30 pm comes along. Dad’s offering single malt scotch. Sure, sure, just a bit…neat, please…)**

We’re all familiar with how hard it is to set and stick to resolutions. That’s because our brains have little instant gratification monkeys flitting around on dopamine highs in constant guerrilla warfare against the Rational Decision Maker in the prefrontal cortex (Tim Urban’s TEDtalk on procrastination is a complete joy). It’s no use beating ourselves up over a physiological fact. The error of Western culture, inherited from Catholicism, is to stigmatize physiology as guilt, transubstantiating chemical processes into vehicles of self-deprecation with the same miraculous power used to transform just-about-cardboard wafers into the living body of Christ. Eastern mindsets, like those proselytized by Buddha, are much more empowering and pragmatic: if we understand our thoughts and emotions to be senses like sight, hearing, touch, taste, smell, we can then dissociate self from thoughts. Our feelings become nothing but indices of a situation, organs to sense a misalignment between our values–etched into our brains as a set of habitual synaptic pathways–and the present situation around us. We can watch them come in, let them sit there and fester, and let them gradually fade before we do something we regret. Like waiting out the internal agony until the baby in front of you in 27G on your overseas flight to Sydney stops crying.

Resolutions are so hard to keep because we frame them the wrong way. We often set big goals, things like, “in 2017 I’ll lose 30 pounds” or “in 2017 I’ll write a book.” But a little tweak to the framework can promote radically higher chances for success. We have to transform a long-term, big, hard-to-achieve goal into a short-term, tiny, easy-to-achieve action that is correlated with that big goal. So “lose weight” becomes “eat an egg rather than cereal for breakfast.” “Write a book” becomes “sit down and write for 30 minutes each day.” “Master Mandarin Chinese” becomes “practice your characters for 15 minutes after you get home from work.” The big, scary, hard-to-achieve goal that plagues our consciousness becomes a small, friendly, easy-to-achieve action that provides us with a little burst of accomplishment and satisfaction. One day we wake up and notice we’ve transformed.

It’s doubtful that the art of finding a proxy for something that is hard to achieve or know is the secret of the universe. But it may well be the secret to adapting the universe to our measly human capabilities, both at the individual (transform me!) and collective (transform my business!) level. And the power extends beyond self-help: it’s present in the history of mathematics, contemporary machine learning, and contemporary marketing techniques known as growth hacking.

Ut unum ad unum, sic omnia ad omnia: Archimedes, Cavalieri, and Calculus

Many people are scared of math. Symbols are scary: they’re a type of language and it takes time and effort to learn what they mean. But most of the time people struggle with math because they were badly taught. There’s no clearer example of this than calculus, where kids memorize equations stating that something is so instead of conceptually grasping why it is so.

The core technique behind calculus–and I admit this just scratches the surface–is to reduce something that’s hard to know down to something that’s easy to know. Slope is something we learn in grade school: change in y divided by change in x, how steep a line is. Taking the derivative is doing this same process but on a twisting, turning, meandering curve rather than just a line. This becomes hard because we add another dimension to the problem: with a line, the slope is the same no matter what x we put in; with a curve, the slope changes with our x input value, like a mountain range undulating from mesa to vertical extreme cliff. What we do in differential calculus is find a way to make a line serve as a proxy for a curve, to turn something we don’t know how to do into something we know how to do. So we take magnifying glasses with ever-increasing potency and zoom in until our topsy-turvy meandering curve becomes nothing but a straight line; we find the slope there; and we repeat that trick at every point along our curve. The big conceptual breakthrough Newton and Leibniz made in the 17th century was to turn this proxy process into something continuous and infinite: to cross a conceptual chasm between a very, very small number and a number so small that it was effectively zero. Substituting honest-to-goodness zero for close-enough-for-government-work zero did not go without strong criticism from the likes of George Berkeley, a prominent philosopher of the period who argued that it’s impossible for us to say anything about the real world because we can only know how our minds filter the real world. But its pragmatic power to articulate the mechanics of the celestial motions overcame such conceptual trifles.***
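
Here’s a rough sketch of that zooming-in, in Python, with an arbitrary made-up curve; as the interval h shrinks, the slope of the tiny straight line becomes an ever better proxy for the slope of the curve at that point:

```python
# Approximate the slope of a curve at a point by the slope of a tiny straight line (a secant).
def f(x):
    return x ** 3 - 2 * x  # an arbitrary twisting, turning curve

def slope_at(x, h):
    return (f(x + h) - f(x)) / h  # change in y divided by change in x, over a tiny interval

for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, slope_at(2.0, h))  # creeps toward the true derivative at x = 2, which is 10
```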

riemann sum
Riemann sums use the same proxy method to find the area under a curve. One replaces that hard task with the easier task of summing up the areas of rectangles that approximate the area under the curve.
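
In code, that substitution of rectangles for the curve might look something like this sketch (made-up function and interval):

```python
# Approximate the area under f between a and b by summing the areas of n thin rectangles.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

print(riemann_sum(lambda x: x ** 2, 0.0, 1.0, n=100_000))  # close to the exact answer, 1/3
```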

This type of thinking, however, did not start in the 17th century. Greek mathematicians like Archimedes (famous for screaming Eureka! (I’ve found it!) and running around naked like a madman when he noticed that the water level in the bathtub rose in proportion to the volume of his submerged body) used its predecessor, the method of exhaustion, to find the area of a shape like a circle or a blob by inscribing within it (and sometimes around it) a series of easier-to-measure shapes like polygons, getting an approximation of the area by proxy to the polygon.

exhaustion
The method of exhaustion in ancient Greek math.
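
As a sketch of the same spirit in code (my own toy construction, not Archimedes’ actual procedure), we can inscribe regular polygons with more and more sides inside a unit circle and watch their areas creep toward π:

```python
import math

# Area of a regular n-sided polygon inscribed in a circle of radius 1:
# n identical triangles, each with area (1/2) * sin(2*pi/n).
def inscribed_polygon_area(n_sides):
    return 0.5 * n_sides * math.sin(2 * math.pi / n_sides)

for n in (6, 24, 96, 1000):
    print(n, inscribed_polygon_area(n))  # approaches pi (about 3.14159) from below
```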

It’s challenging for us today to reimagine what Greek geometry was like because we’re steeped in a post-Cartesian mindset, where there’s an equivalence between algebraic expressions and geometric shapes. The Greeks thought about shapes as shapes. The math was tactile, physical, tangible. This mindset led to interesting work in the Renaissance like Bonaventura Cavalieri’s method of indivisibles, which showed that the areas of two shapes were equivalent (often a hard thing to show) by cutting the shapes into parts and showing that each of the parts was equivalent (an easier thing to show). He turned the problem of finding equivalence into an analogy, ut unum ad unum, sic omnia ad omnia–as the one is to the one, so all are to all–substituting the part for the whole to turn this into a tractable problem. His work paved the way for what would eventually become the calculus.****

Supervised Machine Learning for Dummies

My dear friend Moises Goldszmidt, currently Principal Research Scientist at Apple and a badass Jazz musician, once helped me understand that supervised machine learning is quite similar.

Again, at an admittedly simplified level, machine learning can be divided into two camps. Unsupervised machine learning is using computers to find patterns in data and sort different data into clusters. When most people hear the words machine learning, they think about unsupervised learning: computers automagically finding patterns, “actionable insights,” in data that would evade detection by measly human minds. In fact, unsupervised learning is an active area of research in the upper echelons of the machine learning community. It can be valuable for exploratory data analysis, but only infrequently powers the products that are making news headlines. The real hero of the present day is supervised learning.

I like to think about supervised learning as follows:

Screen Shot 2017-07-02 at 9.51.14 AM

Let’s take a simple example. We’re moving, and want to know how much to put our house on the market for. We’re not real estate brokers, so we’re not great at measuring prices. But we do have a tape measure, so we are great at measuring the square footage of our house. Let’s say we go look through a few years of real estate records, and find a bunch of data points about how much houses go for and what their square footage is. We also have data about location, amenities like an in-house washer and dryer, and whether the house has a big back yard. We notice a lot of variation in prices for houses with different-sized back yards, but pretty consistent correlations between square footage and price. Eureka! we say, and run around the neighbourhood naked, horrifying our neighbours! We can just plot the various data points of square footage versus price, measure our square footage (we do have our handy tape measure), and then put that into a function that outputs a reasonable price!

This technique is called linear regression. And it’s the basis for many data science and machine learning techniques.
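
Here’s a minimal sketch of the house example, assuming numpy and entirely made-up records:

```python
import numpy as np

# Hypothetical real estate records: square footage and sale price.
sqft = np.array([850, 1200, 1500, 1800, 2400], dtype=float)
price = np.array([200_000, 260_000, 310_000, 365_000, 480_000], dtype=float)

# Fit the best straight line (ordinary least squares): price ≈ slope * sqft + intercept.
slope, intercept = np.polyfit(sqft, price, deg=1)

our_sqft = 1650.0  # the one thing our tape measure tells us
print(slope * our_sqft + intercept)  # a reasonable asking price
```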

Screen Shot 2017-07-02 at 9.57.31 AM

The big breakthroughs in deep learning over the past couple of years (note, these algorithms existed for a while, but they are now working thanks to more plentiful and cheaper data, faster hardware, and some very smart algorithmic tweaks) are extensions of this core principle, but they add the following two capabilities (which are significant):

  • Instead of humans hand-selecting a few simple features (like square footage or having a washer/dryer), computers transform rich data into a vector of numbers and find all sorts of features that might evade our measly human minds
  • Instead of only being able to model phenomena using simple straight lines, deep learning neural networks can model phenomena using topsy-turvy-twisty functions, which means they can capture richer phenomena like the environment around a self-driving car (see the sketch after this list)
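
And here’s a toy sketch of that second point, assuming scikit-learn and reusing the made-up housing numbers from above; swapping the straight line for a small neural network lets the fitted function bend:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The same hypothetical records as before, now fed to a (tiny) neural network.
X = np.array([[850], [1200], [1500], [1800], [2400]], dtype=float)
y = np.array([200_000, 260_000, 310_000, 365_000, 480_000], dtype=float)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X / 1000.0, y / 1000.0)        # crude scaling keeps the optimizer happy
print(model.predict([[1.65]]) * 1000.0)  # price estimate for 1,650 square feet
```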

At its root, however, even deep learning is about using mathematics to identify a good proxy to represent a more complex phenomenon. What’s interesting is that this teaches us something about the representational power of language: we barter in proxies at every moment of every day, crystallizing the complexities of the world into little tokens, words, that we use to exchange our experience with others. These tokens mingle and merge to create new tokens, new levels of abstraction, rising from the dust from which we’ve come and to which we will return. Our castles in the sky. The quixotic figures of our imagination. The characters we fall in love with in books, not giving a damn that they never existed and never will. And yet, children learn that dogs are dogs and cats are cats after only seeing a few examples; computers, at least today, need 50,000 pictures of dogs to identify the right combinations of features that serve as a decent proxy for the real thing. Reducing that quantity is an active area of research.

Growth Hacking: 10 Friends in 14 Days

I’ve spent the last month in my new role at integrate.ai talking with CEOs and innovation leaders at large B2C businesses across North America. We’re in that miraculously fun, pre product-market fit phase of startup life where we have to make sure we are building a product that will actually solve a real, impactful, valuable business problem. The possibilities are broad and we’re managing more unknown unknowns than found in a Donald Rumsfeld speech (hat tip to Keith Palumbo of Cylance for the phrase). But we’re starting to see a pattern:

  • B2C businesses have traditionally focused on products, not customers. Analytics have been geared towards counting how many widgets were sold. They can track how something moves across a supply chain, but cannot track who their customers are, where they show up, and when. They can no longer compete on just product. They want to become customer centric.
  • All businesses are sustained by having great customers. Great means having loyalty and alignment with brand and having a high lifetime value. They buy, they buy more, they don’t stop buying, and there’s a positive association when they refer a brand to others, particularly others who behave like them.
  • Wanting great customers is not a good technical analytics problem. It’s too fuzzy. So we have to find a way to transform a big objective into a small proxy, and focus energy and efforts on doing stuff in that small proxy window. Not losing weight, but eating an egg instead of pancakes for breakfast every morning.

Silicon Valley giants like Facebook call this type of thinking growth hacking: finding some local action you can optimize for that is a leading indicator of a long-term, larger strategic goal. The classic example from Facebook (which some rumour to be apocryphal, but it’s awesome as an example) was when the growth team realized that the best way to achieve their large, hard-to-achieve metric of having as many daily active users as possible was to reduce it to a smaller, easy-to-achieve metric of getting new users up to 10 friends in their first 14 days. 10 was the threshold for people’s ability to appreciate the social value of the site, a quantity of connections sufficient to drive dopamine hits that keep users coming back to the site.***** These techniques are rampant across Silicon Valley, with Netflix optimizing site layout and communications when new users join given correlations with potential churn rates down the line and Eventbrite making small product tweaks to help users understand they can use the tool to organize as well as attend events. The real power they unlock is similar to that of compound interest in finance: a small investment in your twenties can lead to massive returns after retirement.
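
As a sketch, tracking such a proxy metric is little more than a cohort calculation; the column names and numbers here are entirely hypothetical, assuming pandas:

```python
import pandas as pd

# Hypothetical signup records: when each new user joined and when (if ever) they reached 10 friends.
users = pd.DataFrame({
    "user_id": [1, 2, 3],
    "signup": pd.to_datetime(["2017-01-02", "2017-01-03", "2017-01-05"]),
    "tenth_friend_at": pd.to_datetime(["2017-01-09", None, "2017-01-30"]),
})

days_to_ten = (users["tenth_friend_at"] - users["signup"]).dt.days
activated = days_to_ten <= 14  # users who never got there count as not activated
print(activated.mean())        # the small, local metric worth optimizing week over week
```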

Our goal at integrate.ai is to bring this thinking into traditional enterprises via a SaaS platform, not a consulting services solution. And to make that happen, we’re also scouting small, local wins that we believe will be proxies for our long-term success.

Conclusion

The spirit of this post is somewhat similar to a previous post about artifice as realism. There, I surveyed examples of situations where artifice leads to a deeper appreciation of some real phenomenon, like when Mendel created artificial constraints to illuminate the underlying laws of genetics. Proxies aren’t artifice, they’re parts that substitute for wholes, but enable us to understand (and manipulate) wholes in ways that would otherwise be impossible. Doorways into potential. A shift in how we view problems that makes them tractable for us, and can lead to absolutely transformative results. This takes humility. The humility of analysis. The practice of accepting the unreasonable effectiveness of the simple.


*Shout out to the amazing Andrej Karpathy, who authored The Unreasonable Effectiveness of Recurrent Neural Networks and Deep Reinforcement Learning: Pong from Pixels, two of the best blog posts about AI available.

**There’s no dearth of self-help books about resolutions and self-transformation, but most of them are too cloying to be palatable. Nudge by Cass Sunstein and Richard Thaler is a rational exception.

***The philosopher Thomas Hobbes was very resistant to some of the formal developments in 17th-century mathematics. He insisted that we be able to visualize geometric objects in our minds. He was relegated to the dustbins of mathematical history, but did cleverly apply Euclidean logic to the Leviathan.

****Leibniz and Newton were rivals in discovering the calculus. One of my favourite anecdotes (potentially apocryphal?) about the two geniuses is that they communicated their nearly simultaneous discovery of the Fundamental Theorem of Calculus–which links derivatives to integrals–in Latin anagrams! Jesus!

*****Nir Eyal is the most prominent writer I know of on behavioural design and habit in products. And he’s a great guy!

The featured image is from the Archimedes Palimpsest, one of the most exciting and beautiful books in the world. It is a Byzantine prayerbook–or euchologion–written on parchment that originally contained mathematical treatises by the Greek mathematician Archimedes. A palimpsest, for reference, is a manuscript or piece of writing material on which the original writing has been effaced to make room for later writing but of which traces remain. As portions of Archimedes’ original text are very hard to read, researchers recently took the palimpsest to the SLAC National Accelerator Laboratory at Stanford and threw all sorts of particles at it really fast to see if they might shine light on hard-to-decipher passages. What they found had the potential to change our understanding of the history of math and the development of calculus!

Objet Trouvé

A la pointe de la découverte, de l’instant où pour les premiers navigateurs une nouvelle terre fut en vue à celui où ils mirent le pied sur la côte, de l’instant où tel savant put se convaincre qu’il venait d’être témoin d’un phénomène jusqu’à lui inconnu à celui où il commença à mesurer la portée de son observation, tout sentiment de durée aboli dans l’enivrement de la chance, un très fin pinceau de feu dégage ou parfait comme rien autre le sens de la vie. – André Breton, 1934

(At the point of discovery — from the moment when a new land comes into the field of vision for a group of explorers to that when their feet first touch the shore — from the moment when a certain savant convinces herself that she’s observed a previously unknown phenomenon to that when she begins to measure her observation’s significance — the intoxication of luck abolishing all notions of time, a very thin paintbrush* unlocks, or perfects, like nothing else, the meaning of life.)

I have a few blog post ideas brewing but had lost my weekly writing momentum in the process of moving from New York City to Toronto for my new role at integrate.ai. It’s incredible how quickly a habit atrophies: the little monkey procrastinator** in my mind has found many reasons to dissuade me from writing these past two weeks. I already feel my mind intaking the world differently, without the same synthetic gumption. Anxiety creeps in. Enter Act of Will stage left, sauntering or skipping or prancing or curtseying or however you’d like to imagine her. A bias towards action, yes, yes indeed, and all those little procrastination monkeys will dissipate like tomorrow’s bug bites, smeared with pink calamine lotion bought on sale at Shoppers Drug Mart.

But what to write about? That is (always) the question.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. A swan stretches her neck to bite little nearby ducks as the lady with her ragged curly hair — your hair at 60 dear Kathryn — chuckles in delight, arms akimbo and crumbs askance, by the docks on the shore. The Asian rowers don rainbow windbreakers, lined up in a row like the refracted waves of a prism (seriously!). What do I write about? Am I ready to write about quantum computing and Georg Cantor (god not yet!), about why so many people reject consequentialist ethics for AI (closer, and Weber must be able to help), about the talk I recently gave defining AI, ML, Deep Learning, and NLP (I could do this today but the little monkey is still too powerful at the moment), about the pesky health issues I’m struggling with at the moment (too personal for prime time, and I’ll simply never be that kind of blogger)? About the move? About the massive changes in my life? About how emotionally charged it can be to start again, to start again how many times, to reinvent myself again, in this lifestyle I can’t help but adopt as I can’t help but be the self I reinforce through my choices, year after year, choices, I hope, oriented to further the exploration into the meaning of life?

Associative Memory got a bit sidetracked by the ever loquacious Stream of Consciousness. Please do return!

Take 2.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. Searching for something to write about. Well, what about drawing upon the objet trouvé technique the ever-inspiring Barbara Maria Stafford taught us in Art History 101 at the University of Chicago? According to Wikipedia, objet trouvé refers to “art created from undisguised, but often modified, objects or products that are not normally considered materials from which art is made, often because they already have a non-art function.”*** Think Marcel Duchamp’s ready-made objects, which I featured in a previous post and will feature again here.

Duchamp.-Bicycle-Wheel-395x395
One of Marcel Duchamp’s ready-made artworks.

But that’s not how I remember it. Stafford presented the objet trouvé as a psychological technique to open our attention to the world around us, helping our minds cast wide, porous, technicolor nets to catch impressions we’d otherwise miss when the wardens of the pre-frontal cortex confine our mental energy into the prison cells of goals and tasks, confine our handmaidens under the iron-clad chastity belt of action. (Enter Laertes stage left, peeking through only to be quickly pulled back by Estragon’s cane.)

You see, moving to a new place, having all these hours alone, opens the world anew to meaning. We become explorers having just discovered a new land and wait suspended in the moment before our feet graze the unknown shore. The meaning of connections left behind simmers poignantly to tears, tears shed alone, settling into gratitude for time past and time to come. Forever Young coming on the radio surreptitiously in the car. Grandpa reading it like a poem in his 80s, his wisdom fierce and bold in his unrelenting kindness. His buoyancy. His optimism. His example.

Take 3.

Enter Associative Memory stage right. It’s 8:22 am. I’m on a run. Fog partially conceals CN tower. What do I see? What does the opened net of my consciousness catch? This.

water
Mon objet trouvé

It was more a sound than a sight. The repetition of the moving tide, always already**** there, Earth’s steady heartbeat in its quantum entanglement with the moon. The water rising and falling, lapping the shores with grace and ease under the foggy morning sky. Stammering, after all, being the native eloquence of fog people. The sodden sultriness of Egdon Heath alive in every passing wave, Eustacia’s imagination too large and bold for this world, a destroyer of men like Nataraja in her eternal dance.

Next, my mind saw this (as featured above):

vide

And, coincidentally, the woman on the France Culture podcast I was listening to as I ran uttered the phrase épuisée par le vide. 

Exhausted by nothingness. The timing could not have been more perfect.

It’s in these moments of loneliness and transition that very thin paintbrushes unlock the meaning of life. Our attention freed from the shackles of associations and time, left alone to wander labyrinths of impressions, passive, vulnerable, seeking. The only goals to be as kind as possible to others, to accept without judgment, to watch as the story unfolds.


* I don’t know how to translate pinceau de feu, so decided to go with just paintbrush. Welcome a more accurate translation!

** Hat tip to Tim Urban’s hilarious TED talk. And also, etymology lovers will love that cras means tomorrow in Latin, so procrastinate is the act of deferring to tomorrow. And also, hat tip to David Foster Wallace (somewhat followed by Michael Chabon, just to a much lesser degree) for inspiring me to put random thoughts that interrupt me mid sentence into blog post footnotes.

*** Hyperlinks in the quotation are the original.

**** If you haven’t read Heidegger and his followers, this phrase won’t be as familiar and potentially annoying to you as it is to me. Continental philosophers use it to refer to what Sellars would call the “myth of the given,” the phenomenological context we cannot help but be born into, because we use language that our parents and those around us have used before and this language shapes how we see what’s around us and we have to do a lot of work to get out of it and eat the world raw.

Commonplaces

My standard stamina stunted, I offer only a collection of the most beautiful and striking encounters I had this week. To elevate the stature of what would otherwise just be a list (newsletters are indeed merely curation, indexing valuable only because the web is too vast), I’ll compare what follows to an early-modern commonplace book, the then popular practice of collecting quotations and sayings useful for future writing or speeches. True commonplaces, loci communes, were affiliated with general rules of thumb or tokens of wisdom; they played a philosophical role to illustrate the morals of stories in classical rhetoric. The likes of Francis Bacon and John Milton kept commonplace books. The most interesting contemporary manifestation of the practice is Maria Popova’s always delightful Brain Pickings. Popova, moreover, inspires the first selection in today’s list.

What delights me the most in compiling this list is that I can’t help but do so. There is much change afoot, and I wanted to grant myself the luxury of taking a weekend off. But I couldn’t. My mind will remain restless until I write. It’s a wonderful sign, these handcuffs of habit.

Without further ado, I present a collection of things that were meaningful to me this week:

Euclid alone has looked on beauty bare

Monday evening, my dear friend Alfred Lee and I walked 45 minutes to Pioneerworks in Red Hook to attend The Universe in Verse. It was packed: the line curved around the corner and slithered down Van Brunt street towards the water and, lemmings, we rushed to get two slices of pizza to stave off our hunger before the show. It was a momentous gathering, so touching to see over 800 people gathered to listen to people read poetry about science! Maria Popova introduced each reader and spoke like she writes, eloquence unparalleled and harkening the encyclopedic knowledge of former days. It was a celebration of feminism, of the will to knowledge against the shackles of tyranny, of minds inquisitive, uniting in the observation of nature always ineffable yet craftily crystallized under the constraints of form.

ednastvincentmillay
A portrait of Edna St. Vincent Millay

My favorite poems were those by Adrienne Rich and this sonnet by the very beautiful Edna St. Vincent Millay.

Euclid alone has looked on Beauty bare.
Let all who prate of Beauty hold their peace,
And lay them prone upon the earth and cease
To ponder on themselves, the while they stare
At nothing, intricately drawn nowhere
In shapes of shifting lineage; let geese
Gabble and hiss, but heroes seek release
From dusty bondage into luminous air.
O blinding hour, O holy, terrible day,
When first the shaft into his vision shone
Of light anatomized! Euclid alone
Has looked on Beauty bare. Fortunate they
Who, though once only and then but far away,
Have heard her massive sandal set on stone.

A glutton for abstraction and the traps of immutability and stasis, I found this poem gripping. I cannot help but imagine a sandal etched in white marble at the end, the toes of Minerva immutable, inexorable, ineluctable in the hallways of the Louvre, the memories of a younger self thirsting to understand our world. The nostalgia ever present and awaiting. Euclid declaring with such force that for him, σημεῖον sēmeion, a sign or mark, meant a point, that which has no parts. And from this point he built a world of beauty bare.

Nutshell

I’m reading McEwan’s latest, Nutshell. It’s marvelous. A contemporary retelling of Hamlet, where the doubting antihero is an unborn baby observing Gertrude and Claudius’s (Trudy and Claude, in the retelling) murderous plot from his mother’s womb.

There are breathtaking moments:

“But here’s life’s most limiting truth–it’s always now, always here, never then and there.”

“There was a poem you recited then, too good for one of yours, I think you’d be the first to concede. Short, dense, bitter to the point of resignation, difficult to understand. The sort that hits you, hurts you, before you’ve followed exactly what was said…The person the poem addressed I think of as the world I’m about to meet. Already, I love it too hard. I don’t know what it will make of me, whether it will care or even notice me…Only the brave would send their imaginations inside the final moments.”

I have a post arguing against immortality brewing, to respond to Konrad Pabianczyk and continue the relentless fight against the Silicon Valley Futurists. It’s not possible to love the world too hard if you never die. There’s something right about the Freudian death drive, the lyricism of the brink of decay. Gracq harnesses it to create the ecstatic psychology of Au Chateau d’Argol. Borges describes how the nature of choice, the value we ascribe to experiences–the beauty of coincidence, the feeling of wonder that two minds might somehow connect so deeply that, as the angel made man in Wenders’s Wings of Desire, the voices finally stop, where the loneliness halts temporarily to usher aloneness in peace, true aloneness in the company of another, another like you, with you deeply and fully–would disappear if we knew that the probability of experiencing everything and the possibility of doing everything would go up if we could indeed live forever in this continual eternal return. And even way back when in Mesopotamia, in the days of the great Gilgamesh, the gods do grant Utnapishtim immortality, but on the condition of a life of loneliness, a life lived “in the distance, at the mouth of the rivers.”

Style is an exercise in applied psychology

On Thursday morning, I listened to Steven Pinker (coincidentally, or perhaps not so coincidentally, in dialogue with Ian McEwan, McEwan with his deep voice, the English accent a paradigm of steadied wisdom worth attending to) talk about good writing on an Intelligence Squared podcast recorded in 2014. He basically described how bad writing, in particular bad academic writing, results from psychological maladies of having to preemptively qualify and defend every statement you make against the pillories of peers and critiques. His talk reminded me of David Foster Wallace’s essay Authority and American Usage, what with collapsing the distinction between descriptivist and prescriptivist linguistics and exposing the unseemly truth that style, diction, and language index social class. The gem I took away was Pinker’s claim that style is an exercise in applied psychology, that we must consider who our readers are, what they’ve read, how they speak and think, and adapt what we present to meet them there without friction or rejection.

Screen Shot 2017-04-30 at 12.14.54 PM
Foster Wallace’s brilliant essay reviews a dictionary, and in doing so, critiques all the horrendous faux pas we make using the English language.

What’s freeing about this blog is that, unlike most of my other writing, I forget about the audience. There is no applied psychology. It’s just a mind revealed and revealing.

Music

Coda by Aaron Martin/Christoph Berg caught my attention yesterday evening as I walked under the bridge from the Lorimer station and waited, reading, in front of a bar in Williamsburg.

I had this Proustian madeleine experience last Sunday when The Beatitudes, by Vladimir Martynov, showed up on my Spotify Discover Weekly list. The Kronos Quartet version is featured in Paolo Sorrentino’s The Great Beauty. Hearing the music transported me back to a wintry Sunday morning in Chicago, up at the Music Box theater to see that film with the man I lived with and loved at the time. I relived this love, deeply. It was so touching, and yet another type of experience I just don’t think would be as powerful and impactful if I weren’t mortal, if there weren’t this knowledge that it’s no longer, but somehow always is, a commonplace as old as Greece, tucked away like shy toes under the sandal strap of Minerva’s marble shoe, cold, material, inside me, deeply, until I die, to be unlocked and unearthed by surprise, as if it were again present.

The image is of John Locke’s 1705 “New method of a commonplace book.” Looks like Locke wanted to add some structure to the Renaissance mess of just jotting things down to ease future retrieval. This is housed in the Beinecke rare book library at Yale. 

Homage to Derek Walcott

I remember it like it was yesterday. Like it was right now.

It was late June, 2007. My dear friend Andrew Gradman, whom I cherish so deeply, who shares my birthday and, unfortunately, some of the struggles of my temperament, was visiting me in Germany. We left the little studio apartment I lived in for a year in Sachsenhausen, a neighborhood in Frankfurt, stopped at Documenta in Kassel (which I appreciated far more than Andrew), trained our way to Berlin. I don’t remember how we spent most of our time in Berlin, but clearly recall the evening where we wandered towards the east side, following the traces of the former wall, and, without purpose or plan, arrived at the KulturBrauerei, a former brewery now converted into an arts and events hall. It just so happened they were hosting the opening night of an international Poesiefestival there that evening. Andrew was kind enough to indulge my desire to buy tickets and check it out.

The room was full but not stuffy or crowded. Late arrivers, we sat very close to the first row. Andrew was antsy, as he didn’t speak German and wasn’t jazzed about the prospect of having German poetry wash over him for an hour or two. Empathic to a fault, I sensed and lived his alienation, but selfishly tolerated his discomfort as I was excited about the event. The experience was in keeping with those I’d had at the many Literaturhäuser (literally, literature houses) across Germany, small cultural centers that house live novel and poetry readings. While living in Frankfurt, I caught the last three readings of the last chapters in the last volume of Marcel Proust’s In Search of Lost Time: a man named Peter Heusch had taken 13 years (!!) to read the entire work aloud, in one-hour installments on Fridays at 6 or 7 pm.

The room hushed. The festival started. The MC said she was delighted and honored to introduce the first poet, a man named Derek Walcott, to the stage.

I’d never heard of Walcott, but quickly understood he was a big deal. Nobel prize and such.

He walked slowly to the podium and paused. Breathed in and out a few times. His demeanor, his entire being, exuded the same epic energy we read in his verse. Like the storyteller in Wenders’ Wings of Desire, his gestures and eyes relayed the dead poets that gird his heart and his mind, Homer and Baudelaire and Yeats and everyone, just everyone, processed anew under the heavy anchor of honesty, of experience. He was old. He moved slowly with the poise of an oracle. He promised wisdom.

And then he read. He read the first book of The Prodigal, a long-form poem he first published in 2004. I don’t know how to adequately capture the emotions that swept over me listening to him read. All I can say is that, with each passing moment, my heart opened more. The alienation and discomfort passed. It was pure presence. This man, with his old skin and his oracular voice, relived the experiences he’d had as a young, erudite Caribbean man–or even as an old, erudite Caribbean man, the self of today telescoped through the self of yesterday–living in Boston, a Brahmin in nature and soul, just living in a body that looked different from everyone around him. A young man riding on the train up the northeastern corridor, watching the herons and grasses graze the shore, as past voices, echoes of Emerson and Thoreau, rose again into the thirst of his curiosity. I heard and saw myself, and reveled in the deep kinship that existed between me, a 23-year-old white girl, and him, a 77-year-old black man. I felt fused with him. I felt love. I felt such deep wonder and gratitude that chance had brought us there that evening. I don’t remember The Prodigal in detail, but a few scenes have been grafted into my mind. By now, I’ve given it as a Christmas gift to my mom and many dear friends. What I recall acutely is the depth, intensity, purity of the emotions I felt when he read. Derek Walcott gave me art.

He died old and, it appears, lived well. He was a magnificent poet. I heard Love After Love on NPR this morning, and, as always with Walcott, broke into tears. He captures the cadence of a self grateful for being, a self finally settling into love. A beauty always available to us all.

Love After Love

The time will come
when, with elation
you will greet yourself arriving
at your own door, in your own mirror
and each will smile at the other’s welcome,

and say, sit here. Eat.
You will love again the stranger who was your self.
Give wine. Give bread. Give back your heart
to itself, to the stranger who has loved you

all your life, whom you ignored
for another, who knows you by heart.
Take down the love letters from the bookshelf,

the photographs, the desperate notes,
peel your own image from the mirror.
Sit. Feast on your life.

The image is Rembrandt’s Prodigal Son, which I also fortuitously found in the Hermitage in Saint Petersburg. As with Walcott, I didn’t know this painting existed before it found me. I didn’t seek it out as a coveted must-see tourist experience. It wasn’t the Mona Lisa at the Louvre. It was a freezing February morning, I wandered the Hermitage all alone, and, upon turning a corner, saw this painting. I froze in my tracks and started to cry. Never had I seen contrition and forgiveness and unconditional love so delicately represented, the one foot without a shoe, the curling toes, the father’s ease after all those years of worry and fear, and the jealous, resentful gaze of the good son standing tall and ominous, watching.