Two lessons from giving talks

So I’m writing this blog post about why the AlphaGo documentary isn’t really about AlphaGo at all, but is squarely about Lee Sedol and the psychological pressure we put on ourselves when we strive to be top performers, the emotional connection we create with opponents, even when they bluff, and a few other things, and I (naturally) ended up down a rabbit hole about the absurd experiences I undergo as a public speaker, in particular as a woman speaking about tech. Two of those experiences strike me as quite revelatory and meaningful.

The first is that my time with the AV crew before going on stage is priceless. They are always my talk angels, the perfect outlet for self-deprecation and humor and energy release before having to perform. I don’t know if they know this may be the most important service they provide to speakers, at least speakers like me who are introverts inverted into extroverts on stage, who crave the feedback of a smile or a vote of confidence or a pat on the back or a friend before the show. My slides are always ludicrous by design, so we laugh over their skepticism that the slide deck starting with an image of Osama bin Laden’s compound in Abbottabad, Pakistan is the right one for a talk about machine learning. And then I NEVER have pants or pockets and we play the find-a-place-to-hook-the-microphone-jack game, be that on the back of my neck, the back of my bra, or even the back of my underwear (no joke, did that for the World Science Festival, THANK GOD it was a panel and I got to sit down after walking on stage because it was weighing down my underwear big time and I thought they would fall off right on stage…oh yeah, by the way, this is the stuff you have to think about as a performer, like, all the time, or at least as a woman performer, because I don’t think men have to deal with this kind of stuff).

The second is that having the AV break down may be the best theatrical device to deliver a great talk. I’ve had it happen to me multiple times now (Charlie Oliver thinks my business-card motto should be NO SLIDES NEEDED!) and have actually found that I prefer the energy when I’m screwed on stage. It seems best when the slides stop working two-thirds of the way in. That way, I have the luxury of communicating something using the props of images and memes on slides for a while (note to self: front load the deck with any mathematical concepts that are best explained with visual aids) and then have this magical moment where people are trailing off or looking at their phones or distracted by the rest of their busy lives, and they get surprised and it elicits first their confusion, then their empathy, and THEN, and here’s where the magic happens, their curiosity and their imagination! Because then I am forced to paint imaginative pictures of what the slides would have looked like if they were there, and my audience has a prior for how my talks tend to work, as they’ve seen the first two-thirds worth of images and can fill in the gaps. And the most electrified and engaged audiences I’ve ever addressed have been those whose attention perked up, who were with me, who followed me word by word after everything broke down. It elicits their compassion and, therethrough, their rapt attention. And it creates a virtuous feedback cycle. I have to work that much harder to ensure they understand, and they give me the nods or furrowed brows to show they do or don’t, and we communicate. It’s marvelous. They become actors in my story, part of the talk. Not just a passive audience.

Both of these lessons are about people being people. People connecting as people. Our identity as ruthlessly social beings. We abstract ourselves from our sociality in situations of performance, envisioning ourselves as brains in a vat who act on one plane only. But that’s not who we are. My delight in the absurd details surrounding the performance shows me otherwise. AlphaGo has a lot to say about that too (stay tuned…).

The featured image is of the Fillmore Miami. I gave this talk there, addressing an audience of industrial control systems security professionals. The lights glared in my face and I had no idea what people thought. I only had my own reflection in my mind, so I thought they hated it. After, many people told me it was the best talk of the conference.

Transitioning from Academia to Business

The wittiest (and longest) tweet thread I saw this week was (((Curtis Perry)))’s masterful narrative of the life of a graduate student as akin to the life of Job:

The first tweet chapter in Perry’s grad student life of Job. For the curious, Perry’s Twitter profile reads: Quinquegenarian lit prof and chronic feeder of Titivillus. Professional perseverator. Fellowship of the half-circle.

The timing of the tweet epic was propitious in my little subjective corner of the universe: Just two days before, I’d given a talk for Stanford’s Humanities Education Focal Group about my transition from being a disgruntled PhD in comparative literature to being an almost-functioning-normal-human-being executive at an artificial intelligence startup and a venture partner at a seed-stage VC firm.

Many of the students who attended the talk, ranging from undergrad seniors to sixth- or seventh-year PhDs, reached out afterwards to thank me and ask for additional advice. It was meaningful to give back to the community I came from and provide advice of a kind I sought but couldn’t find (or, more accurately, wasn’t prepared to listen to) when I struggled during the last two years of my PhD.

This post, therefore, is for the thousands of students studying humanities, fearing the gauntlet of the academic job market, and wondering what they might do to explore a different career path or increase their probability of success once they do. I offer only the anecdotes of one person’s successes and failures. Some things will be helpful for others; some will not. If nothing else, it serves as testimony that people need not be trapped in the annals of homogeneity. The world is a big and mighty place.


Important steps in my transition

Failure

As I narrated in a previous post, I hit rock bottom in my last year of graduate school. I remember sitting in Stanford’s Green Library in a pinnacle of anxiety, festering in a local minimum where I couldn’t write, couldn’t stick with the plan for my dissertation, couldn’t do much of anything besides play game after game of Sudoku to desperately pass the time. I left Stanford for a bit. I stopped trying. Encouraged by my worrying mother, I worked at a soup kitchen in Boston every day, pretending it was my job. I’d go in every day at 7:00 am and leave every afternoon at 3:00 pm. Working with my hands, working for others, gradually nurtured me back to stability.

It was during this mental breakdown that applications for a sixth-year dissertation fellowship were due. I forced myself to write a god-awful application in the guest bedroom at my parents’ Boston townhouse. It was indescribably hard. Paralyzed, I submitted an alienated abstract and dossier. A few months later, I received a letter informing me that the Humanities Center committee had rejected my application.

I remember the moment well. I was at Pluto’s salad joint on University Avenue in Palo Alto. By then, I had returned to Stanford and was working one day per week at Saint Martin’s Soup Kitchen in San Francisco, 15 hours per week at a location-based targeted advertising startup called Vantage Local (now Frequence), 5 hours per week tutoring Latin and Greek around the Valley, playing violin regularly, running, and reserving my morning hours to write. I had found balance, balance fit for my personality and needs. I had started working with a career counselor to consider alternative career paths, but had yet to commit to a move out of academia.

The letter gave me clarity. It was the tipping point I needed to say, that’s it; I’m done; I’m moving on. It did not feel like failure; it felt like relief. My mind started to plot next steps before I finished reading the rejection letter.

Luck

The timing couldn’t have been better. My friend Anaïs Saint-Jude had started Bibliotech, a forward-thinking initiative devoted to exploring the value graduate-level training in the humanities could provide to technology companies. I was fortunate enough to be one of the students who pitched their dissertations to conference attendees, including Silicon Valley heavyweights like Geoffrey Moore, Edgar Masri, Jeff Thermond, Bob Tinker, and Michael Korcuska, all of whom have since become mentors and friends. My intention to move into the private sector came across loud and clear at the event. Thanks to my internship at the advertising company, I had some exposure to the diction and mores of startups. The connections I made there were invaluable to my career. People opened doors that would have otherwise remained shut. All I needed was the first opportunity, and a few years to recalibrate my sense of self as I adapted to the reward system of the private sector.

Authenticity

I’ve mentored a few students who made similar transitions from academia into tech companies, and all have asked me how to defend their choice of pursuing a PhD instead of going directly into marketing, product, sales, whatever the role may be. Our culture embraces a bizarre essentialism, where we’re supposed to know what we want to be when we grow up from the ripe old age of 14, as opposed to finding ourselves in the self we come to inhabit through the serendipitous meanderings of trial and tribulation. (Ben Horowitz has a great commencement speech on the fallacy of following your passion.) The symptom of this essentialism in the transition from humanities to, say, marketing, is the strange assumption that we need to justify the PhD as part of a logical narrative, as some step in a master plan we intended from the beginning.

That just can’t be true. I can’t think of anyone who pursues a PhD in French literature because she feels it’s the most expedient move for a successful career in marketing. We pursue literature degrees because we love literature, we love the life of the mind, we are gluttons for the riches of history and culture. And then we realize that the professional realities aren’t quite what we expected. And, for some of us, acting for our own happiness means changing professions.

One thing I did well in my transition was to remain authentic. When I interviewed and people asked me about my dissertation, I got really great at giving them a 2-minute, crisp explanation of what I wrote about and why it was interesting. What they saw was an ability to communicate a complex topic in simple, compelling words. They saw the marks of a good communicator, which is crucial for enterprise marketing and sales. I never pretended I wanted to be a salesperson. I showed how I had excelled in every domain I’d played in, and could do the same in the next challenge and environment.

Selecting the right opportunity

Every company is different. Truly. Culture, stage, product, ethics, goals, size, role: so many factors contribute to shaping what an experience is like, what one learns in a role, and what future opportunities a present experience will afford.

When I left graduate school, I intentionally sought a mid-sized private company that had a culture that felt like a good fit for a fresh academic. It took some time, but I ended up working at a legaltech startup called Intapp. I wanted an environment where I’d benefit from a mentor (after all, I didn’t really have any business skills besides writing and teaching) and where I would have insight into strategic decisions made by executive management (as opposed to being far removed from executives at a large company like Google or Facebook). Intapp had the right level of nerdiness. I remember talking to the CTO about Confucius during my interviews. I plagued my mentor Dan Bressler with endless existential dribble as I went through the growing pains of becoming a business person. I felt embarrassed and pushy asking for a seat at the table for executive meetings, but made my way in on multiple occasions. Intapp sold business software to law firms. The what of the product was really not that interesting. But I learned that I loved the how, loved supporting the sales teams as a subject matter expert on HIPAA and professional responsibility, loved the complex dance of transforming myriad input from clients into a general product, loved writing on tight timelines and with feedback across the organization. I learned so incredibly much in my first role. It was a foundation for future success.

I am fortunate to be a statistical anomaly as a woman. Instead of applying for jobs where I satisfy skill requirements, I tend to seek opportunities with exponential growth potential. I come in knowing a little about the role I have to accomplish, and leave with a whole new set of skills. This creates a lot of cognitive dissonance and discomfort, but I wouldn’t have it any other way. My grey hairs may lead me to think otherwise soon, but I doubt it.

Humility

Last but certainly not least, I have always remained humble and never felt like a task was beneath me. I grew up working crappy jobs as a teenager: I was a janitor; a hostess; a busgirl; a sales representative at the Bombay Company in the mall in Salem, New Hampshire; a clerk at the Court Theater at the University of Chicago; a babysitter; a lawnmower; an intern at a BlackBerry provisioning tech company, where I basically drove a big truck around, lugged stuff from place to place, and babysat the CEO’s daughter. I see no work as beneath me, and view grunt work as the dues I pay for the amazing, amazing opportunities I have in my work (like giving talks to large audiences and meeting smart and inspiring people almost every day).

Having this humility helps enormously when you’re an entrepreneur. I didn’t mind starting as a marketing specialist, as I knew I could work hard and move up. I’ll yell at the computer in frustration when I have to upload email addresses to a GoToWebinar console or get the HTML to format correctly in a Mailchimp newsletter, but I’m working on showing greater composure as I grow into a leader. I always feel like I am going to be revealed as a fraud, as not good enough. This incessant self-criticism is a hallmark of my personality. It keeps me going.


Advice to current students

A rad Roman mosaic with the Greek dictum, Know Thyself

Finish your PhD

You’ll buy options for the future. No one cares what you studied or what your grades were. They do care that you have a doctorate and it can open up all sorts of opportunities you don’t think about when you’re envisioning the transition. I’ve lectured at multiple universities and even taught a course at the University of Calgary Faculty of Law. This ability to work as an adjunct professor would have been much, much harder to procure if I were only ABD.

This logic may not hold for students in their first year, for whom four more years is a lot of opportunity cost. But it’s not that hard to finish if you lower your standards and just get shit done.

Pity the small-minded

Many professors and peers will frown upon a move to business for all sorts of reasons. Sometimes it’s progressive ideology. Sometimes it’s insecurity. Most of the time it’s just lack of imagination. Most humanists profess to be relativists. You’d think they could extend that relativism to the choice of a profession. Just know that the emotional pressure of feeling like a failure if you don’t pursue a research career dwindles almost immediately when your value compass clocks a different true north.

Accept it’s impossible to imagine the unknown

The hardest part of deciding to do something radically different is that you have no mental model of your future. If you follow the beaten path, you can look to professors as role models and know what your life will look like (with some variation depending on which school you end up at). But it’s impossible to know what a different decision will lead to. This riddles the decision with anxiety, requiring something like a blind leap of faith. A few years down the line, you come to appreciate the creative possibility of a blank future.

Explore

There are so many free meetups and events taking place everywhere. Go to them. Learn something new. See what other people are doing. Ask questions. Do informational interviews. Talk to people who aren’t like yourself. Talk to me! Keep track of what you like and don’t like.

Collaborate

One of the biggest changes in moving from academia to business is the how of work. Cultures vary, but businesses are generally radically collaborative places and humanities work is generally isolated and entirely individual. It’s worthwhile to co-author a paper with a fellow grad student or build skills running a workshop or meetup. These logistics, communication, and project management skills are handy later on (and are good for your resume).

Experiment with different writing styles

Graduate school prepares you to write 20-page papers, which are great preparation for peer-reviewed journals and, well, nothing else. It doesn’t prepare you to write a good book. It doesn’t prepare you to write a good blog post or newspaper article. Business communication needs to be terse and on point so people can act on it. Engineers need guidance and clarity, need a sense of continuity of purpose. Customers need you to understand their point of view. Audiences need stories or examples to anchor abstract ideas. Having the agility to fit form to purpose is an invaluable skill for business communications. It’s really hard. Few do it well. Those who do are prized.

Learn how to give a good talk

Reading a paper aloud to an audience is the worst. Just don’t do it. People like funny pictures.

Know thyself

There is no right path. We’re all different. Business was a great path for me, and I’ve molded my career to match my interests, skill, personality, and emotional sensitivities. You may thrive in a totally different setting. So keep track of what you like and dislike. Share this thinking with others you love and see if what they think of you is similar to what you think of you. Figuring this out is the trickiest and potentially most valuable exercise in life. And sometimes it’s a way to transform what feels like a harrowing experience into an opportunity to gain yet another inch of soul.


The featured image is from William Blake’s illustrated Book of Job, depicting the just man rebuked by his friends. Blake has masterful illustrations of the Bible, including this radical image from Genesis, where Eve’s wandering eye displays a proleptic fall from grace, her vision, her fantasy too large for the limits of what Adam could safely provide - a heroine of future feminists, despite her fall. 


Censorship and the Liberal Arts

A few months ago, I interviewed a researcher highly respected in his field to support marketing efforts at my company. Before conducting the interview, I was asked to send my questions for pre-approval by the PR team of the corporation with which the researcher is affiliated. Backed by the inimitable power of their brand, the PR scions struck crimson lines through nearly half my questions. They were just doing their job, carrying out policy to draw no public attention to questions of ethics, safety, privacy, security, fear. Power spoke. The sword showed that it is always mightier than the pen, fool ourselves though we may.

Pangs of injustice rose fast in my chest. And yet, I obeyed.

Was this censorship? Was I a coward?

Intellectual freedom is nuanced in the private sector because when we accept a job, we sign a social contract. In exchange for a salary and a platform for personal development and growth, we give up full freedom of expression and absorb the values, goals, norms, and virtual personhood of the organization we join. The German philosopher Immanuel Kant explains the tradeoffs we make when constructing our professional identity in What is Enlightenment? (apologies for the long quotation, but it needed to be cited in full):

“This enlightenment requires nothing but freedom-and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue-drill!” The tax collector: “Do not argue-pay!” The pastor: “Do not argue-believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

On the other hand, the private use of reason may frequently be narrowly restricted without especially hindering the progress of enlightenment. By ‘public use of one’s reason’ I mean that use which a man, as scholar, makes of it before the reading public. I call ‘private use’ that use which a man makes of his reason in a civic post that has been entrusted to him. In some affairs affecting the interest of the community a certain [governmental] mechanism is necessary in which some members of the community remain passive. This creates an artificial unanimity which will serve the fulfillment of public objectives, or at least keep these objectives from being destroyed. Here arguing is not permitted: one must obey. Insofar as a part of this machine considers himself at the same time a member of a universal community-a world society of citizens-(let us say that he thinks of himself as a scholar rationally addressing his public through his writings) he may indeed argue, and the affairs with which he is associated in part as a passive member will not suffer. Thus it would be very unfortunate if an officer on duty and under orders from his superiors should want to criticize the appropriateness or utility of his orders. He must obey. But as a scholar he could not rightfully be prevented from taking notice of the mistakes in the military service and from submitting his views to his public for its judgment. The citizen cannot refuse to pay the taxes levied upon him; indeed, impertinent censure of such taxes could be punished as a scandal that might cause general disobedience. Nevertheless, this man does not violate the duties of a citizen if, as a scholar, he publicly expresses his objections to the impropriety or possible injustice of such levies. A pastor, too, is bound to preach to his congregation in accord with the doctrines of the church which he serves, for he was ordained on that condition. But as a scholar he has full freedom, indeed the obligation, to communicate to his public all his carefully examined and constructive thoughts concerning errors in that doctrine and his proposals concerning improvement of religious dogma and church institutions. This is nothing that could burden his conscience. For what he teaches in pursuance of his office as representative of the church, he represents as something which he is not free to teach as he sees it. He speaks as one who is employed to speak in the name and under the orders of another. He will say: “Our church teaches this or that; these are the proofs which it employs.” Thus he will benefit his congregation as much as possible by presenting doctrines to which he may not subscribe with full conviction. He can commit himself to teach them because it is not completely impossible that they may contain hidden truth. In any event, he has found nothing in the doctrines that contradicts the heart of religion. For if he believed that such contradictions existed he would not be able to administer his office with a clear conscience. He would have to resign it. Therefore the use which a scholar makes of his reason before the congregation that employs him is only a private use, for no matter how sizable, this is only a domestic audience. In view of this he, as preacher, is not free and ought not to be free, since he is carrying out the orders of others. 
On the other hand, as the scholar who speaks to his own public (the world) through his writings, the minister in the public use of his reason enjoys unlimited freedom to use his own reason and to speak for himself. That the spiritual guardians of the people should themselves be treated as minors is an absurdity which would result in perpetuating absurdities.”

Kant makes a tricky distinction between our public and private use of reason. What he calls “public use of reason” is what we normally consider to be private: The sacred space of personal opinion, not as unfettered stream of consciousness, but as the reflections and opinions that result from our sense of self as part of the species homo sapiens (some criticize this humanistic focus and think we should expand the space of commonality to include animals, plants, robots, rocks, wind, oceans, and other types of beings). Beliefs that are fair because they apply to me just as they apply to you and everyone else. Kant deems this “public” because he espouses a particular take on reason that is tied up with our ability to project ourselves as part of a larger universal we call humanity: for Kant, our freedom lies not in doing whatever we want, not in behaving like a toddler who gets to cry on a whim or roam around without purpose or drift in opiate stupor, but rather in our willingly adhering to self-imposed rules that enable membership in a collectivity beyond the self. This is hard to grasp, and I’m sure Kant scholars would poke a million holes in my sloppy interpretation. But, at least for me, the point here is public reason relates to the actions of our mind when we consider ourselves as citizens of the world, which, precisely because it is so broad, permits fierce individuality.

By contrast, “private use of reason” relates to a sense of self within a smaller group, not all of humanity. So, when I join a company, by making that decision, I willingly embrace the norms, culture, and personhood of this company. Does this mean I create a fictional sub-self every time I start a new job or join some new club or association? And that this fictional self is governed by different rules than the real me that exercises public reason in the comfort of my own mind and conscience? I don’t think so. It would require a fictional sub-self if the real self were a static thing that persists over time. But there’s no such thing as the real self. It’s a user illusion (hat tip to Dan Dennett for the language). We come as dyads and triads, the connections between the neurons in our brains ever morphing to the circumstances we find ourselves in. Because we are mortal, because we don’t have infinite time to explore the permutations of possible selves that would emerge as we shapeshift from one collectivity to the next, it’s important that we select our affiliations carefully, especially if we accept the tradeoffs of “private use of reason.” We don’t have time to waste our willful obedience on groups whose purpose and values skew too far from what our public reason holds dear. And yet, the restriction of self-interest that results from being part of a team is quite meaningful. It is perhaps the most important reason why we must beware the lure of a world without work.

This long exploration of Kant’s distinction between public and private reason leads to the following conclusion: No, I argue, it was not an act of cowardice to obey the PR scions when they censored me. I was exercising my “private use of reason,” as it would not have been good for my company to pick a fight. In this post, by contrast, I exercise my “public use of reason” and make manifest the fact that, as a human being, I feel pangs of rage against any form of censorship, against any limitation of inquiry, curiosity, discourse, and expression.

But do I really mean any? Can I really mean any in this age of Trumpism, where the First Amendment serves as a rhetorical justification to traffic fake news, racism, or pseudo-scientific justifications to explain why women don’t occupy leadership roles at tech companies?* And, where and how do we draw the line between actions that aren’t right according to public reason but are right according to private reason and those that are simply not right, period? By making a distinction between general and professional ethics, do we not risk a slippery slope where following orders can permit atrocities, as Hannah Arendt explores in Eichmann in Jerusalem?

These are dicey questions.

There are others that are even more dicey and delicate. What happens if the “private use of reason” is exercised not within a corporation or an office, affiliations we choose to make (should we be fortunate enough to choose…), but in a collectivity defined by a trait like age, race, gender, sexuality, religion, or class (where elective choice is almost always absent except when it absolutely is present (e.g., a decision to be transgender))? These categories are charged with social meaning that breaks Kant’s logic. Naive capitalists say we can earn our class through hard work. Gender and race are not discrete categories but continuous variables on a spectrum defined by local contexts and norms: In some circles, gender is pure expression of mind over body, a malleable sense of self in a dance with the impressions and reactions of others; in others, the rules of engagement are fixed to the point of submission and violence. Identity politics don’t follow the logic of the social contract. A willed tradeoff doesn’t make sense here. What act of freedom could result from subsuming individual preference for the greater good of a universal or local whole? (Open to being told why I’m totally off the mark, as these issues are far from my forte.)

What’s dangerous is when the experience of being part of a minority expresses itself as willed censorship, as a cloak to avoid the often difficult challenge of grappling with the paradoxical twists of private and public reason. When the difficult nuances of ethics reduce to the cocoon of exclusion, thwarting the potential of identifying common ground.

The censorship I accepted when I constrained my freedom as a professional differs from the censorship contemporary progressives demand from professors and peers. I agree with the defenders of liberalism that the distinction between private and public reason should collapse at the university. That the university should be a place where young minds are challenged, where we flex the muscles of transforming a gut reaction into an articulated response. Where being exposed to ideas different from one’s own is an opportunity for growth. Where, as dean of students Jay Ellison wrote to the incoming class of 2020 at the University of Chicago, “we do not support so called ‘trigger warnings,’ we do not cancel invited speakers because their topics might prove controversial,** and we do not condone the creation of intellectual ‘safe spaces’ where individuals can retreat from ideas and perspectives at odds with their own.” As an alumna of the University of Chicago, I felt immense pride at reading Bret Stephens’ recent New York Times op-ed about why Robert Zimmer is America’s best university president. Gaining practice in the art of argument and debate, in reading or hearing an idea and subjecting it to critical analysis, in appreciating why we’ve come to espouse some opinion given the set of circumstances afforded to us in our minute slice of experience in the world, in renting our positions until evidence convinces us to change our point of view, in deeply listening to others to understand why they think what they think so we can approach a counterargument from a place of common ground, all of these things are the foundations of being a successful professional. Being a good communicator is not a birthright. It is a skill we have to learn and exercise just like learning how to ride a bike or code or design a website. Except that it is much harder, as it requires a Stoic’s acceptance that we cannot control the minds or emotions of others; We can only seek to influence them from a place of mutual respect.

Given the ungodly cost of a university education in the United States, and our society’s myopic focus on creating productive workers rather than skeptical citizens, it feels horribly elitist to advocate for the liberal arts in this century of STEM, robots, and drones. But my emotions won’t have it otherwise: They beat with the proud tears of truth and meaning upon reading articles like Marilynne Robinson’s What Are We Doing Here?, where she celebrates the humanities as our reverence for the beautiful, for the possible, for the depth we feel in seeing words like grandeur, and the sadness that results when we imagine a world without the vastness of the Russian imagination or the elegance of the Chinese eye and hand.

But as the desire to live a meaningful life is not enough to fund the liberal arts, perhaps we should settle for a more pragmatic argument. Businesses are made of people, technologies are made by people, technologies are used by people. Every day, every person in every corporation faces ethical conundrums like the censorship example I outlined above. How can we approach these conundrums without tools or skills to break down the problem? How can we work to create the common ground required for effective communication if we’ve siphoned ourselves off into the cocoon of our subjective experience? Our universities should evolve, as the economic-social-political matrix is not what it once was. But they should not evolve at the expense of the liberal arts, which teach us how to be free.

*One of the stranger interviews James Damore conducted after his brief was leaked from Google was with the conservative radio host Stefan Molyneux, who suggested that conservatives and libertarians make better programmers because they are accustomed to dissecting the world in clear, black and white terms, as opposed to espousing the murky relativism of the liberals. It would be a sad world indeed if our minds were so inflexible that they lacked the ability to cleave a space to practice a technical skill.

**Sam Harris has discussed academic censorship and the tyranny of the progressives widely on the Waking Up podcast (and has met no lack of criticism for doing so), interviewing figures like Charles Murray, Nicholas Christakis, Mark Lilla, and others.

The featured image is from some edition of Areopagitica, a 1644 polemic that John Milton (yep, the author of Paradise Lost) addressed to the English Parliament to protest censorship. In it, Milton argues that virtue is not innate but learned, that just as we have to exercise our self-restraint to achieve the virtue of temperance, so too should we be exposed to all sorts of ideas from all walks of life to train our minds in virtue, to give ourselves the opportunity to be free. I love that bronze hand.

Education in the Age of AI

There’s all this talk that robots will replace humans in the workplace, leaving us poor, redundant schmucks with nothing to do but embrace the glorious (yet terrifying) creative potential of opiates and ennui. (Let it be noted that bumdom was all the rage in the 19th century, leading to the surging ecstasies of Baudelaire, Rimbaud, and the high priest of hermeticism (and my all-time favorite poet besides Sappho*), Stéphane Mallarmé**).

As I’ve argued in a previous post, I think that’s bollocks. But I also think it’s worth thinking about what cognitive, services-oriented jobs could and should look like in the next 20 years as technology advances. Note that I’m restricting my commentary to professional services work, as the manufacturing, agricultural, and transportation (truck and taxi driving) sectors entail a different type of work activity and are governed by different economic dynamics. They may indeed be quite threatened by emerging artificial intelligence (AI) technologies.

So, here we go.

I’m currently reading Yuval Noah Harari’s latest book, Homo Deus, and the following passage caught my attention:

“In fact, as time goes by it becomes easier and easier to replace humans with computer algorithms, not merely because the algorithms are getting smarter, but also because humans are professionalizing. Ancient hunter-gatherers mastered a very wide variety of skills in order to survive, which is why it would be immensely difficult to design a robotic hunter-gatherer. Such a robot would have to know how to prepare spear points from flint stones, find edible mushrooms in a forest, track down a mammoth and coordinate a charge with a dozen other hunters, and afterwards use medicinal herbs to bandage any wounds. However, over the last few thousand years we humans have been specializing. A taxi driver or a cardiologist specializes in a much narrower niche than a hunter-gatherer, which makes it easier to replace them with AI. As I have repeatedly stressed, AI is nowhere near human-like existence. But 99 per cent of human qualities and abilities are simply redundant for the performance of most modern jobs. For AI to squeeze humans out of the job market it needs only to outperform us in the specific abilities a particular profession demands.”

Harari is at his best critiquing liberal humanism. He features Duchamp’s ready-made art as the apogee of humanist aesthetics, where beauty is in the eye of the beholder.

This is astute. I love how Harari debunks the false impression that the human race progresses over time. We tend to be amazed by the technical sophistication of ancient works of art at the Met or the Louvre, assuming History (big H intended) is a straightforward, linear march from primitivism towards perfection. Culture and technologies are passed down through language and tradition from generation to generation, shaping and changing how we interact with one another and with the physical world, how we act as a collective and emerge into something far beyond our capacity to observe. But this does not mean that the culture and civilization we inhabit today are morally superior to those that came before, or to the few that still exist in the remote corners of the globe. Indeed, primitive hunter-gatherers, given the broad range of tasks they had to carry out to survive prior to Adam Smith’s division of labor across a collective, may have a skill set more immune to the “cognitive” smarts of new technologies than a highly educated, highly specialized service worker!

This reveals something about both the nature of AI and the nature of the division of labor in contemporary capitalism arising from industrialism. First, it helps us understand that intelligent systems are best viewed as idiot savants, not Renaissance Men. They are specialists, not generalists. As Tom Mitchell explains in the opening of his manifesto on machine learning:

“We say that a machine learns with respect to a particular task T, performance metric P, and type of experience E, if the system reliably improves its performance P at task T, following experience E. Depending on how we specify T, P, and E, the learning task might also be called by names such as data mining, autonomous discovery, database updating, programming by example, etc.”

Confusion about super-intelligent systems stems from the popular misunderstanding of the word “learn,” which is a term of art with a specific meaning in the machine learning community. The learning of machine learning, as Mitchell explains, does not mean perfecting a skill through repetition or synthesizing ideas to create something new. It means updating the slope of your function to better fit new data. In deep learning, these functions need not be simple, 2-D lines like we learn in middle school algebra: they can be incredibly complex curves that traverse thousands of dimensions (which we have a hard time visualizing, leading to tools like t-SNE that compress multi-dimensional math into the comfortable space-time parameters of human cognition).
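
To make that narrow sense of “learning” concrete, here is a minimal sketch (my own toy example, not code from any system mentioned in this post) in which learning is literally just nudging a slope and an intercept so that a performance metric improves with experience, in Mitchell’s T/P/E framing:

```python
import numpy as np

# Mitchell's framing: task T = predict y from x, performance P = mean
# squared error, experience E = 100 noisy (x, y) pairs.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # data from a noisy line

w, b = 0.0, 0.0  # an uninformed model: slope and intercept
lr = 0.1         # learning rate

for step in range(500):
    error = (w * x + b) - y
    # gradients of mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # "learning" = updating the slope (and intercept) to better fit the data
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches the true 3.0 and 0.5
```

Deep learning swaps the line for a function with millions of parameters, but the update loop is the same idea, which is why “learning” here has nothing to do with synthesis or insight.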

t-SNE reminds me of Edwin Abbott’s Flatland, where dimensions signify different social castes.

The AI research community is making baby steps in the dark trying to create systems with more general intelligence, i.e., systems that reliably perform more than one task. OpenAI Universe and DeepMind Lab are the most exciting attempts. At the Future Labs AI Summit this week, Facebook’s Yann LeCun discussed (largely failed) attempts to teach machines common sense. We tend to think that highly skilled tasks like diagnosing pneumonia from an X-ray or deeming a tax return in compliance with the IRS code require more smarts than intuiting that a Jenga tower is about to fall or perceiving that someone may be bluffing in a poker game. But these physical and emotional intuitions are, in fact, incredibly difficult to encode into mathematical models and functions. Our minds are probabilistic, plastic approximation machines, constantly rewiring themselves to help us navigate the physical world. This is damn hard to replicate with math, no matter how many parameters we stuff into a model! It may also explain why the greatest philosophers in history have always had room to revisit and question the givens of human experience****, infinitely more interesting and harder to describe than the specialized knowledge that populates academic journals.

Next, it is precisely this specialization that renders workers susceptible to being replaced by machines. I’m not versed enough in the history of economics to know how and when specialization arose, but it makes sense that there is a tight correlation between specialization, machine coordination, and scale, as R. David Dixon recently discussed in his excellent Medium article about machines and the division of labor. Some people are drawn to startups because they are the antithesis of specialization. You get to wear multiple hats, doubling, as I do in my role at Fast Forward Labs, as sales, marketing, branding, partnerships, and even consulting and services delivery. Guild work used to work this way, as in the nursery rhyme Rub-a-dub-dub: the butcher prepared meat from end to end, the baker made bread from end to end, and the candlestick maker made candles from end to end. As Dixon points out, tasks and the time it takes to do tasks become important once the steps in a given work process are broken apart, leading to theories of economic specialization as we see in Adam Smith, Henry Ford, and, in their modern manifestation, the cold, harsh governance of algorithms and KPIs. The corollary of scale is mechanism, templates, repetition, efficiency. And the educational system we’ve inherited from the late 19th century is tailored and tuned to churn out skilled, specialized automatons who fit nicely into the specific roles required by corporate machines like Google or Goldman Sachs.

Frederick Taylor pioneered the scientific management theories that shaped factories in the 20th century, culminating in process methodologies like Lean Six Sigma

This leads to the core argument I’d like to put forth in this post: the right educational training and curriculum for the AI-enabled job market of the 21st century should create generalists, not specialists. Intelligent systems will get better and better at carrying out specific activities and specific tasks on our behalf. They’ll do them reliably. They won’t get sick. They won’t have fragile egos. They won’t want to stay home and eat ice cream after a breakup. They can and should take over this specialized work to drive efficiencies and scale. But, machines won’t be like startup employees any time soon. They won’t be able to reliably wear multiple hats, shifting behavior and style for different contexts and different needs. They won’t be creative problem solvers, dreamers, or creators of mission. We need to educate the next generation of workers to be more like startup employees. We need to bring back respect for the generalist. We need the honnête homme of the 17th century or Arnheim*** in Robert Musil’s Man Without Qualities. We need hunter-gatherers who may not do one thing fabulously, but have the resiliency to do a lot of things well enough to get by.

What types of skills should these AI-resistant generalists have and how can we teach them?

Flexibility and Adaptability

Andrew Ng is a pithy tweeter. He recently wrote: “The half-life of knowledge is decreasing. That’s why you need to keep learning your whole life, not only through college.”

This is sound. The apprenticeship model we’ve inherited from the guild days, where the father-figure professor passes down his wisdom to the student who becomes assistant professor then associate professor then tenured professor then stays there for the rest of his life only to repeat the cycle in the next generation, should probably just stop. Technologies are advancing quickly, which opens opportunities to automate tasks we used to do manually or to do new things we couldn’t do before (like summarizing 10,000 Amazon customer reviews in a second, as the system my colleagues at Fast Forward Labs built can do). Many people fear change, and there are emotional hurdles to having to break out of habits and routine and learn something new. But honing the ability to recognize that new technologies are opening new markets and new opportunities will be essential to succeeding in a world where things constantly change. This is not to extol disruption. That’s infantile. It’s to accept and embrace the need to constantly learn to stay relevant. That’s exciting and even meaningful. Most people wait until they retire to finally take the time to paint or learn a new hobby. What if work itself offered the opportunity to constantly expand and take on something new? That doesn’t mean that everyone will be up to the challenge of becoming a data scientist overnight in some bootcamp. So the task universities and MOOCs have before them is to create curricula that will help laymen update their skills to stay relevant in the future economy.

Interdisciplinarity

From the late 17th to mid 18th centuries, intellectual giants like Leibniz, D’Alembert, and Diderot undertook the colossal task of curating and editing encyclopedias (the Greek etymology means “in the circle of knowledge”) to represent and organize all the world’s knowledge (Google and Wikipedia being the modern manifestations of the same goal). These Enlightenment powerhouses all assumed that the world was one, and that our various disciplines were simply different prisms that refracted a unified whole. The magic of the encyclopedia lay in the play of hyperlinks, where we could see the connections between things as we jumped from physics to architecture to Haitian voodoo, all different lenses we mere mortals required to view what God (for lack of a better name) would understand holistically and all at once.

Contemporary curricula focused on specialization force students to grow myopic blinders, viewing phenomena according to the methodologies and formalisms unique to a particular course of study. We then mistake these different ways of studying and asking questions for literally different things and objects in the world and in the process develop prejudices against other tastes, interests, and preferences.

There is a lot of value in doing the philosophical work to understand just what our methodologies and assumptions are, and how they shape how we view problems and ask and answer questions about the world. I think one of the best ways to help students develop sensitivities for methodologies is to have them study a single topic, like climate change, energy, truth, beauty, emergence, whatever it may be, from multiple disciplinary perspectives: how physics studies climate change; how politicians study climate change; how international relations scholars study climate change; how authors have portrayed climate change and its impact on society in recent literature. Stanford’s Thinking Matters and the University of Chicago’s Social Thought programs approach big questions this way. I’ve heard Thinking Matters has not helped humanities enrollment at Stanford, but still find the approach commendable.

The 18th-century Encyclopédie placed vocational knowledge like embroidery on equal footing with abstract knowledge of philosophy or religion.

Model Thinking

Michael Lewis does a masterful job narrating the lifelong (though not always strong) partnership between Daniel Kahneman and Amos Tversky in The Undoing Project. Kahneman and Tversky spent their lives showing how we are horrible probabilistic thinkers. We struggle with uncertainty and have developed all sorts of narrative and heuristic mental techniques to make our world feel more concrete. Unfortunately, we need to improve our statistical intuitions to succeed in a world of AI systems, which are probabilistic and output responses couched in statistical terms. While we can hide this complexity behind savvy design choices, really understanding how AI works and how it may impact our lives requires that we develop intuitions for how models, well, model the world. At least when I was a student 10 years ago, statistics was not required in high school or undergrad. We had to take geometry, algebra, and calculus, not stats. It seems to make sense to make basic statistics a mandatory requirement for contemporary curricula.
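
As a concrete illustration of the statistical intuition at stake (the numbers below are my own toy choices, not from Lewis’s book), here is the classic base-rate problem worked out with Bayes’ rule:

```python
# A test that is "95% accurate" for a condition affecting 1% of people
# yields far fewer true positives than intuition suggests.
prevalence = 0.01           # P(condition)
sensitivity = 0.95          # P(positive test | condition)
false_positive_rate = 0.05  # P(positive test | no condition)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# ~16%, not 95% -- exactly the kind of intuition a statistics requirement
# would train.
```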

Synthetic and Analogical Reasoning

There are a lot of TED Talks about brains and creativity. People love to hear about the science of making up new things. Many interesting breakthroughs in the history of philosophy or physics came from combining two strands of thought that were formerly separate: the French psychoanalyst Jacques Lacan, whose unintelligibility is beside the point, cleverly combined linguistic theory from Ferdinand de Saussure with psychoanalytic theory from Sigmund Freud to make his special brand of analysis; the Dutch physicist Erik Verlinde combined Newton and Maxwell’s equations with information theory to come to the stunning conclusion that gravity emerges from entropy (which is debated, but super interesting).

As we saw above, AI systems aren’t analogical or synthetic reasoners. In law, for example, they excel at classification tasks, like identifying whether a piece of evidence is relevant to a given matter, but they fail at other types of reasoning, like recognizing that the facts of one case are similar enough to the facts of another to merit a comparison using precedent. Technology cases help illustrate this. Data privacy law, for example, frequently thinks about our right to privacy in the virtual world through reference back to Katz v. United States, a 1967 case featuring a man making illegal gambling bets from a phone booth. Topic modeling algorithms would struggle to recognize that words connoting phones and bets had a relationship to words connoting tracking sensors on the bottom of vehicles (as in United States v. Jones). But lawyers and judges use Katz as precedent to think through this brave new world, showing how we can see similarities between radically different particulars from the right level of abstraction.
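
To give a rough sense of that gap, here is a toy sketch (my own invented paraphrases of the two fact patterns, using bag-of-words similarity as a crude stand-in for the purely lexical view a topic model takes) of why Katz and Jones look unrelated to a machine even though they look analogous to a lawyer:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Loose paraphrases of the fact patterns, for illustration only.
katz = ("defendant placed illegal gambling bets from a public "
        "telephone booth that the FBI had wiretapped")
jones = ("agents attached a GPS tracking device to the undercarriage "
         "of the suspect's vehicle and monitored its movements")

# Represent each fact pattern by the words it contains, then compare.
vectors = TfidfVectorizer(stop_words="english").fit_transform([katz, jones])
print(cosine_similarity(vectors[0], vectors[1])[0, 0])
# 0.0 -- no shared vocabulary, hence no "similarity," even though a judge
# sees both as questions about a reasonable expectation of privacy.
```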

Does this mean that, like stats, everyone should take a course on the basics of legal reasoning to make sure they’re relevant in the AI-enabled world? That doesn’t feel right. I think requiring coursework in the arts and humanities could do the trick.

Framing Qualitative Ideas as Quantitative Problems

A final skill that seems paramount for the AI-enabled economy is the ability to translate an idea into something that can be measured. Not everyone needs to be able to do this, but there will be good jobs, and more and more of them, for the people who can.

This is the data science equivalent of being able to go from strategy to tactical execution. Perhaps the hardest thing in data science, in particular as tooling becomes more ubiquitous and commoditized, is to figure out what problems are worth solving and what products are worth building. This requires working closely with non-technical business leaders who set strategy and have visions about where they’d like to go. But it takes a lot of work to break down a big idea into a set of small steps that can be represented as a quantitative problem, i.e., translated into some sort of technology or product. This is also synthetic and interdisciplinary thinking. It requires the flexibility to speak human and speak machine, to prioritize projects and have a sense for how long it will take to build a system that does what you need it to do, to render the messy real world tractable for computation. Machines won’t be automating this kind of work anytime soon, so it’s a skill set worth building. The best way to teach this is through case studies. I’d advocate for co-op training programs alongside theoretical studies, as Waterloo provides for its computer science students.
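
As a small, entirely hypothetical example of that translation step (the product question, tickets, and labels below are invented), here is what turning “do customers like the new dashboard?” into labels, a model, and a metric might look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: reframe the fuzzy question as a label a person can assign.
tickets = ["love the new dashboard", "the new dashboard is confusing",
           "this saved me an hour today", "cannot find anything anymore"]
liked = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Step 2: pick a simple model that maps ticket text to that label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, liked)

# Step 3: report a number that answers the original question.
new_tickets = ["the dashboard is great", "this update broke my workflow"]
happy_share = model.predict(new_tickets).mean()
print(f"estimated share of happy feedback: {happy_share:.0%}")
```

The code is the trivial part; the judgment call the paragraph above describes is deciding that “like” can be approximated by a label on a ticket and that a percentage is the number the business actually needs.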

Conclusion

While our culture idealizes and extols polymaths like Da Vinci or Galileo, it also undervalues generalists who seem to lack the discipline and rigor to focus on doing something well. Our academic institutions prize novelty and specialization, pushing us to focus on earning the new leaf at the edge of a vast tree wizened with rings of experience. We need to change this mindset to cultivate a workforce that can successfully collaborate with intelligent machines. The risk is a world without work; the reward is a vibrant and curious new humanity.


The featured image is from Émile, Jean-Jacques Rousseau’s treatise on education. Rousseau also felt educational institutions needed to be updated to better match the theories of man and freedom developed during the Enlightenment. Or so I thought! Upon reading this, one of my favorite professors (and people), Keith Baker, kindly insisted that “Rousseau’s goal in Emile was not to show how educational institutions could be improved (which he didn’t think would be possible without a total reform of the social order) but how the education of an individual could provide an alternative (and a means for an individual to live free in a corrupt society).” Keith knows his stuff, and recalling that Rousseau is a misanthropic humanist makes things all the more interesting. 

*Sappho may be the sexiest poet of all time. An ancient lyric poet from Lesbos, she left fragments that pulse with desire and eroticism. Randomly opening a collection, for example, I came across this:

Afraid of losing you

I ran fluttering/like a little girl/after her mother

**I’m stretching the truth here for rhetorical effect. Mallarmé actually made a living as an English teacher, although he was apparently horrible at both teaching and speaking English. Like Knausgaard in Book 2 of My Struggle, Mallarmé frequently writes poems about how hard it is for him to find a block of silence while his kids are screaming and needing attention. Bourgeois family life sublimated into the ecstasy of hermeticism. Another fun fact is that the French Symbolists loved Edgar Allan Poe, but in France they drop the Allan and just call him Edgar Poe.

***Musil modeled Arnheim after his nemesis Walther Rathenau, the German Foreign Minister during the Weimar Republic. Rathenau was a Jew, but identified mostly as a German. He wrote some very mystical works on the soul that aren’t worth reading unless you’d like to understand the philosophical and cocktail party ethos of the Habsburg Empire.

****I’m a devout listener of the Partially Examined Life podcast, where they recently discussed Wilfrid Sellars’s Empiricism and the Philosophy of Mind. Sellars critiques what he calls “the myth of the given” and has amazing thoughts on what it means to tell the truth.

The Utility of the Humanities in the 21st Century

I did my PhD in Comparative Literature at Stanford. There is likely no university in the US with a culture more antithetical to the humanities: Stanford embodies the libertarian, technocratic values of Silicon Valley, where disruptive innovation has crystallized into a platitude* and engineers are the new priestly caste. Stanford had massive electrical engineering and computer science graduate cohorts; there were five students in my cohort in comparative literature (all women, of diverse backgrounds, and a cohort quite large in contrast to the two- or three-student cohorts in Italian, German, and French). I had been accepted into several graduate programs across the country, but felt a responsibility to study at a university where the humanities were threatened. I didn’t want the ivory tower, the prestigious rare book collection, the ability to misuse words like isomorphism and polymorphic because they sounded scientific (I was a math undergrad), the stultified comfort that Wordsworth and Shelley were on the minds of strangers on the street. I wanted to learn what it would mean to defend a discipline undervalued by society, in an age where universities were becoming private businesses catering to undergraduate student consumers and the rising costs of education made it borderline irresponsible not to pursue vocational training that would land a decent job coding for a startup. Stanford’s very libertarianism also enabled me to craft an interdisciplinary methodology (crossing literature, history of science and mathematics, analytic philosophy, and classics) that more conservative departments would never entertain. This was wonderful during my coursework, and my Achilles heel when I had to write a dissertation and build a professional identity more conservative departments could recognize. I went insane, but mustered the strength and resilience required to complete my dissertation (in retrospect, I’m very grateful I did, as having a PhD has enabled me to teach as adjunct faculty alongside my primary job). After graduation, I left academia for the greener, freer pastures of the private sector.

The 2008-2009 financial crisis took place in the midst of my graduate studies. Ever tighter departmental budgets exacerbated the identity crisis the humanities were already facing. Universities had to cut costs, and French departments or film studies departments or German departments were the first to go. This shrank the already minuscule demand for humanities faculty, and sharply increased the anxiety my fellow PhDs and I felt about our future livelihoods. In keeping with the futurism of the Valley, Stanford (or at least a few professors at Stanford) was in the vanguard of considering alternative career paths for humanities PhDs: professors discussed shortening the time to degree, providing students with more vocational communications training so they could land jobs as social media marketers, and extolling the virtues of academic administration as a career path equal to that of a researcher. Others resisted vehemently. There was also a wave of activity defending the utility of the humanities to cultivate empathy and other social skills. I’ve spent a good portion of my life reading fiction, but must say it was never as rich a moral training ground as actual life experience. I’ve learned more about regulating my emotions and empathizing with others’ points of view in my four years in the private sector than I did in the 28 years of life before I embraced work as a career (rather than just a job). Some people are really hard to deal with, and you have to face these challenges head on to grow.

All this is context for my opinions defending the utility of the humanities in our contemporary society and economy. To be clear, in proposing these economic arguments, I’m not abandoning claims for the importance of the humanities in individual personal and intellectual development. On the contrary, I strongly believe that a balanced, liberal arts education is critical to foster the development of personal autonomy and civic judgement, to preserve and potentially resurrect the early republican (as political experiment, not party) goal that education cultivate critical citizens, not compliant economic agents. I was miserable as a graduate student, but don’t regret my path for a minute. And I think there is a case to be made that the humanities will be as-if not more-important than STEM to our national interests in the near future. Here’s why:

Technology and White-Collar Professions - In The Future of the Professions, Richard and Daniel Susskind demonstrate how technology is changing professions like medicine, law, investment management, accounting, and architecture. Their key insight is to structurally define white-collar professionals by the information asymmetry that exists between professional and client. Professionals know things it is hard for laymen to know: the tax code is complex and arcane, and it would take too much time for the Everyman (gender intentional) to understand it well enough to make judgments in her (gender intentional) favor. The same goes for diagnosing and treating an illness or managing the finances of a large corporation. The internet, however, and perhaps more importantly the new machine learning technologies that enable us to use the internet to answer hard, formerly professional questions, level this information asymmetry. Suddenly, tools can do what trained professionals used to do, and at a much lower cost (contrast the billed hours of a good lawyer with the economies of scale of Google). As such, the skills and activities professionals need are changing and will continue to change. Working in machine learning, I can say from experience that we are nowhere near an age where machines are going to flat-out replace people, creating a utopian world with universal basic income and bored Baudelaires assuaging ennui with opiates, sex, and poetry (laced with healthy doses of Catholic guilt). What is happening is that the day-to-day work of professionals is changing. Machines are ready and able to execute many of the repetitive tasks professionals do today (think of young associates reviewing documents to find relevant information for a lawsuit - in 2015, the Second Circuit tried to define what it means to practice law by contrasting tasks humans can do with tasks computers can do). As machines creep ever further into work that requires thinking and judgment, critical thinking, creativity, interpretation, emotions, and reasoning will become increasingly important. STEM may just lead to its own obsolescence (AI software is now making its own AI software), and in doing so is increasing the value of professionals trained in the humanities. This value lies in the design methodologies required to transform what were once thought processes into statistical techniques, to crystallize probabilistic outputs into intuitive features for non-technical users. It lies in creating the training data required to make a friendly chat bot. Most importantly, it lies in the empathy and problem-solving skills that will be the essence of professional work in the future.
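To make the document-review example concrete, here is a toy sketch of the kind of tool that takes over an associate’s first pass. It is my own illustration, not the Susskinds’, and the documents and labels are invented; the point is only that a simple statistical model trained on a lawyer’s prior judgments can score the rest of the pile, leaving humans the borderline calls.

```python
# Toy sketch: score unreviewed litigation documents against a lawyer's prior labels.
# All documents, labels, and names here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents an associate has already reviewed, labeled 1 = relevant, 0 = not relevant.
reviewed_docs = [
    "email discussing the merger timeline and due diligence requests",
    "lunch menu for the quarterly offsite",
    "memo on indemnification clauses in the draft agreement",
    "holiday party RSVP list",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X, labels)

# The model now scores the unreviewed pile; humans only inspect borderline scores.
unreviewed = ["draft merger agreement with revised indemnification terms"]
relevance = model.predict_proba(vectorizer.transform(unreviewed))[0, 1]
print(f"estimated relevance: {relevance:.2f}")
```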

Autonomy and Mores in the Gig Economy - In October 2015, I spoke at a Financial Times conference about corporate sustainability. The audience was filled with executives from organizations like the Hudson’s Bay Company (they started by selling beaver pelts and now own department stores like Saks Fifth Avenue) that had stayed in business over literally hundreds of years by gradually evolving and adding new business lines. The silver-haired rich men on the panel with me kept extolling the importance of “company values” as the key to keeping incumbents relevant in today’s society. And my challenge to them was to ask how modern, global organizations, in particular those with large, temporary 1099 workforces managed by impersonal algorithms, could cultivate mores and values like the small, local companies of the past. Indeed, I spent a few years helping international law firms build centralized risk and compliance operations, and in doing so came to appreciate that the Cravath model - an apprenticeship culture where skills, corporate culture, and mores are passed down from generation to generation, possible only because there is very low mobility between firms - simply does not scale to our mobile, changing, global workforce. As such, inculcating values takes a very different form and structure than it did in the past. We read a lot about how today’s careers are more like jungle gyms than ladders, where there is a need to constantly revamp and acquire new skills to keep up with changing technologies and demand, but this often overlooks the fact that companies - like clubs and societies - used to also shape our moral characters. You may say that user reviews (the five stars you can get as an Uber rider or Airbnb lodger) take the place of what was formerly the subjective judgment of colleagues and peers. But these cold metrics are a far cry from the suffering and satisfaction we experience when we break from or align with a community’s mores. This merits much more commentary than the brief suggestions I’ll make here, but I believe our globalized gig economy requires a self-reliant morality and autonomy that has no choice but to be cultivated apart from the workplace. And the seat of that cultivation would be some training in philosophy, ethics, and the humanities. Otherwise corporate values will be reduced to the cold rationality of some algorithm measuring OKRs and KPIs.

Ethics and Emerging Technologies - Just this morning, Guru Banavar, IBM’s Chief Science Officer for Cognitive Computing, posted a blog admonishing technologists building AI products that they “now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.” Banavar’s post is a very brief advertisement for the Partnership on AI that IBM, Google, Microsoft, Amazon, Facebook, and Apple have created to formalize attention around the ethical implications of the technologies they are building. Elon Musk co-founded OpenAI with a similar mission to research AI technologies with an eye towards ethics and safety. Again, there is much to say about the different ethical issues new technologies present (I surveyed a few a year ago in a Fast Forward Labs newsletter). The point here is that ethics is moving from a niche interest of progressive technologists to a core component of large corporate technology strategy. And the ethical issues new technologies pose are not trivial. It’s very easy to fall into Chicken Little logic traps (where scholars like Nick Bostrom speculate on worst-case scenarios just because they are feasible for us to imagine) that grab headlines instead of sticking with the discipline required to recognize how data technologies can amplify existing social biases. As Ted Underwood recently tweeted, doing this well requires both people who are motivated by critical thinking and people who are actually interested in machine learning technologies. But the “and” is critical, else technologists will waste a lot of time reinventing methods philosophers and ethicists have already honed. And even if the auditing of algorithms is carried out by technologists, humanists can help voice and articulate what they find. Finally, it goes without saying that we all need to sharpen our critical reading skills to protect our democracy in the age of Trump, filter bubbles, and fake news.
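For readers who want the bias-amplification point made concrete, here is a minimal sketch, entirely my own toy example with made-up numbers: when historical decisions favored one group independently of merit, a model trained on those decisions dutifully learns group membership itself as a predictive signal.

```python
# Toy sketch of bias amplification: a model trained on historically skewed
# decisions learns the protected attribute as if it were a real signal.
# All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)      # what we would like decisions to track
# Historical outcomes favored group 0 regardless of skill.
hired = ((skill + 1.5 * (group == 0)) > 0.5).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# A large negative weight on `group` means being in group 1 lowers the predicted
# outcome even at identical skill: the bias is reproduced, not removed.
print(dict(zip(["group", "skill"], model.coef_[0].round(2))))
```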

This is just a start. Each of these points can be developed, and there are many more to make. My purpose here is to shift the dialogue on the value of the humanities from utility in cultivating empathy and emotional character to real economic and social impact. The humanities are worth fighting for.

 

*For those unaware, Clayton Christensen coined the term disruptive innovation in The Innovator’s Dilemma. He contrasted it with sustaining innovation, the gradual technical improvements companies make to a product to meet market and customer demands. Inspired by Thomas Kuhn’s Structure of Scientific Revolutions, Christensen artfully demonstrates how great companies miss out on opportunities for disruptive innovation precisely because they are well run: disruptive innovations seize upon new markets with an unserved need, and only catch up to incumbents because technology can change faster than market preferences and demand. As disruption has crystallized into ideology, people often overlook that most products are sustaining innovations, incremental improvements upon an existing product or market need. It’s admittedly much more exciting to carry out a Copernican revolution, but if we consider that Trump may well be a disruptive innovator, who identified a latent market whose needs were underserved only to topple the establishment, we might sit back, pause, and reconsider our ideological assumptions.

The image is Jacques-Louis David’s The Death of Socrates from 1787. Plato sits at the front with his head down and his legs and arms peacefully and plaintively crossed.