Transitioning from Academia to Business

The wittiest (and longest) tweet thread I saw this week was (((Curtis Perry)))‘s masterful narrative of the life of a graduate student as akin to the life of Job:

The first tweet chapter in Perry’s grad student life of Job. For the curious, Perry’s Twitter profile reads: Quinquegenarian lit prof and chronic feeder of Titivillus. Professional perseverator. Fellowship of the half-circle.

The timing of the tweet epic was propitious in my little subjective corner of the universe: Just two days before, I’d given a talk for Stanford’s Humanities Education Focal Group about my transition from being a disgruntled PhD in comparative literature to being an almost-functioning-normal-human-being executive at an artificial intelligence startup and a venture partner at a seed-stage VC firm.

Many of the students who attended the talk, ranging from undergrad seniors to sixth- or seventh-year PhDs, reached out afterwards to thank me and ask for additional advice. It was meaningful to give back to the community I came from and provide advice of a kind I sought but couldn’t find (or, more accurately, wasn’t prepared to listen to) when I struggled during the last two years of my PhD.

This post, therefore, is for the thousands of students studying humanities, fearing the gauntlet of the academic job market, and wondering what they might do to explore a different career path or increase their probability of success once they do. I offer only the anecdotes of one person’s successes and failures. Some things will be helpful for others; some will not. If nothing else, it serves as testimony that people need not be trapped in the annals of homogeneity. The world is a big and mighty place.


Important steps in my transition

Failure

As I narrated in a previous post, I hit rock bottom in my last year of graduate school. I remember sitting in Stanford’s Green Library at a peak of anxiety, festering in a local minimum where I couldn’t write, couldn’t stick with the plan for my dissertation, couldn’t do much of anything besides play game after game of Sudoku to desperately pass the time. I left Stanford for a bit. I stopped trying. Encouraged by my worrying mother, I worked at a soup kitchen in Boston, pretending it was my job. I’d go in every day at 7:00 am and leave every afternoon at 3:00 pm. Working with my hands, working for others, gradually nurtured me back to stability.

It was during this mental breakdown that applications for a sixth-year dissertation fellowship were due. I forced myself to write a god-awful application in the guest bedroom of my parents’ Boston townhouse. It was indescribably hard. Paralyzed, I submitted an alienated abstract and dossier. A few months later, I received a letter informing me that the Humanities Center committee had rejected my application.

I remember the moment well. I was at Pluto’s salad joint on University Avenue in Palo Alto. By then, I had returned to Stanford and was working one day per week at Saint Martin’s Soup Kitchen in San Francisco, 15 hours per week at a location-based targeted advertising startup called Vantage Local (now Frequence), 5 hours per week tutoring Latin and Greek around the Valley, playing violin regularly, running, and reserving my morning hours to write. I had found balance, balance fit for my personality and needs. I had started working with a career counselor to consider alternative career paths, but had yet to commit to a move out of academia.

The letter gave me clarity. It was the tipping point I needed to say, that’s it; I’m done; I’m moving on. It did not feel like failure; it felt like relief. My mind started to plot next steps before I finished reading the rejection letter.

Luck

The timing couldn’t have been better. My friend Anaïs Saint-Jude had started Bibliotech, a forward-thinking initiative devoted to exploring the value graduate-level training in the humanities could provide to technology companies. I was fortunate enough to be one of the students who pitched their dissertations to conference attendees, including Silicon Valley heavyweights like Geoffrey Moore, Edgar Masri, Jeff Thermond, Bob Tinker, and Michael Korcuska, all of whom have since become mentors and friends. My intention to move into the private sector came across loud and clear at the event. Thanks to my internship at the advertising company, I had some exposure to the diction and mores of startups. The connections I made there were invaluable to my career. People opened doors that would have otherwise remained shut. All I needed was the first opportunity, and a few years to recalibrate my sense of self as I adapted to the reward system of the private sector.

Authenticity

I’ve mentored a few students who made similar transitions from academia into tech companies, and all have asked me how to defend their choice of pursuing a PhD instead of going directly into marketing, product, sales, or whatever the role may be. Our culture embraces a bizarre essentialism, where we’re supposed to know what we want to be when we grow up from the ripe old age of 14, as opposed to finding ourselves in the self we come to inhabit through the serendipitous meanderings of trial and tribulation. (Ben Horowitz has a great commencement speech on the fallacy of following your passion.) The symptom of this essentialism in the transition from humanities to, say, marketing, is the strange assumption that we need to justify the PhD as playing a part in a logical narrative, as some step in a master plan we intended from the beginning.

That just can’t be true. I can’t think of anyone who pursues a PhD in French literature because she feels it’s the most expedient move for a successful career in marketing. We pursue literature degrees because we love literature, we love the life of the mind, we are gluttons for the riches of history and culture. And then we realize that the professional realities aren’t quite what we expected. And, for some of us, acting for our own happiness means changing professions.

One thing I did well in my transition was to remain authentic. When I interviewed and people asked me about my dissertation, I got really great at giving them a 2-minute, crisp explanation of what I wrote about and why it was interesting. What they saw was an ability to communicate a complex topic in simple, compelling words. They saw the marks of a good communicator, which is crucial for enterprise marketing and sales. I never pretended I wanted to be a salesperson. I showed how I had excelled in every domain I’d played in, and could do the same in the next challenge and environment.

Selecting the right opportunity

Every company is different. Truly. Culture, stage, product, ethics, goals, size, role, so many factors contribute to shaping what an experience is like, what one learns in a role, and what future opportunities a present experience will afford.

When I left graduate school, I intentionally sought a mid-sized private company with a culture that felt like a good fit for a fresh academic. It took some time, but I ended up working at a legaltech startup called Intapp. I wanted an environment where I’d benefit from a mentor (after all, I didn’t really have any business skills besides writing and teaching) and where I would have insight into strategic decisions made by executive management (as opposed to being far removed from executives at a large company like Google or Facebook). Intapp had the right level of nerdiness. I remember talking to the CTO about Confucius during my interviews. I plagued my mentor Dan Bressler with endless existential drivel as I went through the growing pains of becoming a business person. I felt embarrassed and pushy asking for a seat at the table for executive meetings, but made my way in on multiple occasions. Intapp sold business software to law firms. The what of the product was really not that interesting. But I learned that I loved the how, loved supporting the sales teams as a subject matter expert on HIPAA and professional responsibility, loved the complex dance of transforming myriad input from clients into a general product, loved writing on tight timelines and with feedback across the organization. I learned so incredibly much in my first role. It was a foundation for future success.

I am fortunate to be a statistical anomaly as a woman. Instead of applying for jobs where I satisfy skill requirements, I tend to seek opportunities with exponential growth potential. I come in knowing a little about the role I have to accomplish, and leave with a whole new set of skills. This creates a lot of cognitive dissonance and discomfort, but I wouldn’t have it any other way. My grey hairs may lead me to think otherwise soon, but I doubt it.

Humility

Last but certainly not least, I have always remained humble and never felt like a task was beneath me. I grew up working crappy jobs as a teenager: I was a janitor; a hostess; a busgirl; a sales representative at the Bombay Company in the mall in Salem, New Hampshire; a clerk at the Court Theater at the University of Chicago; a babysitter; a lawnmower; an intern at a BlackBerry provisioning tech company, where I basically drove a big truck around, lugged stuff from place to place, and babysat the CEO’s daughter. I see no work as beneath me, and view grunt work as the dues I pay for the amazing, amazing opportunities I have in my work (like giving talks to large audiences and meeting smart and inspiring people almost every day).

Having this humility helps enormously when you’re an entrepreneur. I didn’t mind starting as a marketing specialist, as I knew I could work hard and move up. I’ll yell at the computer in frustration when I have to upload email addresses to a GoToWebinar console or get the HTML to format correctly in a Mailchimp newsletter, but I’m working on showing greater composure as I grow into a leader. I always feel like I am going to be revealed as a fraud, as not good enough. This incessant self-criticism is a hallmark of my personality. It keeps me going.


Advice to current students

A rad Roman mosaic with the Greek dictum, Know Thyself

Finish your PhD

You’ll buy options for the future. No one cares what you studied or what your grades were. They do care that you have a doctorate, and it can open up all sorts of opportunities you don’t think about when you’re envisioning the transition. I’ve lectured at multiple universities and even taught a course at the University of Calgary Faculty of Law. This ability to work as an adjunct professor would have been much, much harder to come by if I were only ABD.

This logic may not hold for students in their first year, for whom four more years is a lot of opportunity cost. But it’s not that hard to finish if you lower your standards and just get shit done.

Pity the small-minded

Many professors and peers will frown upon a move to business for all sorts of reasons. Sometimes it’s progressive ideology. Sometimes it’s insecurity. Most of the time it’s just lack of imagination. Most humanists profess to be relativists; you’d think they could extend that relativism to selecting a profession. Just know that the emotional pressure of feeling like a failure if you don’t pursue a research career dwindles almost immediately once your value compass clocks a different true north.

Accept it’s impossible to imagine the unknown

The hardest part of deciding to do something radically different is that you have no mental model of your future. If you follow the beaten path, you can look around to role model professors and know what your life will look like (with some variation depending on which school you end up in). But it’s impossible to know what a different decision will lead to. This riddles the decision with anxiety, requiring something like a blind leap of faith. A few years down the line, you come to appreciate the creative possibility of a blank future.

Explore

There are so many free meetups and events taking place everywhere. Go to them. Learn something new. See what other people are doing. Ask questions. Do informational interviews. Talk to people who aren’t like yourself. Talk to me! Keep track of what you like and don’t like.

Collaborate

One of the biggest changes in moving from academia to business is the how of work. Cultures vary, but businesses are generally radically collaborative places and humanities work is generally isolated and entirely individual. It’s worthwhile to co-author a paper with a fellow grad student or build skills running a workshop or meetup. These logistics, communication, and project management skills are handy later on (and are good for your resume).

Experiment with different writing styles

Graduate school prepares you to write 20-page papers, which are great preparation for peer-reviewed journals and, well, nothing else. It doesn’t prepare you to write a good book. It doesn’t prepare you to write a good blog post or newspaper article. Business communication needs to be terse and on point so people can act on it. Engineers need guidance and clarity, need a sense of continuity of purpose. Customers need you to understand their point of view. Audiences need stories or examples to anchor abstract ideas. Having the agility to fit form to purpose is an invaluable skill for business communications. It’s really hard. Few do it well. Those who do are prized.

Learn how to give a good talk

Reading a paper aloud to an audience is the worst. Just don’t do it. People like funny pictures.

Know thyself

There is no right path. We’re all different. Business was a great path for me, and I’ve molded my career to match my interests, skill, personality, and emotional sensitivities. You may thrive in a totally different setting. So keep track of what you like and dislike. Share this thinking with others you love and see if what they think of you is similar to what you think of you. Figuring this out is the trickiest and potentially most valuable exercise in life. And sometimes it’s a way to transform what feels like a harrowing experience into an opportunity to gain yet another inch of soul.


The featured image is from William Blake’s illustrated Book of Job, depicting the just man rebuked by his friends. Blake has masterful illustrations of the Bible, including this radical image from Genesis, where Eve’s wandering eye displays a proleptic fall from grace, her vision, her fantasy too large for the limits of what Adam could safely provide – a heroine of future feminists, despite her fall. 


Censorship and the Liberal Arts

A few months ago, I interviewed a researcher highly respected in his field to support marketing efforts at my company. Before conducting the interview, I was asked to send my questions for pre-approval by the PR team of the corporation with which the researcher is affiliated. Backed by the inimitable power of their brand, the PR scions struck crimson lines through nearly half my questions. They were just doing their job, carrying out policy to draw no public attention to questions of ethics, safety, privacy, security, fear. Power spoke. The sword showed that it is always mightier than the pen, fool ourselves though we may.

Pangs of injustice rose fast in my chest. And yet, I obeyed.

Was this censorship? Was I a coward?

Intellectual freedom is nuanced in the private sector because when we accept a job we sign a social contract. In exchange for a salary and a platform for personal development and growth, we give up full freedom of expression and absorb the values, goals, norms, and virtual personhood of the organization we join. The German philosopher Immanuel Kant explains the tradeoffs we make when constructing our professional identity in What is Enlightenment? (apologies for the long quotation, but it needed to be cited in full):

“This enlightenment requires nothing but freedom–and the most innocent of all that may be called “freedom”: freedom to make public use of one’s reason in all matters. Now I hear the cry from all sides: “Do not argue!” The officer says: “Do not argue–drill!” The tax collector: “Do not argue–pay!” The pastor: “Do not argue–believe!” Only one ruler in the world says: “Argue as much as you please, but obey!” We find restrictions on freedom everywhere. But which restriction is harmful to enlightenment? Which restriction is innocent, and which advances enlightenment? I reply: the public use of one’s reason must be free at all times, and this alone can bring enlightenment to mankind.

On the other hand, the private use of reason may frequently be narrowly restricted without especially hindering the progress of enlightenment. By ‘public use of one’s reason’ I mean that use which a man, as scholar, makes of it before the reading public. I call ‘private use’ that use which a man makes of his reason in a civic post that has been entrusted to him. In some affairs affecting the interest of the community a certain [governmental] mechanism is necessary in which some members of the community remain passive. This creates an artificial unanimity which will serve the fulfillment of public objectives, or at least keep these objectives from being destroyed. Here arguing is not permitted: one must obey. Insofar as a part of this machine considers himself at the same time a member of a universal community–a world society of citizens–(let us say that he thinks of himself as a scholar rationally addressing his public through his writings) he may indeed argue, and the affairs with which he is associated in part as a passive member will not suffer. Thus it would be very unfortunate if an officer on duty and under orders from his superiors should want to criticize the appropriateness or utility of his orders. He must obey. But as a scholar he could not rightfully be prevented from taking notice of the mistakes in the military service and from submitting his views to his public for its judgment. The citizen cannot refuse to pay the taxes levied upon him; indeed, impertinent censure of such taxes could be punished as a scandal that might cause general disobedience. Nevertheless, this man does not violate the duties of a citizen if, as a scholar, he publicly expresses his objections to the impropriety or possible injustice of such levies. A pastor, too, is bound to preach to his congregation in accord with the doctrines of the church which he serves, for he was ordained on that condition. 
But as a scholar he has full freedom, indeed the obligation, to communicate to his public all his carefully examined and constructive thoughts concerning errors in that doctrine and his proposals concerning improvement of religious dogma and church institutions. This is nothing that could burden his conscience. For what he teaches in pursuance of his office as representative of the church, he represents as something which he is not free to teach as he sees it. He speaks as one who is employed to speak in the name and under the orders of another. He will say: “Our church teaches this or that; these are the proofs which it employs.” Thus he will benefit his congregation as much as possible by presenting doctrines to which he may not subscribe with full conviction. He can commit himself to teach them because it is not completely impossible that they may contain hidden truth. In any event, he has found nothing in the doctrines that contradicts the heart of religion. For if he believed that such contradictions existed he would not be able to administer his office with a clear conscience. He would have to resign it. Therefore the use which a scholar makes of his reason before the congregation that employs him is only a private use, for no matter how sizable, this is only a domestic audience. In view of this he, as preacher, is not free and ought not to be free, since he is carrying out the orders of others. On the other hand, as the scholar who speaks to his own public (the world) through his writings, the minister in the public use of his reason enjoys unlimited freedom to use his own reason and to speak for himself. That the spiritual guardians of the people should themselves be treated as minors is an absurdity which would result in perpetuating absurdities.”

Kant makes a tricky distinction between our public and private use of reason. What he calls “public use of reason” is what we normally consider to be private: The sacred space of personal opinion, not as unfettered stream of consciousness, but as the reflections and opinions that result from our sense of self as part of the species homo sapiens (some criticize this humanistic focus and think we should expand the space of commonality to include animals, plants, robots, rocks, wind, oceans, and other types of beings). Beliefs that are fair because they apply to me just as they apply to you and everyone else. Kant deems this “public” because he espouses a particular take on reason that is tied up with our ability to project ourselves as part of a larger universal we call humanity: for Kant, our freedom lies not in doing whatever we want, not in behaving like a toddler who gets to cry on a whim or roam around without purpose or drift in opiate stupor, but rather in our willingly adhering to self-imposed rules that enable membership in a collectivity beyond the self. This is hard to grasp, and I’m sure Kant scholars would poke a million holes in my sloppy interpretation. But, at least for me, the point here is public reason relates to the actions of our mind when we consider ourselves as citizens of the world, which, precisely because it is so broad, permits fierce individuality.

By contrast, “private use of reason” relates to a sense of self within a smaller group, not all of humanity. So, when I join a company, by making that decision, I willingly embrace the norms, culture, and personhood of this company. Does this mean I create a fictional sub-self every time I start a new job or join some new club or association? And that this fictional self is governed by different rules than the real me that exercises public reason in the comfort of my own mind and conscience? I don’t think so. It would require a fictional sub-self if the real self were a static thing that persists over time. But there’s no such thing as the real self. It’s a user illusion (hat tip to Dan Dennett for the language). We come as dyads and triads, the connections between the neurons in our brains ever morphing to the circumstances we find ourselves in. Because we are mortal, because we don’t have infinite time to explore the permutations of possible selves that would emerge as we shapeshift from one collectivity to the next, it’s important that we select our affiliations carefully, especially if we accept the tradeoffs of “private use of reason.” We don’t have time to waste our willful obedience on groups whose purpose and values skew too far from what our public reason holds dear. And yet, the restriction of self-interest that results from being part of a team is quite meaningful. It is perhaps the most important reason why we must beware the lure of a world without work.

This long exploration of Kant’s distinction between public and private reason leads to the following conclusion: No, I argue, it was not an act of cowardice to obey the PR scions when they censored me. I was exercising my “private use of reason,” as it would not have been good for my company to pick a fight. In this post, by contrast, I exercise my “public use of reason” and make manifest the fact that, as a human being, I feel pangs of rage against any form of censorship, against any limitation of inquiry, curiosity, discourse, and expression.

But do I really mean any? Can I really mean any in this age of Trumpism, where the First Amendment serves as a rhetorical justification to traffic fake news, racism, or pseudo-scientific justifications to explain why women don’t occupy leadership roles at tech companies?* And, where and how do we draw the line between actions that aren’t right according to public reason but are right according to private reason and those that are simply not right, period? By making a distinction between general and professional ethics, do we not risk a slippery slope where following orders can permit atrocities, as Hannah Arendt explores in Eichmann in Jerusalem?

These are dicey questions.

There are others that are even more dicey and delicate. What happens if the “private use of reason” is exercised not within a corporation or office, affiliations we choose to make (should we be fortunate enough to choose…), but in a collectivity defined by a trait like age, race, gender, sexuality, religion, or class (where elective choice is almost always absent except when it absolutely is present (e.g., a decision to be transgender))? These categories are charged with social meaning that breaks Kant’s logic. Naive capitalists say we can earn our class through hard work. Gender and race are not discrete categories but continuous variables on a spectrum defined by local contexts and norms: In some circles, gender is pure expression of mind over body, a malleable sense of self in a dance with the impressions and reactions of others; in others, the rules of engagement are fixed to the point of submission and violence. Identity politics don’t follow the logic of the social contract. A willed trade-off doesn’t make sense here. What act of freedom could result from subsuming individual preference for the greater good of a universal or local whole? (Open to being told why I’m totally off the mark, as these issues are far from my forte.)

What’s dangerous is when the experience of being part of a minority expresses itself as willed censorship, as a cloak to avoid the often difficult challenge of grappling with the paradoxical twists of private and public reason. When the difficult nuances of ethics reduce to the cocoon of exclusion, thwarting the potential of identifying common ground.

The censorship I accepted to enact the constraints of my freedom as a professional differ from the censorship contemporary progressives demand from professors and peers. I agree with the defenders of liberalism that the distinction between private and public reason should collapse at the university. That the university should be a place where young minds are challenged, where we flex the muscles of transforming a gut reaction into an articulated response. Where being exposed to ideas different from one’s own is an opportunity for growth. Where, as dean of students Jay Ellison wrote to the incoming class of 2020 at the University of Chicago, “we do not support so called ‘trigger warnings,’ we do not cancel invited speakers because their topics might prove controversial,** and we do not condone the creation of intellectual ‘safe spaces’ where individuals can retreat from ideas and perspectives at odds with their own.” As an alumna of the University of Chicago, I felt immense pride at reading Bret Stephens’ recent New York Times op-ed about why Robert Zimmer is America’s best university president. Gaining practice in the art of argument and debate, in reading or hearing an idea and subjecting it to critical analysis, in appreciating why we’ve come to espouse some opinion given the set of circumstances afforded to us in our minute slice of experience in the world, in renting our positions until evidence convinces us to change our point of view, in deeply listening to others to understand why they think what they think so we can approach a counterargument from a place of common ground, all of these things are the foundations of being a successful professional. Being a good communicator is not a birthright. It is a skill we have to learn and exercise just like learning how to ride a bike or code or design a website. 
Except that it is much harder, as it requires a Stoic’s acceptance that we cannot control the minds or emotions of others; we can only seek to influence them from a place of mutual respect.

Given the ungodly cost of a university education in the United States, and our society’s myopic focus on creating productive workers rather than skeptical citizens, it feels horribly elitist to advocate for the liberal arts in this century of STEM, robots, and drones. But my emotions won’t have it otherwise: They beat with the proud tears of truth and meaning upon reading articles like Marilynne Robinson’s What Are We Doing Here?, where she celebrates the humanities as our reverence to the beautiful, to the possible, to the depth we feel in seeing words like grandeur and the sadness that results when we imagine a world without the vastness of the Russian imagination or the elegance of the Chinese eye and hand.

But as the desire to live a meaningful life is not enough to fund the liberal arts, perhaps we should settle for a more pragmatic argument. Businesses are made of people, technologies are made by people, technologies are used by people. Every day, every person in every corporation faces ethical conundrums like the censorship example I outlined above. How can we approach these conundrums without tools or skills to break down the problem? How can we work to create the common ground required for effective communication if we’ve siphoned ourselves off into the cocoon of our subjective experience? Our universities should evolve, as the economic-social-political matrix is not what it once was. But they should not evolve at the expense of the liberal arts, which teach us how to be free.

*One of the stranger interviews James Damore conducted after his memo was leaked from Google was with the conservative radio host Stefan Molyneux, who suggested that conservatives and libertarians make better programmers because they are accustomed to dissecting the world in clear, black-and-white terms, as opposed to espousing the murky relativism of liberals. It would be a sad world indeed if our minds were so inflexible that they lacked the ability to cleave a space to practice a technical skill.

**Sam Harris has discussed academic censorship and the tyranny of the progressives widely on the Waking Up podcast (and has met no lack of criticism for doing so), interviewing figures like Charles Murray, Nicholas Christakis, Mark Lilla, and others.

The featured image is from some edition of Areopagitica, a speech John Milton (yep, the author of Paradise Lost) gave to the British Parliament to protest censorship. In this speech, Milton argues that virtue is not innate but learned, that just as we have to exercise our self-restraint to achieve the virtue of temperance, so too should we be exposed to all sorts of ideas from all walks of life to train our minds in virtue, to give ourselves the opportunity to be free. I love that bronze hand.

Why Study Foreign Languages?

My ability to speak multiple languages is a large part of who I am.* Admittedly, the more languages I learn, the less mastery I have over each of the languages I speak. But I decided a while back I was ok with trading depth for breadth because I adore the process of starting from scratch, of gradually bringing once dormant characters to life, of working with my own insecurities and stubbornness as people respond in English to what must sound like pidgin German or Italian or Chinese, of hearing how the tone of my voice changes in French or Spanish, absorbing the Fanonian shock when a foreign friend raises his** eyebrows upon first hearing me speak English, surprised that my real, mother-tongue personality is far more harsh and masculine than the softer me embodied in metaphors of my not-quite-accurate French.***

You have to be comfortable with alienation to love learning foreign languages. Or perhaps so aware of how hard it is to communicate accurately in your mother tongue that it feels like a difference of degree rather than kind to express yourself in a language that’s not your own. Louis-Ferdinand Céline captures this feeling well in Mort à Crédit (one of the few books whose translated English title, Death on the Installment Plan, may be superior to the original!), when, as an exchange student in England, he narrates the gap between his internal dialogue and the self he expresses in his broken English to strangers at a dinner table. As a ruthless self-critic, I’ve taken great solace in being able to hide behind a lack of precision: I wanted to write my undergraduate thesis (which argued that Proust was decidedly not Platonic) in French because the foreign language was a mask for the inevitable imperfection of my own thinking. Exposing myself, my vulnerabilities, my imperfections, my stupidity, was too much for me to handle. I felt protected by the veil of another tongue, like Samuel Beckett or Nabokov**** deliberately choosing to write in a language other than their own to both escape their past and adequately capture the spirit of their present.

But there’s more than just a desire to take refuge in the sanctuary of the other. There’s also the gratitude of connection. The delight the champagne producer in a small town outside Reims experiences upon learning that you, an American, have made the effort to understand her culture. The curiosity the Bavarian scholar experiences when he notices that your German accent is more hessisch than bayerisch (or, in Bavarian, bairisch, as one reader pointed out), his joy at teaching you how to gently roll your r’s and sound more like a southerner when you visit Neuschwanstein and marvel at the sublime decadence of Ludwig II. The involuntary smile that illuminates the face of the Chinese machine learning engineer during a screening interview when you tell them about your struggles to master Chinese characters. Underlying this is the joy we all experience when someone makes an effort to understand us for who we are, to crack open the crevices that permit deeper connections, to further our spirituality and love.

In short, learning a new language is wonderful. And the tower of Babel separating one culture from another adds immense richness to our world.

To date, linguae francae have been the result of colonial power and force: the world spoke Greek because the Greeks had power; the world spoke French because the French had power; the world speaks English because the Americans have had power (time will tell if that’s true in 20 years…). Efforts to synthesize a common language, like Esperanto or even Leibniz’s Universal Characteristic, have failed. But Futurists claim we’re reaching a point where technology will free us from our colonial shackles. Neural networks, they claim, will be able to apply their powers of composition and sequentiality to become the trading floor or central exchange for all the world’s languages, a no man’s land of abstraction general enough to represent all the nuances of local communication. I’m curious to know how many actual technologists believe this is the case. Certainly, there have been some really rad breakthroughs of late, as Gideon Lewis-Kraus eloquently captured in his profile of the Google Brain team and as the Economist describes in a tempered article about tasks automated translators currently perform well. My friend Gideon Mann and I are currently working on a fun project where we send daily emails filtered through the many available languages on Google Translate, which leads to some cute but generally comprehensible results (the best part is just seeing Nepali or Zulu show up in my inbox). On the flip side, NLP practitioners like Yoav Goldberg find these claims arrogant and inflated: the Israeli scientist just wrote a very strong Medium post critiquing a recent arXiv paper by folks at MILA that claims to generate high-quality prose using generative adversarial networks.*****
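The daily-email experiment described above can be sketched as a simple translation chain. The `translate` function below is a hypothetical stand-in for a real service like Google Translate’s API — here it’s stubbed with a tiny bilingual dictionary so the sketch runs offline — but the chaining logic is the interesting part: each hop feeds its output to the next language pair.

```python
# A toy sketch of the "telephone game" translation chain.
# `translate` is a hypothetical stub; a real version would call a
# translation service such as the Google Cloud Translation API.

TOY_DICTIONARY = {
    ("en", "de"): {"hello": "hallo", "world": "welt"},
    ("de", "fr"): {"hallo": "bonjour", "welt": "monde"},
    ("fr", "en"): {"bonjour": "hello", "monde": "world"},
}

def translate(text, src, dst):
    """Word-by-word stub translation; unknown words pass through unchanged."""
    table = TOY_DICTIONARY.get((src, dst), {})
    return " ".join(table.get(word, word) for word in text.lower().split())

def chain_translate(text, langs):
    """Pipe `text` through a chain of languages, recording every hop."""
    hops = [(langs[0], text)]
    for src, dst in zip(langs, langs[1:]):
        text = translate(text, src, dst)
        hops.append((dst, text))
    return hops

for lang, text in chain_translate("Hello world", ["en", "de", "fr", "en"]):
    print(f"{lang}: {text}")
```

With a real translation backend, the fun (and the cute garbling) comes from lossy hops: each language pair discards nuance the next pair can’t recover.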

Let’s assume, for the sake of the exercise, that these tools will perform well enough that we no longer need to learn another language to communicate with others. Will language learning still be a valuable skill, or will it be outsourced to computers like multiplication?

I think there’s value in learning foreign languages even if computers can speak them better than we can. Here are some other things I value about language learning:

  • Foreign languages train your mind in abstraction. You start to see grammatical patterns in how languages are constructed and can apply these patterns to rapidly acquire new languages once you’ve learned one or two.
  • Foreign languages help you appreciate how our experiences are shaped by language. For example, in English we fall in love with someone, in French we fall in love of someone, in German we fall in love in someone. Does that directionality impact our experience of connection?
  • Foreign languages force you to read things more slowly, thereby increasing your retention of material and interpretative rigor.
  • Foreign languages encourage empathy and civic discourse, because you realize the relativity of your own ideas and opinions.
  • Foreign languages open new neural pathways, increasing your creativity.
  • Foreign languages are fun and it’s gratifying to connect with people in their mother tongue!
  • Speaking in a foreign language adds another level of mental difficulty to any task, making even the most boring thing (or conversation) more interesting.

I also polled Facebook and Twitter to see what other people thought. Here’s a selection of responses:

[Screenshots of poll responses from Facebook and Twitter, June 10, 2017.]

The best part of this exercise was how quickly and passionately people responded. It was a wonderful testimony to open-mindedness, curiosity, courage, and thirst for learning in an age where values like these are threatened. Let’s keep up the good fight!

*Another perk of living in Canada is that I get to speak French on a regular basis! Granted, Québecois is really different than my Parisian French, but it’s still awesome. And I’m here on a francophone work permit, which was the fastest route to getting me legal working status before the fast-track tech visa program that begins today.

**Gender deliberate.

*** It really irritates me when people say French is an easy language for native English speakers to learn. It’s relatively (i.e., versus Chinese or Arabic) easy to get to proficiency in French, but extremely difficult to achieve fluency in the language’s full expressive power, which includes ironic nuances for different concessive phrases (“although this happened…”), the elegant ability to invert subject and verb to intimate doubt or suspicion, the ability to couple together conditional phrases, resonances with literary texts, and so much more.

****A reader wrote in correcting this statement about Nabokov. Apparently Nabokov could read and write in English before Russian. Said reader entitled his email to me “Vivian Darkbloom,” a character representing Nabokov himself who makes a cameo appearance in Lolita. If it’s false to claim that Nabokov uses English as a protective veil for his psychology, it may be true that cameos in anagram are his means to cloak presence and subjectivity, as he also appears – like Hitchcock in his films – as the character Blavdak Vinomori in King, Queen, Knave.

*****Here’s the most interesting technical insight from Goldberg’s post: “To summarize the technical contribution of the paper (and the authors are welcome to correct me in the comments if I missed something), adversarial training for discrete sequences (like RNN generators) is hard, for the following technical reason: the output of each RNN time step is a multinomial distribution over the vocabulary (a softmax), but when we want to actually generate the sequence of symbols, we have to pick a single item from this distribution (convert to a one-hot vector). And this selection is hard to back-prop the gradients through, because its non-differentiable. The proposal of this paper is to overcome this difficulty by feeding the discriminator with the softmaxes (which are differentiable) instead of the one-hot vectors.” Goldberg cites the MILA paper as a symptom of a larger problem in current academic discourse in the ML and technology community, where platforms like arXiv short-circuit the traditional peer review process. This is a really important and thorny issue, as traditional publishing techniques slow research, reserve the privilege of research to a select few, and place paywalls around access. However, it’s also true that naive readers want to trust the output of top-tier research labs, and without proper quality controls we’ll fall prey to reputation. A dangerous recent example of this was the Chinese study of automatic criminality detection, masterfully debunked by some friends at Google.
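Goldberg’s point about non-differentiability can be illustrated numerically. The sketch below (plain Python, no ML framework) compares a finite-difference gradient through a softmax with one through a hard one-hot selection: the softmax output responds smoothly to a small nudge in a logit, while the argmax-based one-hot output is flat almost everywhere, so no gradient signal can flow back to the generator.

```python
import math

def softmax(logits):
    """Differentiable: every output varies smoothly with every logit."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def one_hot_argmax(logits):
    """Non-differentiable: picking a single symbol is a hard, discrete step."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return [1.0 if i == best else 0.0 for i in range(len(logits))]

def finite_diff(f, logits, i, eps=1e-5):
    """Numerical gradient of output i with respect to logit i."""
    bumped = list(logits)
    bumped[i] += eps
    return (f(bumped)[i] - f(logits)[i]) / eps

logits = [2.0, 1.0, 0.5]
print(finite_diff(softmax, logits, 0))         # positive: gradient flows
print(finite_diff(one_hot_argmax, logits, 0))  # 0.0: no gradient signal
```

This is exactly why the MILA paper feeds the discriminator the softmax distributions rather than sampled one-hot vectors: the soft version preserves a usable gradient path.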

The featured image comes from Vext Magazine’s edition of Jorge Luis Borges’s Library of Babel (never heard of Vext until just now but looks worth checking out!). It’s a very apt representation of the first sentence in Borges’s wonderful story: The universe (which others call the Library) is composed of an indefinite and perhaps infinite number of hexagonal galleries, with vast air shafts between, surrounded by very low railings. From any of the hexagons one can see, interminably, the upper and lower floors. Having once again moved to a new city, being once again in the state of incubation and potentiality, and yet from an older vantage point, where my sense of self and identity is different than in my 20s, I’m drawn to this sentence: Like all men of the Library, I have traveled in my youth; I have wandered in search of a book, perhaps the catalogue of catalogues…

The Utility of the Humanities in the 21st Century

I did my PhD in Comparative Literature at Stanford. There is likely no university in the US with a culture more antithetical to the humanities: Stanford embodies the libertarian, technocratic values of Silicon Valley, where disruptive innovation has crystallized into a platitude* and engineers are the new priestly caste. Stanford had massive electrical engineering and computer science graduate cohorts; there were five students in my cohort in comparative literature (all women, of diverse backgrounds, and a large group in contrast to the two- or three-student cohorts in Italian, German, and French). I had been accepted into several graduate programs across the country, but felt a responsibility to study at a university where the humanities were threatened. I didn’t want the ivory tower, the prestigious rare book collection, the ability to misuse words like isomorphism and polymorphic because they sounded scientific (I was a math undergrad), the stultified comfort that Wordsworth and Shelley were on the minds of strangers on the street. I wanted to learn what it would mean to defend a discipline undervalued by society, in an age where universities were becoming private businesses catering to undergraduate student consumers and the rising costs of education made it borderline irresponsible not to pursue vocational training that would land a decent job coding for a startup. Stanford’s very libertarianism also enabled me to craft an interdisciplinary methodology–crossing literature, history of science and mathematics, analytic philosophy, and classics–that more conservative departments would never entertain. This was wonderful during my coursework, and my Achilles heel when I had to write a dissertation and build a professional identity more conservative departments could recognize.
I went insane, but mustered the strength and resilience required to complete my dissertation (in retrospect, I’m very grateful I did, as having a PhD has enabled me to teach as adjunct faculty alongside my primary job). After graduation, I left academia for the greener, freer pastures of the private sector.

The 2008-2009 financial crisis took place in the midst of my graduate studies. Ever tighter departmental budgets exacerbated the identity crisis the humanities were already facing. Universities had to cut costs, and French departments or film studies departments or German departments were the first to go. This shrank the already minuscule demand for humanities faculty, and exponentially increased the level of anxiety my fellow PhDs and I experienced regarding our future livelihoods. In keeping with the futurism of the Valley, Stanford (or at least a few professors at Stanford) was in the vanguard of considering alternative career paths for humanities PhDs: professors discussed shortening the time to degree, providing students with more vocational communications training so they could land jobs as social media marketers, and extolling the value of academic administration as a career path equal to that of a researcher. Others resisted vehemently. There was also a wave of activity defending the utility of the humanities to cultivate empathy and other social skills. I’ve spent a good portion of my life reading fiction, but must say it was never as rich a moral training ground as actual life experience. I’ve learned more about regulating my emotions and empathizing with others’ points of view in my four years in the private sector than I had in the 28 years of life before I embraced work as a career (rather than just a job). Some people are really hard to deal with, and you have to face these challenges head on to grow.

All this is context for my opinions defending the utility of the humanities in our contemporary society and economy. To be clear, in proposing these economic arguments, I’m not abandoning claims for the importance of the humanities in individual personal and intellectual development. On the contrary, I strongly believe that a balanced, liberal arts education is critical to foster the development of personal autonomy and civic judgement, to preserve and potentially resurrect our early Republican (as political experiment, not party) goals that education cultivate critical citizens, not compliant economic agents.  I was miserable as a graduate student, but don’t regret my path for a minute. And I think there is a case to be made that humanities will be as–if not more–important than STEM to our national interests in the near future. Here’s why:

Technology and White-Collar Professions – In The Future of the Professions, Richard and Daniel Susskind demonstrate how technology is changing professions like medicine, law, investment management, accounting, and architecture. Their key insight is to structurally define white-collar professionals by the information asymmetry that exists between professional and client. Professionals know things it is hard for laymen to know: the tax code is complex and arcane, and it would take too much time for the Everyman (gender intentional) to understand it well enough to make judgments in her (gender intentional) favor. Same goes for diagnosing and treating an illness or managing the finances of a large corporation. The internet, however, and perhaps more importantly the new machine learning technologies that enable us to use the internet to answer hard, formerly professional questions, level this information asymmetry. Suddenly, tools can do what trained professionals used to do, and at much lower cost (contrast the billed hours of a good lawyer with the economies of scale of Google). As such, the skills and activities professionals need are changing and will continue to change. Working in machine learning, I can say from experience that we are nowhere near an age where machines are going to flat out replace people, creating a utopian world with universal basic income and bored Baudelaires assuaging ennui with opiates, sex, and poetry (laced with healthy doses of Catholic guilt). What is happening is that the day-to-day work of professionals is changing and will continue to change. Machines are ready and able to execute many of the repetitive tasks done by many professionals (think young associates reviewing documents to find relevant information for a lawsuit – in 2015, the Second Circuit tried to define what it means to practice law by contrasting tasks humans can do with tasks computers can do).
As machines creep ever further into work that requires thinking and judgment, critical thinking, creativity, interpretation, emotions, and reasoning will become increasingly important. STEM may just lead to its own obsolescence (AI software is now making its own AI software), and in doing so is increasing the value of professionals trained in the humanities. This value lies in the design methodologies required to transform what were once thought processes into statistical techniques, to crystallize probabilistic outputs into intuitive features for non-technical users. It lies in creating the training data required to make a friendly chat bot. Most importantly, it lies in the empathy and problem-solving skills that will be the essence of professional work in the future.

Autonomy and Mores in the Gig Economy – In October 2015, I spoke at a Financial Times conference about corporate sustainability. The audience was filled with executives from organizations like the Hudson Bay Company (they started by selling beaver pelts and now own department stores like Saks Fifth Avenue) that had stayed in business over literally hundreds of years by gradually evolving and adding new business lines. The silver-haired rich men on the panel with me kept extolling the importance of “company values” as the key to keeping incumbents relevant in today’s society. And my challenge to them was to ask how modern, global organizations, in particular those with large, temporary 1099 workforces managed by impersonal algorithms, could cultivate mores and values like the small, local companies of the past. Indeed, I spent a few years helping international law firms build centralized risk and compliance operations, and in doing so came to appreciate that the Cravath model – an apprenticeship culture where skills, corporate culture, and mores are passed down from generation to generation, as there is very low mobility between firms – simply does not scale to our mobile, changing, global workforce. As such, inculcating values takes a very different form and structure than it did in the past. We read a lot about how today’s careers are more like jungle gyms than ladders, where there is a need to constantly revamp and acquire new skills to keep up with changing technologies and demand, but this often overlooks the fact that companies – like clubs and societies – used to also shape our moral characters. You may say that user reviews (the five stars you can get as an Uber rider or AirBnB lodger) take the place of what was formerly the subjective judgment of colleagues and peers. But these cold metrics are a far cry from the suffering and satisfaction we experience when we break from or align with a community’s mores.
This merits much more commentary than the brief suggestions I’ll make here, but I believe our globalized, gig economy requires a self-reliant morality and autonomy that has no choice but to be cultivated apart from the workplace. And the seat of that cultivation would be some training in philosophy, ethics, and humanities. Otherwise corporate values will be reduced to the cold rationality of some algorithm measuring OKRs and KPIs.

Ethics and Emerging Technologies – Just this morning, Guru Banavar, IBM’s Chief Science Officer for Cognitive Computing, posted a blog post admonishing technologists building AI products that they “now shoulder the added burden of ensuring these technologies are developed, deployed and adopted in responsible, ethical and enduring ways.” Banavar’s post is a very brief advertisement for the Partnership on AI that IBM, Google, Microsoft, Amazon, Facebook, and Apple have created to formalize attention around the ethical implications of the technologies they are building. Elon Musk co-founded OpenAI with a similar mission to research AI technologies with an eye towards ethics and safety. Again, there is much to say about the different ethical issues new technologies present (I surveyed a few a year ago in a Fast Forward Labs newsletter). The point here is that ethics is moving from a niche interest of progressive technologists to a core component of large corporate technology strategy. And the ethical issues new technologies pose are not trivial. It’s very easy to fall into Chicken Little logic traps (where scholars like Nick Bostrom speculate on worst-case scenarios just because they are feasible for us to imagine) that grab headlines instead of sticking with the discipline required to recognize how data technologies can amplify existing social biases. As Ted Underwood recently tweeted, doing this well requires both people who are motivated by critical thinking and people who are actually interested in machine learning technologies. But the “and” is critical, else technologists will waste a lot of time reinventing methods philosophers and ethicists have already honed. And even if the auditing of algorithms is carried out by technologists, humanists can help voice and articulate what they find. Finally, it goes without saying that we all need to sharpen our critical reading skills to protect our democracy in the age of Trump, filter bubbles, and fake news.

This is just a start. Each of these points can be developed, and there are many more to make. My purpose here is to shift the dialogue on the value of the humanities from utility in cultivating empathy and emotional character to real economic and social impact. The humanities are worth fighting for.

 

*For those unaware, Clayton Christensen coined the term disruptive innovation in The Innovator’s Dilemma. He contrasted it with sustaining innovation, the gradual technical improvements companies make to a product to meet market and customer demands. Inspired by Thomas Kuhn’s Structure of Scientific Revolutions, Christensen artfully demonstrates how great companies miss out on opportunities for disruptive innovation precisely because they are well run: disruptive innovations seize upon new markets with an unserved need, and only catch up to incumbents because technology can change faster than market preferences and demand. As disruption has crystallized into ideology, people often overlook that most products are sustaining innovations, incremental improvements upon an existing product or market need. It’s admittedly much more exciting to carry out a Copernican revolution, but if we consider that Trump may well be a disruptive innovator, who identified a latent market whose needs were underserved only to topple the establishment, we might sit back, pause, and reconsider our ideological assumptions.

The image is Jacques-Louis David’s The Death of Socrates from 1787. Plato sits at the front with his head down and his legs and arms peacefully and plaintively crossed.