Notes from Transform.AI

I spent the last few days in Paris at Transform.AI, a European conference designed for C-level executives and managed and moderated by my dear friend Joanna Gordon. This type of high-quality conference approaching artificial intelligence (AI) at the executive level is sorely needed. While there’s no lack of high-quality technical discussion at research conferences like ICML and NIPS, or even at part-technical, part-application, part-venture conferences like O’Reilly AI, ReWork, or the Future Labs AI Summit (which my friends at ffVC did a wonderful job producing), most C-level executives still actively seek to cut through the hype and understand AI deeply and clearly enough to invest in tools, people, and process changes with confidence. Confidence, of course, is not certainty. And with technology changing at an ever-faster clip, the task of running the show while transforming the show to keep pace with the near future is not for the faint of heart.

Transform.AI brought together enterprise and startup CEOs, economists, technologists, venture capitalists, and journalists. We discussed the myths and realities of the economic impact of AI, enterprise applications of AI, the ethical questions surrounding AI, and the state of what’s possible in the field. Here are some highlights.*

The Productivity Paradox: New Measures for Economic Value

The productivity paradox is the term Ryan Avent of The Economist uses to describe the fact that, while we worry about a near-future society where robots automate away both blue-collar and white-collar work, the present economy “does not feel like one undergoing a technology-driven productivity boom.” Indeed, as economists noted at Transform.AI, in developed countries like the US, job growth is up and “productivity has slowed to a crawl.” In his Medium post, Avent shows that economic progress is not a linear substitution equation: automation doesn’t impact growth and GDP by simply substituting the cost of labor with the cost of capital (i.e., replacing a full-time-equivalent employee with an intelligent robot), despite our (likely fear-inspired) proclivity to reduce automation to simple swaps of robot for human. Instead, Avent argues that “the digital revolution is partly responsible for low labor costs” (by opening up the supply of cheap labor via outsourcing or simply better communication), that “low labour costs discourage investments in labour-saving technology, potentially reducing productivity growth,” and that capturing the potential of new technologies like AI costs far more than the capital equipment alone, as it takes substantial investment to get people, processes, and underlying technological infrastructure in place to actually use new tools effectively. There are reasons why IBM, McKinsey, Accenture, Salesforce, and Oracle make a lot of money off of “digital transformation” consulting practices.

The takeaway is that innovation and the economic impact of innovation move in syncopation, not in tandem. One consequence of this syncopation is shortsightedness, the “I’ll believe it when I see it” logic we also see from climate change skeptics who refuse to open their imagination to any consequences beyond their local experience. A second consequence is the overly simplistic rhetoric of technocratic futurism, which is also hard to swallow because it does not adequately account for the subtleties of human and corporate psychology that are the cornerstones of adoption. One conference attendee, the CEO of a computer vision startup automating radiology, commented that his firm can produce feature advances in its product 50 times faster than the market will be ready to use them. And this lag results not only from the time and money required for hospitals to modify their processes to accommodate machine learning tools, but also from the ethical and psychological hurdles that must be overcome to accommodate less-than-certain results and to accept a system that cannot explain why it arrived at them.

In addition, everyone seemed to agree that the metrics used to account for growth, GDP, and other macroeconomic factors in the 20th century may not be apt for the networked, platform-driven, AI-enabled economy of the 21st. For example, the value search tools like Google add to the economy far exceeds the advertising spend captured in company revenues. Years ago, when I was just beginning my career, my friend and mentor Geoffrey Moore advised me that traditional information-based consulting firms were effectively obsolete in the age of ready-at-hand information (the new problem being the need to erect virtual dams - using natural language processing, recommendation, and fact-checking algorithms - that can channel and curb the flood of available information). Many AI tools effectively concatenate past human capital - the expertise and value of a skilled-services workforce - into a present-day super-human laborer, a laborer who is the emergent whole (so more than the sum of its parts) of all past human work (well, just about all - let’s say normalized across some distribution). This fusion of man and machine**, of man’s past actions distilled into a machine that then works together with present and future employees to ever improve its capabilities, forces us to revisit what were once clean delineations between people, IP, assets, and information systems, the engines of corporations.

Accenture calls the category of new job opportunities AI will unlock The Missing Middle. Chief Technology and Innovation Officer Paul Daugherty and others have recently published an MIT Sloan article that classifies workers in the new AI economy as “trainers” (who train AI systems, curating input data and giving them their personality), “explainers” (who speak math and speak human, and serve as liaisons between the business and technology teams), and “sustainers” (who maintain algorithmic performance and ensure systems are deployed ethically). Those categories are sound. Time will tell how many new jobs they create.

Unrealistic Expectations and Realistic Starting Points

Everyone seems acutely aware of the fact that AI is in a hype cycle. And yet everyone still trusts that AI is the next big thing. They missed the internet. They were too late for digital. They’re determined not to be too late for AI.

The panacea would be something like the upload Keanu Reeves receives in The Matrix: a preprogrammed super-intelligent system you just plug into the equivalent of a corporate brain and, boom, black-belt, karate-style marketing, anomaly detection, recommender systems, knowledge management, preemptive HR policies, compliance automation, smarter legal research, optimized supply chains, etc.

If only it were that easy.

While everyone knows we are in a hype cycle, technologists still say that one of the key issues data scientists and startups face today is unrealistic expectations from executives. AI systems still work best when they solve narrow, vertical-specific problems (which also means startups have the best chance of succeeding when they adopt a vertical strategy, as Bradford Cross eloquently argued last week). And, trained on data and statistics, AI systems output probabilities, not certainties. The adoption of electronic discovery (i.e., the use of technology to automatically classify documents as relevant or not to a particular litigation matter) over the past 20 years has a lot to teach us about the psychological hurdles to adopting machine learning for use cases like auditing, compliance, driving, or accounting. People expect certainty, even if they are deluding themselves about their own propensities for error.*** We have a lot of work to do to disabuse people of their own foibles and fallacies before we can enable them to trust probabilistic systems and partner with them comfortably. That’s why so many advocates of self-driving cars have to spend time educating people about the fatality rates of human drivers. We hold machines to different standards of performance and certainty because we overestimate our own powers of reasoning. Amos Tversky and Daniel Kahneman are must-reads for this new generation (Michael Lewis’s The Undoing Project is a good place to start). We expect machines to explain why they arrived at a given output because we fool ourselves, often by retrospective narration, into believing we are principled in making our own decisions, and we anthropomorphize our tools into having little robot consciousnesses. It’s an exciting time for cognitive psychology, as it will be critical to any future economic growth that can arise from AI.
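
To make the “probabilities, not certainties” point concrete, here is a minimal sketch of my own (not something shown at the conference): a standard classifier exposes a probability rather than a hard verdict. It assumes scikit-learn is available, and the data is synthetic and purely illustrative.

```python
# Minimal illustration: a trained classifier returns probabilities, not certainties.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic, purely illustrative data (a stand-in for, say, documents or claims).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

print(clf.predict(X[:1]))        # hard label, e.g. [1] -- just a thresholded probability
print(clf.predict_proba(X[:1]))  # the underlying confidence, e.g. [[0.17, 0.83]]
```

The second line of output, not the first, is what any process built around one of these systems has to learn to absorb.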

It doesn’t seem possible not to be in favor of responsible AI, and everyone seems to be starting to take it seriously. Conference attendees seemed to agree that there needs to be much more discourse between technologists, executives, and policy makers so that regulations like the European GDPR don’t stymie progress, innovation, and growth. The issues are enormously subtle, and for many we’re only at the point of being able to recognize that there are issues, rather than provide concrete answers that can guide pragmatic action. For example, people love to ponder liability and IP, analytically teasing apart the different loci of agency: the Google or Amazon that offers an open-source library like TensorFlow, the organization or individual upon whose data a tool was trained, the data scientist who wrote the code for the algorithm, the engineer who wrote the code to harden and scale the solution, the buyer of the tool who signed the contract to use it and promised to update the code regularly (assuming it’s not on the cloud, in which case that’s the provider again), the user of the tool, and the person whose life was impacted by consuming the output. From what I’ve seen, we’re so far at the stage of transposing the ML pipeline into a framework for assigning liability. We can make lists and ask questions, but that’s about as far as we get. The rubber will meet the road when these pipelines run up against the existing legal concepts used to think through tort and liability. Solon Barocas and the wonderful team at Upturn are at the vanguard of doing this kind of work well.

Finally, I moderated a panel with a few organizations that are already well underway with their AI innovation efforts. Here we are (we weren’t as miserable as we look!):

Journeys Taken; Lessons Learned Panelists at Transform.AI

The lesson I took away from synthesizing the panelists’ comments is salient: customers and clients drive successful AI adoption efforts. I’ve written on this blog about the complex balance between innovation and application, having seen multiple failed efforts to apply a new technology just because it was possible. A lawyer on our panel discussed how, since the 2009 recession, clients simply won’t pay high hourly rates for services when they can get the same job done at a fraction of the cost at KPMG, PwC, or a technology vendor. Firms have no choice but to change how they work and how they price matters, and AI happens to be the tool that can parse text and crystallize legal know-how. In the travel vertical, efforts to reach customers on traditional channels just don’t cut it in an age when millennials live on digital platforms like Facebook Messenger. And if a chatbot is the highest-value channel, then an organization has to learn how to interface with chatbots. This fueled a top-down initiative to invest heavily in AI tools and talent.

Exactly where to put an AI or data science team to strike the right balance between promoting autonomy, minimizing disruption, and optimizing return varies by organization. Daniel Tunkelang presented his thoughts on the subject at the Fast Forward Labs Data Leadership conference this time last year.

Technology Alone is Not Enough: The End of The Two Cultures

I remember sitting in Pigott Hall on the Stanford campus in 2011. It was a Wednesday afternoon, and Michel Serres, a friend, mentor, and âme soeur (kindred spirit),**** was giving one of his weekly lectures, which, as so few can pull off, elegantly packaged some insight from the history of mathematics in a masterful narrative frame.***** He bade us note the layout of the Stanford campus, with the humanities in the old quad and the engineering school on the new quad. The very topography, he showed, was testimony to what C.P. Snow called The Two Cultures, the fault line between the hard sciences and the humanities that continues to widen in our STEM-obsessed, utilitarian world. It certainly doesn’t help that tuitions are so ludicrously high that it feels irresponsible to study a subject, like philosophy, art history, or literature, that doesn’t guarantee job stability or economic return. That said, Christian Madsbjerg of ReD Associates has recently shown in Sensemaking that liberal arts majors, at least those fortunate enough to enter management positions, end up with much higher salaries than most engineers in the long run. (I recognize that the unfathomable salaries of top machine learning researchers likely undercut this, but it’s still worth noting.)

Can, should, and will the stark divide between the two cultures last?

Transform.AI attendees offered a few points in favor of cultivating a new fusion between the humanities and the sciences and technology.

First, given the emerging interest in the ethics of AI, it will no longer be tenable for non-technologists to claim ignorance of, or an allergy to, mathematical and formal thinking as an excuse not to contribute rigorously to the debate. If people care about these issues, it is their moral obligation to make the effort to get up to speed in a reasonable way. This doesn’t mean everyone becomes literate in Python or active on scikit-learn. It just means having enough patience to understand the concepts behind the math, as that’s all these systems are.

Next, as I’ve argued before, for the many of us who are not coders or technologists, the mental flexibility, creativity, and critical thinking skills afforded by a strong (and they’re not all strong…) humanities education will be all the more valuable as more routine white-collar jobs gradually get automated. Everyone seems to think studying the arts and reading books will be cool again. And within Accenture’s triptych of new jobs and roles, there will be a large role for people versed in ethnography, ethics, and philosophy to define the ethical protocols for using these systems in ways that accord with corporate values.

Finally, the attendees’ reaction to a demo by Soul Machines, a New Zealand-based startup taking conversational AI to a whole new uncanny level, channeled the ghost of Steve Jobs: “Technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.” Attendees paid mixed attention to most of the sessions, always pulled back to the dopamine hit available from a quick look at their cell phones. But they sat riveted (some using their phones to record the demo) when Soul Machines CEO Mark Sagar, a two-time Academy Award winner for his work on films like Avatar, demoed a virtual baby that exhibits emotional responses to environmental stimuli and showed a video clip of Nadia, the “terrifyingly human” National Disability Insurance Scheme (NDIS) virtual agent voiced by Cate Blanchett. The work is really something, and it confirmed that the real magic in AI arises not from the mysteriousness of the math, but from the creative impulse to understand ourselves, our minds, and our emotions by creating avatars and replicas with which we’re excited to engage.

Actress Cate Blanchett as a “trainer” in the new AI economy, working together with Soul Machines.

My congratulations to Joanna Gordon for all her hard work. I look forward to next year’s event!


*Most specific names and references are omitted to respect the protocol of the Chatham House Rule.

**See J.C.R. Licklider’s canonical 1960 essay Man-Computer Symbiosis. Hat tip to Steve Lohr of the New York Times for introducing me to it.

***Stay tuned next week for a post devoted entirely to the lessons we can learn from the adoption of electronic discovery technologies over the past two decades.

****Reflecting on the importance of the lessons Michel Serres taught me is literally bringing tears to my eyes. Michel taught me how to write. He taught me why we write and how to find inspiration from, on the one hand, love and desire, and, on the other hand, fastidious discipline and habit. Tous les matins - every morning. He listed the greats, from Leibniz to Honoré de Balzac to Leo Tolstoy to Thomas Mann to William Faulkner to himself, who achieved what they did by adopting daily practices. Serres popularized many of the great ideas from the history of mathematics. He was criticized by the more erudite of the French Académie, but always maintained his southern soul. He is a marvel, and an incredibly clear and creative thinker.

*****Serres gave one of the most influential lectures I’ve ever heard in his Wednesday afternoon seminars. He narrated the connection between social contract theory and the tragic form in the 17th century with a compact, clever anecdote of a WWII sailor and documentary filmmaker (pseudo-autobiographical) who happens to film a fight that escalates from a small conflict between two people into an all-out brawl in a bar. When making his film, in this illustrative allegory, he plays the tape in reverse, effectively going from the state of nature - a war of all against all - to two representatives of a culture who carry the weight and brunt of war - the birth of tragedy. It was masterful.
