Progress and Relative Definitions

Over the past year, I’ve given numerous talks attempting to explain what artificial intelligence (AI) is to non-technical audiences. I’d love to start these talks with a solid, intuitive definition for AI, but have come to believe a good definition doesn’t exist. Back in September, I started one talk by providing a few definitions of intelligence (plain old, not artificial - a distinction which itself requires clarification) from people working in AI:

“Intelligence is the computational part of the ability to achieve goals in the world.” John McCarthy, a 20th-century computer scientist who helped found the field of AI

“Intelligence is the use of information to make decisions which save energy in the pursuit of a given task.” Neil Lawrence, a contemporary professor at the University of Sheffield

“Intelligence is the quality that enables an entity to function appropriately and with foresight in its environment.” Nils Nilsson, an emeritus professor from Stanford’s engineering department

I couldn’t help but accompany these definitions with Robert Musil’s maxim on stupidity (if not my favorite author, Musil is certainly up there in my top ten):

“Act as well as you can and as badly as you must, but in doing so remain aware of the margin of error of your actions!” Robert Musil, a 20th-century Austrian novelist

There are other definitions of intelligence out there, but I intentionally selected these four because they all present intelligence as related to action, as related to using information wisely to do something in the world. Another potential definition of intelligence would be the ability to make truthful statements about the world, the stuff of the predicate logic we use to say that an X is an X and a Y is a Y. Perhaps sorting manifold, complex inputs into categories - the task of perception, and of the mathematical classifiers that mimic perception - is a stepping stone toward using information to act.

At any rate, there are two things to note.

First, what I like about Musil’s definition, besides the wonderfully deep moral commentary of sometimes needing to act as badly as you must, is that he includes in his definition of stupidity (read: intelligence) a call to remain aware of margins of error. There is no better training in uncertainty than working in artificial intelligence. Statistics-based AI systems (which is to say most contemporary systems) provide approximate best guesses, playing Marco Polo, as my friend Blaise Aguera y Arcas says, until they get close enough for government work. Some systems output maximum-likelihood answers; others (like the probabilistic programming tools my colleagues at Fast Forward Labs just researched) output full probability distributions, with a confidence level attached to each point in the distribution, which we then have to interpret to gauge how much we should rely on the AI to inform our actions. I’ll save other thoughts about the strange, unintuitive nature of thinking probabilistically for another time (hopefully in a future post about Michael Lewis’s latest book, The Undoing Project).
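To make that contrast concrete, here’s a minimal sketch in Python (a toy coin-flip example of my own, not anything from the Fast Forward Labs research) of the difference between a single maximum-likelihood answer and a full posterior distribution:

```python
import numpy as np
from scipy import stats

# Toy data: 10 coin flips, 7 of which came up heads.
flips = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

# A maximum-likelihood system returns a single best guess for p(heads).
p_mle = flips.mean()  # 0.7

# A probabilistic system returns a full distribution over p(heads).
# With a uniform Beta(1, 1) prior, the posterior is Beta(1 + heads, 1 + tails).
heads = flips.sum()
tails = len(flips) - heads
posterior = stats.beta(1 + heads, 1 + tails)

# The distribution carries its own uncertainty, e.g. a 90% credible interval.
low, high = posterior.ppf([0.05, 0.95])
print(f"Point estimate: {p_mle:.2f}")
print(f"90% credible interval: [{low:.2f}, {high:.2f}]")
```

The point estimate alone hides how wide that interval still is after only ten flips; the distribution is what we have to interpret before deciding how far to trust the system.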

Second, these definitions of intelligence don’t help people understand AI. They may be a step above the buzzword junk that litters the internet (all the stuff about pattern-recognition magic that will change your business, which leads people outside the field to believe that all machine learning is unsupervised, when unsupervised learning is in fact an early and active area of research), but they don’t leave audiences feeling like they’ve learned anything useful and meaningful. I’ve found it’s more effective to walk people through some simple linear or logistic regression models to give them an intuition for what the math actually looks like. They may not leave with their minds blown by the possibilities, but they do leave with the confident clarity of having learned something that makes sense.
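For a sense of what that walk-through looks like, here’s a minimal logistic regression sketch (the spam-filter framing, the features, and the weights are all invented for illustration):

```python
import numpy as np

# A logistic regression is just a weighted sum pushed through a squashing
# function: p(y = 1 | x) = 1 / (1 + exp(-(w·x + b))).
def predict_proba(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))

# Invented example: is an email spam, given two hand-built features
# (number of exclamation marks, and whether the sender is known, 0/1)?
w = np.array([0.9, -2.0])  # weights: exclamation marks push up, known sender pushes down
b = -0.5                   # intercept

email = np.array([4, 0])   # 4 exclamation marks, unknown sender
print(f"p(spam) = {predict_proba(email, w, b):.2f}")  # ~0.96
```

That’s the whole model: a weighted sum and a squashing function. Audiences can hold onto that in a way they can’t hold onto “pattern recognition magic.”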

As it feels like a fruitless task to actually define AI, I instead like to start my talks (as does my colleague Hilary Mason, who used this one first) with a teaser definition to get people thinking:

“AI is whatever we can do that computers can’t…yet.” Nancy Fulda, a science fiction writer, on Writing Excuses

This definition doesn’t do much to help audiences actually understand AI either. But it does help people understand why it might not make sense to define a given technology category - especially one advancing so quickly - in the first place. Indeed, any attempt to list the specific things AI systems can and cannot do would eventually - potentially even quickly - be outdated. AI, as such, lies within the horizon of near-future possibility. Go too far ahead and you get science fiction. Go even further and you get madness or stupidity. Fall too far behind and you get plain old technology. Self-driving cars are currently an example of AI because we’re just about there. AlphaGo is an example of AI because it came quicker than we thought. A system built on a statistical language model that’s no longer cutting edge may be AI to the user but plain old data science to the builder: for the user it’s on the verge of the possible, while for the builder it’s behind the curve of the possible. As Gideon Lewis-Kraus astutely observed in his very well written feature on Google’s new translation technology, Google Maps would have seemed like magic to someone in the 1970s even though it feels commonplace to us today.

So what’s the point? Here’s a stab. It can be challenging to work and live in a period of instability, when things seem to change faster than definitions - and corollary social practices like policies and regulations - can keep up. I personally like how it feels to work in a vortex of messiness and uncertainty (despite my anxious disposition). I like it because it opens up the possibility for relativist non-definitions to be more meaningful than predicate truths, the possibility of realizing that the very technology I work on is best defined within the relative horizons of expectation. And I think I like that because (and this is a somewhat tired maxim, but hey, it still feels meaningful) it’s the stuff of being human. As Sartre said, and as Heidegger said before him, we are beings for whom existence precedes essence. There is no model of me or you sitting up there in the Platonic realm of forms that gets realized as we live in the world. Our history is undefined, leading to all sorts of anxieties, worries, fears, pain, suffering, all of it, and all this suffering also leads to the attendant feelings of joy, excitement, wonder (scratch that, as I think wonder is aligned with perception), and then we look back on what we’ve lived and the essence we’ve become, and it feels so rich because it’s us. Each of us is Molly Bloom, able to say yes yes and think back on Algeciras, not necessarily because Leo is the catch of the century, but because it’s with him that we’ve spent the last 20 years of our lives.

The image is Paul Klee’s Angelus Novus, which Walter Benjamin described as “an angel looking as though he is about to move away from something he is fixedly contemplating,” an angel looking back at the chain of history piling ruin upon ruin as the storm of progress hurls him into the future. 
