These days, innovation is not an opportunity but a mandate. By innovate, I mean apply technology to do old things differently (faster, cheaper, more efficiently) or do new things that were not previously possible. There are many arguments one could put forth to critique our unshaken faith in progress and growth. Let’s table these critiques and take it as given that innovation is a good thing. Let’s also restrict our scope to enterprise innovation rather than broad consumer or societal innovation.
I’ve probably seen over 100 different organizations’ approaches to “digital” innovation over the past year in my role leading business development for Fast Forward Labs. While there are general similarities, often influenced by the current popularity of lean startup methodology or design thinking, no two organizations approach innovation identically. Differences arise from the simple fact that organizations are made of people; people differ; individual motives, thoughts, and actions combine in complex ways to create emergent behavior at the group level; amazingly interesting and complex things result (it’s a miracle that a group of even 50 people can work together as a unit to generate value that greatly exceeds the aggregate value of each individual contributor); past generations of people in organizations pass down behavior and habits to future generations through a mysterious process called culture; developments in technology (among other things) occur outside of the system* that is the organization, and then the organization does what it can to tweak the system to accept (or reject) these external developments; some people feel threatened and scared, others are excited and stimulated; the process never ends, and technology changes faster than people’s ability to adopt it…
Observing all these environments, and observing them with the keenness and laser-focused attention that only arises when one wants to influence their behavior, when one must empathize deeply enough to become the person one is observing, to adopt their perspective nearly completely - their ambitions, their fears, their surprises, their frustrations - so as to be able to then convince them that spending money on our company’s services is a sound thing to do (but it’s not only mercenary: people are ends in themselves, not a means to an end. Even if I fail to sell, I am always motivated and energized by the opportunity to get to know yet another individual’s mind and heart), I’ve come to accept a few axioms:
- Innovation is hard. Way harder than all the meaningless marketing makes it seem.
- There is no one way to innovate. The approach depends on organizational culture, structure, history, product, and context.
- Inventions are not solutions. Just because something is technically possible doesn’t mean people will appreciate its utility clearly enough to want to change their habits.
- Most people adopt new technologies through imitation, not imagination, often following the normal distribution of Geoffrey Moore’s technology adoption lifecycle.
- We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. (Bill Gates)
Now, research in artificial intelligence is moving at a fast clip. At least once a month some amazing theoretical breakthrough (like systems beating Super Smash Bros. or Texas Hold’em poker champions) is made by a university computer science department or by one of the well-funded corporate research labs that have virtually replaced academia (this is worrisome for inequality of income and opportunity, and worthy of further discussion in a future post). Executives at organizations that aren’t Google, Facebook, Amazon, Apple, or Microsoft see the news and ask their tech leadership whether any of this is worth paying attention to: is it just a passing fad for geeks, or is it poised to change the economy as we know it? If they do take AI seriously, the next step is to figure out how to apply it in their businesses. That’s where things get interesting.
There are two opposite ways to innovate with data and algorithms. The first is a top-down approach that starts with new technological capabilities and looks for ways to apply them. The second is a bottom-up approach that starts with an existing business problem or pain point and looks for possible technical solutions to that problem.
Top-down approaches to innovation are incredibly exciting. They require unbridled creativity, imagination, and exploration, coupled with the patience and diligence to go from idea to realized experiment, complete with whatever proof is required to convince others that a theoretical milestone has actually been achieved. Take computer vision as an example. Just a few years ago, the ability of computers to automatically recognize the objects in images - to be shown a picture without any metadata and say, That’s a cat! That’s a dog! That’s a girl riding a bicycle in the park on a summer day! (way more complicated technically) - was mediocre at best. Researchers achieved okay performance using classic classification algorithms, but nothing to write home about. But then, with a paradigm shift worthy of a Kuhnian scientific revolution, the entire community standardized on a different algorithmic technique, one that had been shunned by most of the machine learning community for some time. These artificial neural networks, whose underlying architecture is loosely inspired by the synapses that connect neurons in our brain’s visual cortex, did an incredible job transforming image pixels into series of numbers called vectors that could then be twisted, turned, and processed until they reliably matched linguistic category labels. The results captivated popular press attention, especially since they were backed by large marketing machines at companies like Google and Facebook (would AI be as popular today if research labs were still tucked away in academic ivory towers?). And since then we at Fast Forward Labs have helped companies build systems that curate video content for creatives at publishing houses and identify critical moments in surgical processes to improve the performance of remote operators of robotic tools.
That said, many efforts to go from capability to application fail. First, most people struggle to draw analogies between one domain (recognizing cats on the internet) and another (classifying whether a phone attempting to connect to wifi is stationary or moving in a vehicle). The analogies exist at a level of abstraction most people don’t think at - you have to view the data as a vector and think about the properties of the vector to evaluate whether the algorithm would be a good tool for the job - one that seems far removed from our standard perceptual habits. Second, it’s technically difficult to scale a mathematical model to actually work for a real-world problem, and many data scientists and researchers lack the software engineering skills required to build real products. Third, drawing on Geoff Moore’s adoption lifecycle, many people lack the patience to work with early technologies that don’t immediately do what they’re supposed to do. Applying a new technology often requires finding a few compassionate early adopters who are willing to give feedback to improve a crappy tool. Picking the wrong early users can kill a project. Fourth, risk-averse organizations like to wait until their peers have tested and tried a new thing before they go about disrupting day-to-day operations. As they say, no one gets fired for hiring McKinsey or IBM. And finally, people are busy. It’s hard to devote the time and attention needed to understand a new capability well enough to envision its potential applicability. Most people can barely keep up with their current workload, and they deprioritize possibility to keep up with yesterday’s tasks.
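The “data as a vector” abstraction that this kind of analogy hinges on can be made concrete with a toy sketch. All the data and the nearest-centroid classifier below are invented for illustration, not taken from any real system; the point is that once a tiny image patch and a series of wifi signal-strength readings are each encoded as a list of numbers, the exact same classifier code handles both domains.

```python
# Toy sketch: two very different domains reduce to vectors, so one
# classifier (a simple nearest-centroid rule) serves both.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(sample, centroids):
    """Return the label whose centroid lies closest to the sample."""
    return min(centroids, key=lambda label: distance(sample, centroids[label]))

# Domain 1: 4-pixel grayscale patches, labeled bright vs. dark.
image_train = {"bright": [[0.9, 0.8, 0.9, 1.0], [0.8, 0.9, 1.0, 0.9]],
               "dark":   [[0.1, 0.0, 0.2, 0.1], [0.0, 0.1, 0.1, 0.2]]}

# Domain 2: 4 successive wifi signal readings (dBm), stationary vs. moving phone.
wifi_train = {"stationary": [[-40, -41, -40, -39], [-42, -41, -42, -41]],
              "moving":     [[-40, -55, -70, -85], [-45, -60, -72, -88]]}

predictions = []
for train, sample in [(image_train, [0.85, 0.9, 0.95, 0.9]),
                      (wifi_train, [-41, -58, -71, -86])]:
    centroids = {label: centroid(vs) for label, vs in train.items()}
    predictions.append(classify(sample, centroids))

print(predictions)  # ['bright', 'moving']
```

Nothing about `classify` knows whether its input is pixels or signal strengths; seeing the shared vector representation is precisely the abstraction leap most people never make.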
Which is why it can often be more effective to approach innovation by solving a real problem instead of inventing a problem for a solution. This bottom-up approach is more closely aligned with design thinking. The focus is on the business: what people do, how they do it, and where technology may be able to help them do it better. The approach works best when led by a hybrid business-technical person whose job is to figure out which problems to solve - based on predicted impact and relatively low technical difficulty, so progress can be made quickly - and to muster the right technology to solve them, whether by building something internally or buying a third-party product. People tend to have more emotional skin in the game with innovation driven by problem solving because they feel greater ownership: they know the work intimately, and may be motivated by the recognition of doing something better and faster. The risk, particularly when using technology to automate a current task, is that people will fear changing their habits (although we are amazingly adaptable to new tools) or, worse, fear being replaced by a machine.
The core difficulty with innovating by solving problems is that the best solution to the most valuable business problem is almost always technically boring. Technical research teams want to explore and make cool stuff; they don’t want to spend their time and energy building a model, using math they learned in undergrad, for a problem that makes their souls cringe. While it seems exciting to find applications for deep learning in finance, for example, linear regression models are likely the best technical solution for 85% of problems. They are easy to build and interpretable, making it easier for users to understand why a tool outputs the answers it does, easier for developers to identify and fix bugs, and easier for engineers to deploy the model on existing commodity hardware without having to invest in new servers with GPUs. The other risk of starting with business problems is that it can lead to mediocre solutions. Lacking awareness of what’s possible, teams may settle for what they know rather than what’s best. If you’ve never seen an automobile, you’ll count how many horses you need to make it across the country faster.
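To make the “boring but effective” point concrete, here is a minimal sketch of simple linear regression fit in closed form. The account-history data is invented purely for illustration; what matters is that the whole model is two coefficients you can read off directly, computed with undergrad math and no special hardware.

```python
# Closed-form simple linear regression: the "boring" model that is often
# the right answer. Data below is hypothetical, for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical data: years of account history vs. annual spend (thousands).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)
predicted = slope * 6 + intercept  # extrapolate one year ahead
print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```

A user can sanity-check the slope against domain knowledge (“each year of history adds about two thousand in spend”), a developer can trace any prediction back through two arithmetic operations, and it deploys anywhere Python runs - exactly the interpretability and operational simplicity the paragraph describes.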
As the title suggests, therefore, the best approach is to treat innovation as a dialectic capable of balancing creative capabilities with pragmatic solutions. Innovation is a balancing act uniting contradictory forces: the promise of tomorrow against the pull of today, the whimsy of technologists against the stubbornness of end users, the curiosity of potential against the calculation of risk. These contradictions are the growing pains of innovation, and the organizations that win are those that embrace them as part of growing up.
*Even an internal R&D department may be considered outside the system of the organization, particularly if R&D acts as a separate and independent unit that is not integrated into day-to-day operations. What an R&D team develops still has to be integrated into an existing equilibrium.