We have a lot to think about when it comes to keeping up with advances in technology, but what if we have our priorities wrong? What if we started with imagination first?
I recently gave a talk at Samsung on the subject of Imagination-Powered Technology. This post is an edited, condensed version of that event. You can see the archived livestream with provocative questions from the audience here.
Everyone is talking about AI. The term has become a catch basin for all technologies that are perceived as artificial and intelligent. Instead, I want to challenge you to think about AI as Applied Imagination. I will describe what Applied Imagination is and how it can be practiced.
If we think of intelligence as artificial, we give ourselves permission to outsource our ethics to machines and software, which can be very dangerous. How can Mark Zuckerberg say that he couldn’t possibly have imagined the impact that Facebook would have on the election? He can say it because we are not great at thinking about the unintended consequences of our actions.
The term artificial intelligence also enables us to think of technology as something separate from us. And right now, it is, though that feeling we get when our phone batteries die reveals that the separation is, increasingly, an illusion.
Technology used to be a differentiator. For many companies that still build their own technology, it still is. But increasingly, you don’t have to build your own. You can buy it. So what is the differentiator? At this point, it’s really just imagination.
Along with founder James Jorasch, who is an inventor, investor and entrepreneur, I direct a strategic consultancy based in Manhattan. As an inventor, his specialty is innovation, and as a futurist, my specialty is imagination. At Science House, we say innovation without imagination is directionless, and imagination without innovation is philosophy. We are always struggling to find the pragmatic sweet spot between the two.
When Brian Reich interviewed me for his book The Imagination Gap, I was at the tail end of the interview process, and he had a lot of his research done. He told me that I was the first person who talked about imagination as a tangible, practical skill. I asked him what people said, and I wasn’t surprised by the answer. Imagination is indefinable. It’s something creative people do. It’s daydreaming. It’s letting your mind wander. And those people are correct. Those are all things that our imaginations can do. Our imaginations can also conjure many dark visions. But that’s not how I define Applied Imagination.
Everyone is creative. Creativity is everywhere. The only difference is how people tap into it.
Think about a dial between two extremes: Fantasizers and Followers.
Fantasizers have wild imaginations. They can conjure up anything and they love to daydream. And there’s something beautiful about that. But fantasizers aren’t applying their imaginations to a specific problem, the way entrepreneurs need to do in order to stay in business. They don’t necessarily care what’s feasible or know how to execute, and sometimes they don’t think it’s fair that they should even be expected to. On the other side are Followers. Followers are people who like a path cut for them. They go with the herd. And they also aren’t applying their imaginations.
Applied imagination is between these two extremes. It is pragmatic. It is about thinking your way through a problem and sticking to what I call the tedium of creativity to make it real.
When you know you need to get from one place to another, Applied Imagination is the best way to get you there. I have a mental map I use. I want to share it with you.
For the sake of illustration, think of a path from where you are now to where you want to end up. From Point A to Point B. It could be from mind to market, or from the beginning of a massive software project to the end, or from your first to last day at university. The goal is to hit your target, even if your target changes along the way.
Along that path, some elements are tangible and others are nebulous. The tangible elements are easiest for us to understand because they are familiar. Unfortunately, those familiar concepts are sometimes outdated, but we still prioritize them because they look and feel as if they belong there. The nebulous elements are harder to see and understand. Because of this, we gravitate to the familiar, often to our own detriment, but it feels right because it makes sense.
The trick is to question assumptions about the tangible elements and learn more about the nebulous elements to constantly reprioritize your focus areas and update your thinking.
When I first started developing my framework around imagination more than a decade ago, it was because I noticed that my clients, mostly leadership teams across industries, all seemed to be having the same problem.
They all wanted to jump straight out of the Industrial Era and land in the Intelligence Era. The transition isn’t easy, though, because there are ten thousand little hooks in our brains. Leaving an era isn’t as simple as moving into the future, because we have been trained in outdated ways of thinking, being and working.
The Industrial Era is particularly sticky because it was very easy for our brains to understand. Engines, looms, factories, ships, contributing to the whole one piece at a time as the conveyor belt moved along. People punched in and punched out.
In the Intelligence Era, by contrast, there’s no clear line that tells us when we’re at work and when we’re not. The products we create and consume are far more complex, and not at all easy to visualize. You might not have understood how a combustion engine worked during the Industrial Era, but you could picture the engine and easily grasp what the machine did. In the Intelligence Era, very few people truly understand how algorithms work, to use one of many examples. Even software engineers have a hard time understanding how software architecture works now that companies are using new development methodologies to create software.
One of the most entrenched hooks remaining in the brains of modern companies is the idea that people need to work faster and faster. The conveyor belt is moving at maximum speed for most humans. The key now is to work smarter. But how?
To help create a period of transition in between, I invented a concept called the Imagination Age. You can read more about it in other places, including here and here, and in these books. Here’s a list of Principles of Applied Imagination.
Imagination is necessary for working smarter but also for making sense of what’s happening in the world.
How many of you have heard of Sophia the robot? Just like Amazon’s Alexa, she doesn’t need to have a gender, a point Sophia herself made during an interview with Andrew Ross Sorkin in Saudi Arabia, where she became the first robot granted citizenship. This is not a woman. This is software and hardware. In another video, Hot Robot at SXSW Says She Wants to Destroy Humans, she does exactly that. Same Sophia.
How many of you watched Google DeepMind’s AlphaGo beat the human Go champion Lee Sedol? Go isn’t a big deal in the United States, but it is in Korea. The games took place in Seoul, and I stayed up all night to watch them. The excellent documentary AlphaGo shows the story behind the story. Do you know what Lee Sedol said? He learned from AlphaGo the art of the possible in his own game, a game that he, as a human, had dedicated his life to mastering. This is what I mean when I say we have hooks in our brains. He learned from other humans, with their limited understanding of their own field, just as we learn from our predecessors how to live and work.
Now DeepMind has moved on to other things, like creating AI that has imagination. DeepMind researchers created what they call imagination-augmented agents, or I2As, which have a neural network trained to extract any information from the environment that could be useful in making decisions later on. These agents can create, evaluate and follow through on plans. To construct and evaluate future plans, the I2As “imagine” actions and outcomes in sequence before deciding which plan to execute. Google has even created AI that designs its own neural networks, which have outperformed comparable human-designed models on certain tasks.
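DeepMind’s actual I2A architecture uses learned neural environment models, but the core loop it describes, imagine candidate plans, evaluate their outcomes, then act, can be sketched in a few lines. The toy Python below is not DeepMind’s method; every name and number in it is invented for illustration. The agent “imagines” every short action sequence through a model of its world, scores the imagined futures, and executes only the first step of the best plan before replanning:

```python
from itertools import product

# Toy illustration of the "imagine, evaluate, act" loop behind
# imagination-augmented planning. All names and values are invented;
# a real I2A learns its environment model with a neural network.

TARGET = 7             # state the agent wants to reach
ACTIONS = (-1, 0, +1)  # move left, stay, move right
HORIZON = 3            # how many steps ahead the agent "imagines"

def model(state, action):
    """A perfect environment model; an I2A would learn this from data."""
    return state + action

def imagine(state, plan):
    """Roll a candidate plan forward through the model, without acting."""
    states = []
    for action in plan:
        state = model(state, action)
        states.append(state)
    return states

def cost(states):
    """Score an imagined future: total distance from the target."""
    return sum(abs(TARGET - s) for s in states)

def choose_action(state):
    """Evaluate every imagined plan, then execute only its first step."""
    best_plan = min(
        product(ACTIONS, repeat=HORIZON),
        key=lambda plan: cost(imagine(state, plan)),
    )
    return best_plan[0]

state = 0
trajectory = [state]
for _ in range(10):
    state = model(state, choose_action(state))
    trajectory.append(state)

print(trajectory)  # [0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7]
```

The design point is the separation of imagining from acting: plans are simulated and discarded cheaply inside the model, and only the single best next action touches the real world.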
We debate about whether it might be possible to create an AI that can think for itself, like a human. Can humans think for themselves? We are so much more predictable than we think we are.
All of this creates an uncomfortable awareness that we are not maximizing our potential as human beings. Some people have an apocalyptic vision of robots stealing our jobs. Others have a utopian vision of robots finally giving us the leisure that we deserve. But are we even equipped to give ourselves a sense of purpose to keep ourselves occupied? Even in the utopian vision, we still have shortcomings as human beings. But I think we can fix that. No matter what happens in the future, a golden period of humanity is upon us right now: we can put our imaginations first and ask ourselves what kind of companies and organizations we can create that connect humanity. I don’t mean to serve better ads. I mean to really connect us and help figure out what defines us as human beings.
Elon Musk and Stephen Hawking and a bunch of other smart people got together and tried to come up with principles for creating AI that serves humans instead of the other way around. These Asilomar Principles are a great starting point. But I question #10 and #11, both of which are focused on human values.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
Let's stop for a moment and really use our imaginations to think about what kind of world we will create if intelligent systems, far more intelligent than we are even capable of intellectualizing, take our human values as the basis for their decisions. Look how we act now. I understand that this group has a specific set of aspirational values in mind, but much like Enron having "integrity" as a core value, we have to deal with reality, not wishful thinking, when it comes to the AI we create.
In Yuval Noah Harari’s excellent book, Sapiens, the author points out that we didn’t domesticate wheat. Wheat domesticated us.
So will we learn from technology what it means to be human? Above all, no matter what future we find ourselves in, we need imagination for the biggest task ahead: finding purpose and meaning in this world. Either way, we have a weird road ahead. There’s no downside to developing your imagination. We will all face the need for purpose, and the need to help ourselves and others achieve it.