Is AI the Future of Life?

Rita J. King
February 1, 2017

Let's stop and rethink our relationship with artificial intelligence.

What exactly are we creating, humans? And what will our creations illuminate about us?

The Future of Life Institute is a group that includes, among others, Stephen Hawking, Elon Musk and Max Tegmark, with a shared focus on ethical guidelines for the development of AI. This is a subject I've been focused on for years (click here to see me interviewed in Robot Wars, in which I had the last word on whether humans are ready for the responsibility of ushering in the next generation in the evolution of intelligence).

The group recently released a report, and while the principles look reasonable on paper, two of them deeply concern me.

  • 10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • 11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Let's stop for a moment and really use our imaginations to think about what kind of world we will create if intelligent systems, far more intelligent than we can even conceive of, take our human values as the basis for their decisions. Look how we act now. I understand that this group has a specific set of aspirational values in mind, but much like Enron listing "integrity" as a core value, we have to deal with reality, not wishful thinking, when it comes to the AI we create.

Artificial intelligence, in my opinion, isn't artificial at all. The belief that it is highlights a fundamental flaw in our thinking. Everything we create, but particularly autonomous systems, is an extension of the intelligence and creativity that already exist in our human systems. Take a long, hard look at how we apply that intelligence and creativity, and then ask yourself whether we want autonomous systems coded to imitate us before we really understand ourselves. We are so easy to fool. Above all, we fool ourselves into thinking we understand more than we do.

UPDATE: The Asilomar AI Principles, developed at the Asilomar Conference, have been signed by 1,500 people. You can add your signature if you like (or read interviews with signatories to see why they signed). The Future of Life Institute "catalyzes and supports research and initiatives to safeguard life and develop optimistic visions of the future." Developing optimistic visions of the future is a much easier task than developing pragmatic ones, even pragmatic visions that achieve outstanding feats of human accomplishment.
