With all technology, there is a good side, a bad side, and a stupid side that you weren’t expecting. “Look at an axe—you can cut a tree down with it, and you can murder your neighbor with it… And the stupid side you hadn’t considered is that you can accidentally cut your foot off with it.”

Margaret Atwood, quoted in The New Yorker, 2017
In 2023, pessimists and optimists argue endlessly about the future of artificial intelligence. But to call yourself a pessimist or an optimist is merely to express a personal feeling; without facts to back it up, you have no argument for or against anything.
And as scenario planners, we know there are no facts about the future.
So much of the talk about the future of AI is speculative, if not intentionally provocative. It ranges from AI improving password security to AI wiping humanity off the face of the earth. That’s quite a range.
At times it feels as though the conversation is dominated by pessimists, perhaps because the size of the change we see coming is so immense that it intimidates us—“a kind of intellectual vertigo,” as The Economist put it recently. Reflexively, we go into a defensive crouch.
The human tendency when confronted with something brand new in the world is to compare it to something familiar. That is how we make sense of the new.
When TV appeared as a consumer phenomenon after the Second World War, it took a while to understand its potential as more than just radio with pictures. When Covid-19 arrived, the world responded as if it were just another influenza outbreak. Wrapping our minds around its novel nature took time.
We are doing the same thing with our thinking about artificial intelligence. The term was coined in 1956, and we still think about it in the categories we knew then, still fear the things we feared then. When we imagine AI’s applications, we imagine it automating the tasks we did then: reading medical records, managing a factory floor, teaching a class of students, talking to customers, driving a car.
The problem with putting new things in old boxes, and with using old language to think about those boxes, is that we set ourselves up to be blindsided by change.
Dress Rehearsing AI Scenarios
An article of faith in scenario planning is that transformational change in any industry commonly comes from outside what we imagine is a stable operating environment. That will happen with AI. For better and worse, it will be bigger than our boxes.
It came as a surprise to many, for example, that a sticking point in this year’s Hollywood writers’ strike is the insertion of a provision in the standard contract protecting writers from competition from AI. Who imagined that conversation would arrive so soon?
With respect to AI scenarios, the biggest challenge might be staying alert to the technology’s downside without being blind to its upside—an upside we cannot imagine before the technology becomes more deeply rooted in our lives.
- A team from Carnegie Mellon University’s Robotics Institute has developed an autonomous robot to control the spread of spotted lanternflies. It uses a deep-learning model refined on an augmented image dataset created from 700 images of spotted lanternfly egg masses. The lanternflies probably never saw it coming.
- Eve, a system created by a consortium of European researchers, is using artificial intelligence to dramatically accelerate drug discovery by exploiting AI’s high-throughput characteristic. Eve quickly established that triclosan, an ingredient commonly found in toothpaste, can fight malaria parasites by undermining their ability to evolve resistance to conventional drugs. More recently, Eve turned its attention to the reproducibility crisis in cancer research with the goal of radically improving the reliability of knowledge.
- Recently, former Google CEO Eric Schmidt said bluntly that “the 2024 elections are going to be a mess” because of real-seeming news generated by artificial intelligence. Schmidt might be right. The power of scenario planning as we practice it is that to conceive a problem is to begin addressing it. The problem Schmidt identifies is already being attacked with a combination of blockchain and AI technologies that automate the detection of disinformation.
The point of these optimistic examples is not to be reassuring. The point is to challenge our reflex to imagine the worst when confronted with the unfamiliar.
The summer of 2023 is the wrong time to make confident predictions about the meaning of AI for our futures. We are still learning what to do with it, and that places limits on our imaginations.
This is not to say we needn’t get out in front of AI’s risks. It is to acknowledge that we do not yet fully understand what those risks will be.
This is not a matter of being optimistic or pessimistic. It is a matter of having a rigorous imagination in considering future AI scenarios.