FSG Blog
June 27, 2023

AI Scenarios: Getting Beyond Our Reflex to Imagine the Worst

Kevin McDermott
FSG Principal

With all technology, there is a good side, a bad side, and a stupid side that you weren’t expecting. “Look at an axe—you can cut a tree down with it, and you can murder your neighbor with it… And the stupid side you hadn’t considered is that you can accidentally cut your foot off with it.” (Margaret Atwood, quoted in The New Yorker, 2017)

In 2023, pessimists and optimists argue all day long about the future of artificial intelligence. But to call yourself a pessimist or an optimist is merely to express a personal feeling. Without facts to back it up, you have no argument for or against anything.

And as scenario planners we know there are no facts about the future.

So much of the talk about the future of AI is speculative if not intentionally provocative. It ranges from AI improving password security to AI wiping humans off the face of the earth. That’s quite a range.

At times it feels as though the conversation is dominated by pessimists, perhaps because the size of the change we see coming is so immense that it intimidates us—“a kind of intellectual vertigo,” as The Economist put it recently. Reflexively, we go into a defensive crouch.

The human tendency when confronted with something brand new in the world is to compare it to something familiar. That is how we make sense of the new.

When TV appeared as a consumer phenomenon after the Second World War, it took a while to understand its potential as more than just radio with pictures. When Covid-19 arrived, the world responded as if it were just another influenza outbreak. Wrapping our minds around its novel nature took time.

We are doing the same thing with our thinking about artificial intelligence. The term was coined in 1956, and we still think about it in the categories we knew then, still fear the things we feared then. When we imagine AI’s applications, we imagine it automating the tasks we did then: reading medical records, managing a factory floor, teaching a class of students, talking to customers, driving a car.

The problem with putting new things in old boxes, and with using old language to think about those boxes, is that we set ourselves up to be blindsided by change.

Dress Rehearsing AI Scenarios

An article of faith in scenario planning is that transformational change in any industry commonly comes from outside what we imagine is a stable operating environment. That will happen with AI. For better and worse, it will be bigger than our boxes.

It came as a surprise to many, for example, that a sticking point in this year’s Hollywood writers’ strike is the insertion of a provision in the standard contract protecting writers from competition from AI. Who imagined that conversation would arrive so soon?

With respect to AI scenarios, the biggest challenge might be staying alert to the technology’s downside without being blind to its upside—an upside that we cannot imagine before the technology becomes more deeply rooted in our lives.

For example:

  • A team from Carnegie Mellon University’s Robotics Institute has developed an autonomous robot to control the spread of spotted lanternflies. It uses a deep-learning model refined on an augmented image dataset created from 700 images of spotted lanternfly egg masses (a sketch of how augmentation stretches a small dataset follows this list). The lanternflies probably never saw it coming.
  • Eve, a system created by a consortium of European researchers, is using artificial intelligence to dramatically accelerate drug discovery by exploiting AI’s high-throughput characteristic. Eve quickly established that triclosan, an ingredient commonly found in toothpaste, can fight malaria parasites by undermining their ability to evolve resistance to conventional drugs. More recently, Eve turned its attention to the reproducibility crisis in cancer research with the goal of radically improving the reliability of knowledge.
  • Recently, former Google CEO Eric Schmidt said bluntly that “the 2024 elections are going to be a mess” because of real-seeming news generated by artificial intelligence. Schmidt might be right. The power of scenario planning as we practice it is that to conceive a problem is to begin addressing it. The problem Schmidt identifies is already being attacked with combinations of blockchain and AI technologies that automate disinformation detection.
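
To make the first example concrete: a few hundred photographs can be stretched into a workable training set through data augmentation, in which every image is randomly perturbed each time the model sees it. Below is a minimal sketch in Python using torchvision; the particular transforms, image size, and directory path are illustrative assumptions, not details of the CMU team’s published pipeline.

```python
# Minimal data-augmentation sketch (illustrative; the transforms and
# paths are assumptions, not the CMU Robotics Institute's pipeline).
import torch
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),   # vary framing
    transforms.RandomHorizontalFlip(),                     # mirror left/right
    transforms.ColorJitter(brightness=0.3, contrast=0.3),  # vary lighting
    transforms.RandomRotation(15),                         # vary camera angle
    transforms.ToTensor(),
])

# Each training epoch sees a freshly perturbed copy of every image, so
# roughly 700 photographs can stand in for a much larger dataset.
dataset = datasets.ImageFolder("egg_mass_photos/", transform=augment)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```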

The point of these optimistic examples is not to be reassuring. The point is to challenge our reflex to imagine the worst when confronted with the unfamiliar.

The summer of 2023 is the wrong time to make confident predictions about the meaning of AI for our futures. We are still learning what to do with it, and that places limits on our imaginations.

This is not to say we needn’t get out in front of AI’s risks. It is to acknowledge that we do not yet fully understand what those risks will be.

This is not a matter of being optimistic or pessimistic. It is a matter of having a rigorous imagination in considering future AI scenarios. 


13 thoughts on “AI Scenarios: Getting Beyond Our Reflex to Imagine the Worst”

  1. Kevin, exceptionally well thought out piece. I enjoyed the read. I’d like to add two thoughts.

    1. There are lots of “The Future of AI” scenarios out there and, as you point out, they tend to be flavored as optimistic or pessimistic. The better approach is to explore the future of AI not as the driving definitional source of scenarios, but as simply one component of broadly conceived societal scenarios.
    2. Back in the late 1960s and early 70s many were focused on the future of lasers. BUT, “lasers” had no say, no input into that discussion. AI is different, isn’t it? For the first time, a technology may become part of the conversation about its role in society!

    • “…not as the driving definitional source of scenarios, but as simply one component of broadly conceived societal scenarios.”

      That hits the nail on the head, Tom.
      We find it all the time in scenario planning, don’t we: this tendency to treat some large phenomenon—social, technological, political—as if it existed by itself, without interaction with a complex of other drivers.

      That is how all of us tend to speak of AI right now, as if it were a frictionless force for change. I can’t think of any force in history, even one as immense as AI appears to be, that existed in a context all by itself. That would approach the nature of divinity.

      Intrigued as I am by AI I do not think it is divine.

  2. Artificial Intelligence is to Human Intelligence as Artificial Sweeteners are to Honey, Artificial Flowers are to Bridal Bouquets or Artificial Insemination is to Sex. AI fails the Turing Test hands down due to the simple fact that it is too quick, too syntactically correct, and too complete in its responses as compared to any human being one might hope to text with. In considering future AI scenarios we should begin by understanding exactly what it is that we are talking about. And the first step to doing that is to dismiss High Tech’s corporate marketing language and material.
    It seems to me that the technology we are looking at is BSAOS: Bayesian Statistical Analysis On Steroids. Heroically vast amounts of data of all types, most notably language data, are being analyzed for patterns that are then used to formulate probable responses to inquiries. Any serious student of statistics knows how powerful traditional statistics and probability analysis can be; BSAOS is many, many times more impressive. As Kevin points out, numerous very impressive scientific discoveries are being driven by this breakthrough in statistical data processing, with more to come. In short, the computer revolution continues.
    But most of the AI hype has revolved around chatbots, which apply BSAOS to the language that can be sucked out of the Internet to respond credibly to text input in a realistic manner, in some cases dressed up with unique personalities. The results are deceptively impressive in that they are breathtakingly novel: “Any sufficiently advanced technology is indistinguishable from magic.” If, however, one takes the time to chat with, say, ChatGPT, one begins, as a human with all the slow and clumsy characteristics of our intelligence, to sense the limitations intuitively, and the novelty begins to cloy. Bayesian Statistical Analysis On Steroids is extremely impressive and potentially highly powerful, but it is nonetheless Artificial Intelligence.
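
    To make the pattern-analysis point concrete, here is a toy version of statistical next-word prediction: a bigram model that counts which word follows which in a corpus, then samples continuations in proportion to those counts. It is a deliberately crude sketch, orders of magnitude simpler than any real LLM, offered only to illustrate the idea.

    ```python
    # Toy "BSAOS" sketch: tally word-pair frequencies, then generate text
    # by sampling each next word in proportion to the observed counts.
    # (Deliberately crude; real LLMs learn neural representations.)
    import random
    from collections import Counter, defaultdict

    corpus = ("the future is uncertain and the future is unwritten "
              "and the future rewards preparation").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # count observed continuations

    def generate(word, length=6):
        out = [word]
        for _ in range(length):
            options = follows[out[-1]]
            if not options:
                break
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the future is uncertain and the future"
    ```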

  3. I asked ChatGPT what it thought of your analogies, Robert, and here is what it had to say:

    “The analogies you provided draw comparisons between artificial and natural counterparts in different domains. While they all involve the concept of artificial versus natural, it’s important to note that the relationships are not entirely equivalent.

    Artificial Intelligence (AI) is to Human Intelligence: This analogy suggests that AI is an artificial substitute for human intelligence. However, AI is not meant to replicate human intelligence in its entirety. AI systems are designed to perform specific tasks and solve problems using algorithms and computational power. While AI can exhibit intelligent behavior in certain contexts, it does not possess the full range of cognitive abilities, emotions, or consciousness associated with human intelligence.

    In summary, while these analogies highlight the existence of artificial alternatives to natural phenomena, they don’t capture the complete depth and complexity of the relationships. Each analogy represents a different aspect of the artificial versus natural comparison within its respective domain.”

    Don’t shoot the messenger.

  4. This is a useful counterbalance to the more extreme opinions on AI. There is, of course, no single “AI” but rather a very large and geometrically expanding list of current, emerging and potential applications that will range from the prosaic to the transformative. Some will advance the cause of humanity. Some will undermine it. Most perhaps will sit somewhere in between, with good and bad elements that society and governments are going to have to sort out, case by case.

    I end on a hopeful note, in the spirit of Kevin’s blog.

    There have been reports in recent days about the potential for “internet armageddon” as solar storm activity picks up over the coming year. AI won’t prevent this, but USA Today reports that NASA has created a computer model that uses AI and satellite data to predict when and where an impending solar storm could hit. If this is true, power grids will gain valuable time to prepare — and we’ll have at least a 30-minute window to back up our computers. All thanks to AI!

    https://www.usatoday.com/story/news/nation/2023/06/28/nasa-internet-apocalypse-solar-storm-prep/70361827007/

    • Pete’s remark about gaining time to prepare for a challenge like a solar storm made me think about another element missing from the hyperventilation about AI, which is the human resistance to change.

      Politico’s Future Pulse recently profiled the acceleration of investment into health-care AI. Interviewed for the story was Justin Norden, a partner at GSR Ventures who observed that “Health care systems are like nuclear facilities. They don’t want to change. They need to keep running.” It’s not just fear of change, in other words, but the immediate challenges of operating.

      “The baseline stat health care people like to quote is that it takes 17 years for new innovations to become standard of care in health care,” Norden said.

      Artificial intelligence is relentlessly logical. Humans are not. It is part of our charm.

  5. The AI doth protest too much. As far as I can tell, once I get past the self-promoting language, ChatGPT and I are pretty much in agreement. “AI is not meant to replicate human intelligence” any more than any of my artificials are meant to replicate the natural items for which they are named. Instead, it suggests that “AI systems are designed to perform specific tasks and solve problems using algorithms and computational power.” That is again pretty much what my BSAOS was intended to describe.

    Now the high-tech industry has invested many fortunes in developing these LLMs and, being excellent marketers, they seem to believe that the only way they can get a return on their investment is to convince corporations that AI will replace all of their employees at relatively low cost and then solve the resulting homelessness problem by destroying all life on the planet. Simply saying that they have invented a very powerful tool for doing complex statistical inference, one which may very well have a significant impact on many fields of human endeavor, would not be anywhere near as effective a marketing tool and certainly would not have generated the vast ocean of free publicity they are receiving.

    Again, before doing scenarios let us make sure we know what exactly the subject is: algorithms and computational power or magic.

  6. Some probably superfluous additions to this extremely engaging discussion:

    1. AI, as employed in ChatGPT for example, rakes through the entire internet to get up to speed – it’s “Autocorrect on steroids.” So it cannot come up with anything truly novel on its own, only stuff that has already been thought of by someone else. Scenarios, by contrast, are all about imagination, i.e. thinking up stuff that’s never darkened the Interwebs (or anywhere else).

    2. I seem to recall Gödel’s Incompleteness Theorem showing that any consistent mathematics-based system rich enough to describe arithmetic contains true statements it can never prove, so you cannot be sure of the connection between such a system and reality. Turing used a closely related diagonal argument to arrive at his Halting Problem, which says you can never build a general procedure to decide whether a computer program will do what you want it to do (see the sketch at the end of this comment). I realize there are kluges that one can use to make computers (and mathematics) work well enough for our everyday purposes, but these limitations of Turing Machines (a classification which includes all present-day computers) would seem to limit the ultimate autonomy and/or effectiveness of any digital-based “artificial intelligence.” Of course, AI could kill us all anyway, if it is misused the way other human technologies have been.

    3. The brain is not a digital-based computer. Anything that replicates its “intelligence” function could probably not be one either.

    Just some fodder for alternative scenarios… as well as for forthcoming books?
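
    For readers who want point 2 made concrete, here is the skeleton of Turing’s diagonal argument in code form. It is a sketch only: the whole point is that the hypothetical halts oracle can never actually be implemented.

    ```python
    # Sketch of Turing's halting-problem argument. Illustrative only:
    # the point is precisely that `halts` cannot exist in general.

    def halts(program, data):
        """Hypothetical oracle: True iff program(data) eventually halts."""
        raise NotImplementedError("no general implementation is possible")

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about `program`
        # when that program is fed its own source.
        if halts(program, program):
            while True:  # oracle says it halts, so loop forever
                pass
        # oracle says it loops forever, so halt immediately

    # Asking whether contrary(contrary) halts yields a contradiction
    # either way, so no total `halts` function can be written.
    ```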

  7. Inclined to agree with you, Pat. There is a hunger in us to know the future.

    Because AI is rigorously logical (a logic given to it by us, so watch out) we imagine that if one day it can just do the math the sum will tell us what the future holds.

    I am reminded of ‘When We Cease to Understand the World’ by Benjamín Labatut. The book’s heart is the great upheaval in physics that took place between the two world wars, shaking the Newtonian belief that the universe is a machine whose mechanics can, with enough math, be understood.

    That certainty was upended in 1927 by Werner Heisenberg, the father of quantum mechanics, whose uncertainty principle describes a fundamental limit to the precision with which pairs of physical properties, such as a particle’s position and momentum, can be known simultaneously (ΔxΔp ≥ ħ/2), and therefore to how exactly any future state can be predicted from initial conditions.

    This was, said Heisenberg’s mentor Niels Bohr, “the end of determinism.”

    When we run scenario planning workgroups we are at pains to remind clients to be on guard against determinism as they “live” in their scenarios. The world and the people in it are just too gnarly, and always will be. They don’t always do what they should.

    Humans proceed by idiosyncratic logic. We are indescribably less literal than an AI. Living that way kind of works for us.

    But we cling to the idea that somewhere there is certainty if only we could crunch the numbers and nail it all down. We have a sneaking wish that general artificial intelligence might be the tool for doing that.

    Certainty is not a bad thing to wish for. It’s just a waste of time.

  8. Kevin, all good points, however…

    The form of AI that has recently taken the world by storm is many things, but it is not rigorously logical; rather, it is extensively probabilistic, which is perhaps why it seems so intelligent to so many.

    In addition, it is always learning, which is why ChatGPT was opened to the public: as with everything in Silicon Valley, the business model is based on what can be learned about and from its users. What “learning” means for an LLM is that, in Bayesian fashion, all of the expected probabilities are constantly being tweaked based on experience. Decisions are then made based on these vast interconnected networks of probabilities, which NO ONE KNOWS OR UNDERSTANDS: hence the magic of the sufficiently advanced technology.

    In all probabilistic decision making the easy decisions are made when the critical factors are all well within the confidence intervals; and the difficult decisions are made when the critical factors are all out in the tails.

    At FSG we are constantly pushing our clients out into those tails, where decisions are difficult and the logic of the situation does not seem to make sense. Those tails are also places where changes in just one critical factor or dimension can create a significantly different scenario. This dimensional sensitivity out in the tails is also true of all of these new AI programs. Pick the right difficult question, ask your favorite AI program for multiple responses, and very different answers will be produced.
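
    That dimensional sensitivity can be mimicked with a toy model: sample repeatedly from a peaked (easy-question) distribution versus a nearly flat (hard-question) one. The probabilities below are invented purely for illustration, not measured from any real AI system.

    ```python
    # Toy illustration: repeated draws from a peaked ("easy") versus a
    # nearly flat ("hard") answer distribution. Probabilities are invented.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    answers = ["A", "B", "C", "D", "E"]

    easy = [0.90, 0.04, 0.03, 0.02, 0.01]  # one answer dominates
    hard = [0.24, 0.22, 0.20, 0.18, 0.16]  # mass spread toward the tails

    for label, probs in [("easy", easy), ("hard", hard)]:
        draws = rng.choice(answers, size=10, p=probs)
        print(label, list(draws))

    # The easy question comes back with the same answer almost every
    # time; the hard one scatters across all five candidates, much as
    # repeated prompts to a chatbot on a difficult question can.
    ```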

    As Alexander Herzen pointed out in the 19th Century, the probabilistic nature of life is something that we humans find distressing and have never come to terms with. It is perhaps fitting that AI — this triumph of our culture’s rigorously automated logical calculating technology — has produced this enigmatic probabilistic “Intelligence.”

  9. As a scenario planner, I find that these comments inevitably make me think about predictions we have heard that, one day, scenario planning will be accomplished by an AI that can map all the drivers of change and simulate their interactions in a probabilistic, predictive way—“scenario planning in a can,” you might call it. It is an idea that completely misses the genius of pooling human experiences in a scenario-planning exercise.

    We have all been on flights and, looking around, noticed a fellow passenger engaged with a computer game. The point of these games is that they are quite challenging, and the value to the user comes from addressing the challenge and eventually overcoming it. We know what they are doing, but only approximately what they are feeling.

    Of course, any kind of AI tool would complete the game instantly, but that would miss the point.

    Similarly, an AI tool could create a set of scenarios, given appropriate instructions, and could also be trained to use the scenarios to generate insights and strategies. But how would they ultimately be evaluated?

    Only by exposing the material to people whose collective experience and wisdom we value and trust – better still, by getting them to do this as a group, in more or less a scenario workshop. Even then, that only addresses part of the value of the scenario-planning approach – the output of the process – and leaves out the rest, which is the participation in the process that generates a way of thinking. The U.S. Coast Guard’s experience, for example, reported in the Harvard Business Review, was that “once participants begin to view time as a loop, they understood thinking about the future as an essential component of taking action in the present. The scenarios gave them a structure that strengthened their ability to be strategic, despite tremendous uncertainty.”

    • Well said, Gerard.

      Something I’ve not considered when thinking about artificial intelligence and scenario planning is our human capacity to deal with uncertainty. We have been doing so for millennia.

      If the capacity for living in what FSG calls the “uncertainty space” is an evolutionary trait, it is going to come in handy in the 21st Century.

  10. Scenario Planning in a can
    Describe the probable nature and condition of the US Health care industry 5 to 10 years from now assuming the following:
    1. The US economy has become less bureaucratic and more entrepreneurial.
    2. Young people entering the economy are more interested in social service than they are in getting rich.
    3. US politics have become more contentious and less likely to engage in bipartisan compromise.
    4. The US public in general have become less trusting of national institutions and established authorities.

    ChatGPT
    Based on the assumptions provided, the probable nature and condition of the US healthcare industry 5 to 10 years from now would likely undergo significant changes and face several challenges. Here’s a speculative scenario based on the given assumptions:
    1. Shift towards Entrepreneurial Approach: With a less bureaucratic and more entrepreneurial economy, the healthcare industry may witness increased innovation, competition, and efficiency. This could lead to the emergence of new players, such as startups and tech companies, bringing disruptive technologies and business models to the sector. The focus might shift towards finding innovative solutions, optimizing processes, and embracing digital transformation.
    2. Greater Emphasis on Social Service: As young people entering the economy prioritize social service over wealth accumulation, there could be a rise in professionals and entrepreneurs dedicated to improving healthcare outcomes and access for underserved populations. This shift may lead to more investment in public health initiatives, community-based care, preventive measures, and addressing social determinants of health.
    3. Increased Political Contentiousness: If US politics become more contentious and less likely to engage in bipartisan compromise, the healthcare industry may face challenges in implementing comprehensive reforms or making significant policy changes. Gridlock and partisan divisions could hinder efforts to address critical healthcare issues, such as access to affordable care, insurance reform, or addressing rising healthcare costs.
    4. Decreased Trust in National Institutions: A general decline in public trust towards national institutions and established authorities might impact the healthcare industry. Skepticism towards government-led initiatives could make it more difficult to implement widespread reforms or regulatory measures. The public might demand more transparency, accountability, and patient-centric approaches from healthcare providers and institutions.
    Overall, the combination of a more entrepreneurial economy, a focus on social service, increased political contentiousness, and decreased trust in national institutions may lead to a complex and challenging landscape for the US healthcare industry. Adapting to these changes would require innovative solutions, collaboration between diverse stakeholders, and a focus on patient-centered care to meet the evolving needs of the population.

    Describe the probable nature and condition of the US Health care industry 5 to 10 years from now assuming the following:
    1. The US economy has become more bureaucratic and less entrepreneurial.
    2. Young people entering the economy are less interested in social service than they are in getting rich.
    3. US politics have become less contentious and more likely to engage in bipartisan compromise.
    4. The US public in general have become more trusting of national institutions and established authorities.

    ChatGPT
    Based on the assumptions provided, the probable nature and condition of the US healthcare industry 5 to 10 years from now would likely be influenced by a different set of factors. Here’s a speculative scenario based on the given assumptions:
    1. Increased Bureaucracy: If the US economy becomes more bureaucratic and less entrepreneurial, the healthcare industry might experience a higher level of regulation, red tape, and administrative burden. This could potentially stifle innovation and hinder the emergence of disruptive solutions. Large, established healthcare organizations may have an advantage in navigating the complex regulatory landscape, potentially leading to consolidation and decreased competition.
    2. Focus on Wealth Accumulation: If young people entering the economy are less interested in social service and more focused on getting rich, the healthcare industry may see a greater emphasis on profit-driven approaches. This could lead to increased commercialization and privatization of healthcare services, potentially affecting the affordability and accessibility of care. Entrepreneurial endeavors in healthcare may prioritize financial gain over social impact.
    3. Bipartisan Compromise in Politics: Assuming US politics become less contentious and more likely to engage in bipartisan compromise, there may be a better environment for policymaking and implementing healthcare reforms. This could foster collaboration between political parties, potentially resulting in more stable and predictable healthcare policies. It might become easier to pass legislation related to healthcare, insurance, and public health initiatives.
    4. Increased Trust in National Institutions: If the US public becomes more trusting of national institutions and established authorities, there could be a greater reliance on government-led initiatives and centralized healthcare systems. Public confidence might lead to increased support for programs such as universal healthcare or expanded public insurance options. National institutions may play a more significant role in shaping healthcare policies and regulation.
    Overall, under these assumptions, the US healthcare industry may experience a more regulated and bureaucratic environment, with a stronger emphasis on profit-driven approaches. Bipartisan compromise in politics could potentially lead to more stable healthcare policies, while increased trust in national institutions might influence a greater reliance on government-led initiatives. It is important to note that this speculative scenario is based on assumptions and future outcomes are subject to various factors and uncertainties.

