AI in scenario planning was an inevitable development. Does it help or hinder the effectiveness of scenario use in strategy development and execution?
Many years ago, a government client asked us when we expected machines to be able to write credible, useful future scenarios. We don’t recall our response. It was probably something glibly humorous, like “never.” But of course we were already writing scenarios about big data, quantum computing, and machine learning. Deep down we reckoned it was only a matter of time before something like what we now commonly refer to as AI would invade our scenario planning space.
Fast forward a couple of decades. Machines – Generative AI models – are indeed writing scenarios. So far, FSG scenario planners are unconvinced that they are as good (creative, rigorous, nuanced) as the scenarios we have developed over the years. But in time, with proper training, we believe GenAI models will be productive team members of FSG scenario planning projects.
Does AI obviate the need for scenario planning?
Organizations use scenario planning to inform strategy development. We know from our own experience that scenarios have helped firms, government departments and non-profit organizations make superior decisions in the face of future change. But there’s a cost to scenario-based decision-making – staff time, executive attention, workshops, and (not least) consultants. So, many organizations are reluctant to commit to scenario planning. And now with AI tools available it’s not unreasonable to wonder if powerful AI models could eliminate the need to do scenario planning at all.
Perhaps predictably, our view is that scenario planning is still relevant and arguably more vital than ever. Why? Because the future, more than ever, continues to be uncertain, complex, and shaped by forces we do not control. Traditional forecasting breaks down precisely when uncertainty matters most – and this applies as much to strategies developed by AI as it does to those developed by mere mortals.
Over the last couple of years, FSG has been exploring Generative AI applications to a variety of scenario-planning tasks. This is a progress report on what we’ve been learning. We hope our readers who value strategic foresight and scenario planning find some value in it.
The AI promise in scenario development
Scenarios are descriptions of alternative future operating environments, used to anticipate emerging disruptions and market changes and, ultimately, to inform strategies and operating plans. Scenarios are stimulus material, designed to challenge planners’ assumptions about what the future holds. Imagination – what we call “rigorous imagination” – is the critical ingredient of scenario development.
So, can AI help make scenario development more rigorous and imaginative?
Based on our initial investigations, we think so, at least in part. A basic ChatGPT model can ideate an indefinite number of relevant factors related to a strategic issue of interest. Using a simple test (mimicking a past FSG scenario planning project) we asked an AI to generate a list of strategic factors affecting the future success of a US pet food company. The output, which in pre-AI times would have taken a researcher several days to come up with and organize, was impressive. With no training, the model identified and organized a lengthy list of relevant consumer-based scenario drivers – e.g., consumer economics, brand trust, distribution and channel structure, regulatory environment, supply chain resilience, demographics, data, technology, etc.
It took some prompting for the model to elaborate on more subtle pet-related factors – e.g., the future of work, which determines the prevalence of remote work and which in turn is an important enabler of, say, dog ownership. But the model was unquestionably responsive.
Can AI create useful scenario themes from drivers?
Using another set of drivers in a sector we were familiar with, we asked the AI to draft some scenario themes. Without any other guidance, it suggested four scenarios based on a 2 x 2 matrix. Predictably, perhaps, the scenario themes were bland and uninteresting. (FSG scenarios, in contrast, typically contain four dimensions, which tend to stimulate more realistically complex scenario stories.) With prompting, the AI did come up with richer, four-dimension scenarios. But again, the themes came across as mechanical, somewhat lifeless and not particularly inviting for purposes of strategic investigation. Here again, human intervention was required.
Can an AI mimic the creative aspects of scenarios?
We were pleasantly surprised by what the AI created when it volunteered to develop a scenario-based, first-person narrative about medical practice. We had initially “fed” the AI a three-page description of a future U.S. health care model (derived from an existing FSG scenario) and asked it to identify issues, which it did immediately and extensively in an ordered, appropriately clustered outline. Then, at the end of the exchange, the AI asked if we would like a first-person narrative from a physician’s perspective. Here are two passages that resulted:
“The hardest part of practicing medicine now is the line I walk between what a patient needs and what the system will support. We live inside price caps, government-mandated outcomes reporting, and the ever-watching eyes of AI-driven utilization tools. If the predicted “EYAL” gain — expected years of additional life — for a treatment is low, I must explain that it isn’t covered under the universal plan. Sometimes I can fight for exceptions; most of the time, I can’t.”
“Every time I say “the plan won’t cover that,” a tiny part of my training recoils. I remember my ethics professor telling us that we are always our patient’s last advocate. But now I am the interpreter of scarcity. I try not to let patients hear the guilt in my voice.”
These creative vignettes are impressive, especially given the limited training required and the fact that the AI volunteered to create them without any prompting.
Our Takeaway: AI is a useful scenario partner – with supervision
With limited testing, we come away believing that AI can be a highly productive partner in creating scenarios. It can supplement early-phase ideation of scenario drivers and driver inter-relationships and, with skillful training, contribute to scenario narrative creation. As noted, the time savings in background research, driver creation and nomination of scenario themes could be significant. This is especially valuable in strategic planning projects with limited time and resources for early-phase scenario R&D.
An important caveat is that effective AI training and prompting presupposes familiarity with scenario fundamentals in the first place. For instance, scenarios should always be plausible, not merely possible, and it’s generally not useful to select a set of scenarios in which everything is either fantastically good or numbingly bad. Real life’s not like that, and AI models will have to be reminded of that fact.
Remember: Strategic planning is ultimately about human judgment
In the current moment there is an irresistible temptation to push the boundaries of what AI can do in domains traditionally occupied by sentient beings. This includes strategy and planning.
FSG has written extensively about the importance of human factors in strategic planning in general and scenario planning in particular. We detect a worrying temptation to cede strategic authority to AI models, which have no credible claim on special knowledge of the future – any more than do quantitative forecasting models. In both cases, predictions or judgments are made on historical data. And as we know, there are no data on the future.
So we continue to believe that because of their unique capabilities human beings have an important, enduring role in developing scenarios, even as AIs become increasingly powerful and useful research assistants.
Beyond scenario development tasks, the value of AI in scenario planning rapidly recedes, for the job of exploring the implications of alternative scenario futures is about judgment – human judgment. No CEO in his or her right mind would cede that role to an AI, no matter how clever the model. Moreover, the power of scenario planning lies in the experience of the users – workshop participants who get to experience and explore alternative operating environments that challenge existing mental models and inform strategic judgment and priorities. Without that personal experience, there is no sense of ownership over the outcomes of the strategic process, and without ownership, no real chance of draft strategies ever getting acted upon.
A recent MIT Sloan School article identifies five groups of human capabilities that AI does not possess. They are:
- Empathy and emotional intelligence
- Presence, networking and connectedness
- Opinion, judgment and ethics
- Creativity and imagination
- Hope, vision and leadership
This also strikes us as a pretty good list of critical attributes in scenario-based strategic planning. As a rule, we don’t do predictions, but we have a collective hunch that future AI developments will only make these human factors that much more essential in scenario planning.
Fascinating.
If I am permitted a prediction, ten years from now we will joke ruefully about how undercooked AI was in 2026 and yet how eager large organizations were to put it to work before it was ready.
The moral hazard of AI is our tendency to become dependent on it, and so quickly. We want so badly for it to work. After all, it is cheap and fast. As you point out, for most senior executives time is a scarcer resource than money. Cheap and fast can seduce us into taking whatever an AI churns out even when the result is flawed, often seriously flawed and not in ways that are immediately obvious.
You are right that ongoing human engagement with the technology—asking good questions, thinking in a disciplined way about the answers—will be essential to avoiding junk results and the consequences of acting on those junk results.
Points to the two of you for giving the technology an honest shake-down cruise, and for raising the right questions.
Agree – I cannot help but think of Gartner’s hype curve.
Link: https://www.gartner.com/en/articles/hype-cycle-for-artificial-intelligence
Thanks, Eric. The Gartner “hype curve” is new to me, so thanks for the enlightenment. I see we’ve entered “the Trough of Disillusionment” as big users and investors come to grips with AI risks and limitations. And expense! We may be seeing this play out on Wall Street as we speak, with the present wobbling of software and AI tech stocks.
Thanks for your comment, Kevin. I think the challenge in separating AI substance from the hype is the fact that we way over-generalize when we’re talking about AI. There are functions that WE KNOW it does, and does well, today, especially automating a range of routine tasks. Autonomous vehicles are now safer than those driven by humans. But AI has not cured cancer or proven decisive in solving other pressing world problems. AI’s evolution will be lumpy, I suspect, with remarkably transformative impacts in some sectors and wholly disappointing (and even dangerous) effects in others.
Great piece. I also think Kevin’s point about moral hazard is important.
We had a good run.
I myself welcome our new insect overlords.
(Great piece. I fear AI will be used to turbo-charge predictive analytics far more than it will be used to develop true alternative scenarios. If I could be persuaded that we were going to abandon the widespread myth that the universe of plausible futures can be mapped out exhaustively and assigned Bayesian percentage probabilities, I’d feel more optimistic. But I think the addiction to numbers will continue, and AI may make it far worse.)
Thanks, Patrick. I don’t disagree — that AI could turbo-charge predictive analytics. But this may prove to be self-correcting, as the shortcomings of AI-fueled analytics and prediction become evident. There’s still the hope that future strategists (the wise ones, at least) will blend the augmentative value of AI with good, old-fashioned human judgment. But I’ll concede that the temptation to go all Bayesian, with AI assistance, will in the near term be very great.
Another idea dislodged itself when I read the “crafted ambiguity” discussion below. That is the role of cause and effect in scenarios. A huge part of the power of scenarios is in suspension of our instinct to see the world in terms of (almost always oversimplified, and also not entirely conscious) mental models of cause and effect. E.g., “Budget deficits cause inflation.” (They didn’t from 2008 to 2020.) Scenarios simply posit a combination of conditions, often combinations that “don’t make sense.” It is in the logical storytelling that is required to bring us from the present to the end state that the most critical insights are often procured, because those stories show us how our crude mechanistic cause-effect models might be wrong, and entirely new possibilities for how the world might evolve are revealed.
On my first job I was doing spreadsheet analysis on vast expanses of ruled light-green paper using a pencil and a big Friden calculator. When I saw my first Apple II running VisiCalc I knew that the world had shifted. Suddenly, endless variations on the same tedious analysis were possible, and the damn thing would always foot and balance. I also remember typewriters and carbon paper, and hand-drawn graphics. And, most fondly, I remember metal file cabinets and clerical staffs and the ease with which past work could be recovered.
In general, I would say that the impact of virtually free computing power (in constant dollars, a high-end laptop today costs less than a Friden calculator did) has made scenario thinking possible. Back then, staring at the piles of green Ampad sheets and the rows of Fridens, few executives had the courage to ask, “What would happen if we did it this way?”
I suspect that with AI everyone will get used to saying, “What does it look like if these types of things happen?” And, “Run a Monte Carlo against all combinations of driver ranges and see which approach is most robust.” “Then we can watch the videos.”
And all of this will be useful even if AI never masters the Sloan School’s EPOCH:
• Empathy and emotional intelligence
• Presence, networking, and connectedness
• Opinion, judgment, and ethics
• Creativity and imagination
• Hope, vision, and leadership
However, AI might learn to do useful file retrieval.
Thanks, Robert. A lot of corporate strategy history to digest here, including the Friden calculator flashback! Very interesting point about PCs enabling scenario thinking. But you will no doubt recall a time when scenario scoping was done on matrix-filled whiteboards and background research was performed by librarians and interns with minimal digital tools. I do agree that computers have greatly expanded underlying knowledge pools and facilitated collaboration among scenario team members, in addition to many other practical benefits (including file retrieval!). But both you and Patrick Marren raise similar concerns about leaders surrendering their judgment to AI and ever more powerful analytical tools.
I thought this article most enlightening – and a bit refreshing in its balance. I may write more than one response (poor you), but just a couple of thoughts for now.
You mention the need for supervision and I agree, but not many readers may know How Much supervision might be needed. One example – in scenarios we always left some “facts” unclear or, more importantly, we often included ambiguity in the narrative and end state. But it was a kind of “crafted ambiguity” based on what we learned in interviews and how we wanted to challenge some conventional wisdom by “coming in the back door.” Those missing data and ambiguities are critical and are spread throughout the scenario. I don’t think an AI could manage that. But a writing partnership between human and AI might…might.
Second – research interviews are a critical first step in scenario building and we always followed a protocol UNLESS it seemed we might get something far more interesting by exploring something the interviewer just said (or did not say). Interviewing is still an art, I think. I would be uneasy with AI in that crucial role.
Bottom Line: as a partner, this could be fun and useful.
Excellent points, Tom.
Crafted ambiguity is a subtlety that AIs would need considerable training to adopt, and of course interviews are an important first step of great value – in a number of ways – something we failed to mention in the piece.
I agree that client and stakeholder interviews are critical elements in scenario planning – and they do not, at least in 2026, lend themselves to AI substitution. There’s just way too much human interaction – and trust! – involved, both ways. Among other things, a skillfully conducted interview yields a wealth of insights into organizational capability and change readiness. These insights are absolutely invaluable in downstream strategy implementation work.
The Atlantic confirms my bias that “professional prognosticators” and “Superforecasters” remain utterly oblivious to the idea that having a high average prediction score is far less important than simply identifying what the truly important questions will be in one, five, or twenty years. Headline: “AI Gets Better Than Humans at Stuff You Can Never Use in Real Life to Make Decisions!”
https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/
Point taken. Though I imagine that if you’re, say, a hog futures trader a reliable AI prediction model could prove useful in narrow real-life buy/sell decisions. Thing is, once adoption becomes widespread, the competitive advantage of AI prediction models may dissipate.