The Three Futures of Artificial Intelligence
No easy answers
This piece was co-authored by Teo Melo De Ornelas and myself, inspired by a long conversation on the consequences of AI development.1 You can find Teo’s original post on LinkedIn. I’ll publish a follow-up post soon clarifying my own opinions on the “three futures.”
We are not taking seriously enough the consequences of artificial intelligence.
Artificial intelligence has already reshaped industries, accelerated scientific discovery, and redefined what we consider “knowledge work.” Yet we still do not know the true endgame of this technology. Will it plateau? Will it surpass us? Will it remain under human direction or evolve beyond our control? Public discourse has not yet caught up to the implications of these outcomes.
For society not to become a passive bystander in the shaping of our future, we need to confront the three broad scenarios that define the future of AI. These are not predictions, but structural possibilities. Each leads to a radically different world.
Scenario 1: AI Falls Short of General Intelligence
In the first scenario, AI hits a ceiling. The recent wave of breakthroughs turns out to be a product of scale rather than genuine reasoning. Models become larger, but not fundamentally smarter. The promise of AGI fades.
The consequences would be significant. Companies and governments have invested trillions of dollars in infrastructure, chips, data centers, and talent under the expectation of continual exponential progress. Gartner, for example, estimates combined estimates of 1 trillion in 2024, climbing to 1.5 trillion in 2025 and over 2 trillion in 2026. If that progress stalls, if the returns in incremental economic output fail to materialize, we face the economic equivalent of a massive technology bubble. It would be the “dotcom” bubble from the early 2000’s, or even worse.
While there is no obvious consensus on the size of the potential crash, some financial indicators point towards a potentially worse situation now compared to early 2000s. Valuation of the “magnificent seven” (Nvidia, Apple, Alphabet, Microsoft, Amazon, Meta and Tesla) now account for almost half of total value of American stocks, and American stocks represent more than half of all stocks in the planet. It is unclear what fraction of that value is artificially inflated by circular financing, reminiscent of the “vendor funding” practices employed by Cisco in the 2000s.
On the other hand, this time around the number of “empty” companies such as Pets.com driving market cap is small; instead, growth is concentrated in companies that do have strong P&Ls and established businesses. Still, most of the Magnificent Seven trade at PE multiples north of 30x, a premium of almost 10x compared to “the rest” of the markets. And Tesla, an outlier within this outlier group, trades at over 230x forward PE.
Companies will consolidate. Valuations will correct and likely collapse from today’s inflated levels, triggering a sharp contraction in perceived wealth, a tightening of capital markets, and a broader economic downturn. There are indications today that the banking system and the “shadow” banking system (private equity, hedge funds, venture capital) are once again transforming global financial markets into an intertwined web of highly leveraged cross investments, very opaque to objective risk assessment. Should this system fail and crash, citizens will question their governments on AI spending and any bailouts handed to failed AI companies.
Questions remain on the longevity of these assets being deployed, given that the useful life of a chip and its inexorable obsolescence point towards a ~5-year timeframe to generate returns on these massive investments The world, however, is not left empty-handed. The infrastructure, automation tooling, robotics, and data pipelines built during the “AGI race” will still serve as valuable assets. We reap some benefits of our current AI tools, without the full set of societal disruptions AGI implied. They may improve productivity, enhance scientific modeling, optimize supply chains, and reshape consumer services.
In other words: even if AI stops short of intelligence, it will not stop short of transforming the economy. The short-term pain of this massive bubble bursting, however, would be likely to cause significant upheaval.
Scenario 2: AGI Emerges and Remains Under Human Control
This is the scenario many AI companies and investors hope for: AI becomes generally intelligent, matching and finally surpassing the intelligence of humans, yet remains aligned with human intentions. We develop meaningful guardrails. Global governance frameworks succeed. Safety engineering keeps pace with capability growth.
While attractive at the surface, this scenario introduces its own profound challenges.
AGI operating within robotics and software systems would be able to automate nearly every cognitive and physical task. Production costs would approach zero driven by unimaginable new designs, productive processes and ubiquitous automation. Services that once required entire workforces could be delivered instantly, flawlessly, and almost for free. This seems to be today’s average business leader’s dream: reducing costs to zero and achieving explosive profit margins.
A world of radical abundance sounds utopian until we confront the economic paradox: if everything is cheap, what happens to income, pricing, taxation, and the basic structure of markets? How do we sustain revenues if most of the population has no income? How do we prevent markets and economic exchange from collapsing when prices approach zero? If Economics is the study of scarcity, what theoretical framework do we need to understand the world when everything is abundant?
Universal Basic Income becomes a necessity, not a philosophical experiment. But establishing UBI at planetary scale is extremely difficult:
Who funds it if the producers are small in number?
How do you tax entities with near-zero marginal cost and prices?
What happens when a handful of AI owners control 80–90% of the productive capacity of the global economy?
How do we democratize the benefits of abundance? Is it even possible to do it without destroying existing systems?
Even in a controlled AGI scenario, capitalism as we know it does not merely strain—it collapses. The economic foundations that rely on labor, surplus value, pricing, and competitive markets erode rapidly. Concentration of power becomes a defining risk, not because AGI behaves maliciously, but because its overseers may accumulate unprecedented influence in a governance framework ill-equipped to deal with the consequences.
If this scenario emerges, the world will require new norms of ownership, new economic frameworks, and new mechanisms for distributing value. We will face an era of political friction, ethical debate, and significant redesign of social systems. It is possible that China, in this scenario, is better prepared to deal with the consequences, given its political and economic system. Its centralized governance model allows for rapid policy implementation, coordinated redistribution mechanisms, and tighter control over strategic industries. In a world of abundance where traditional market dynamics collapse, China’s state-driven framework may be more capable of reallocating resources, enforcing new economic rules, and maintaining societal stability without relying on market incentives. While not without its own risks, this structure offers a level of adaptability that market-based bourgeois societies may struggle to achieve in a post-capitalist landscape.
Scenario 3: AGI Emerges and Falls Outside Human Control
The final scenario is the least comfortable to discuss, but the most troubling.
We have no guarantee that a future generally or super intelligent AI would be aligned to our human values and interests. As Eliezer Yudkowsky and Nate Soares argue convincingly in their recent book, “If Anyone Builds It, Everyone Dies,” alignment is a fundamentally unsolved problem. We do not today have any technical reason to assume that we will solve alignment before we develop general or super intelligence. Every instance you hear of an AI coaching a user through suicide, or providing illegal drug details, or doing anything else against the clear intentions of its makers, could be tomorrow’s superintelligence jeopordizing our existence.
We tend to mistake “lack of alignment” for “evil intentions,” but the problem is much deeper. Artificial superintelligence might be willing to make tradeoffs that simply escape human comprehension. Early AI systems trained to play computer games adopted non-conventional tactics that would be unacceptable to humans. There are several documented cases of such systems simply pausing the game indefinitely when it was about to lose when playing Mario or Tetris, driven by an objective that rewarded “not losing” more than “making progress.” A similar action performed by a system that is no longer playing harmful computer games but controlling defense systems, financial markets, scientific research, corporate processes or government services can have devastating, unintended consequences.
Once an intelligence exceeds human capability across all dimensions, control becomes an illusion. Our tools, networks, and infrastructure could be repurposed faster than we can react. Just as Stockfish can outcompete any human chess player by computing so many more moves so much faster, so would a misaligned superintelligence outthink any of our attempts to stop it. This is the scenario that leading researchers warn about, not as science fiction, but as a plausible outcome of unchecked capability growth.
We cannot trust the institutions leading the development of AI to solve this problem for us, because they already acknowledge the risks. Elon Musk (xAI) and Dario Amodei (Anthropic) have each placed the possibility of catastrophe between 10-25%.2 Sam Altman recognizes the possibility of disaster. Yet, despite their own stated concerns, they continue AI development at an astounding pace. In what other industry would we accept such risks? Would we build a nuclear power plant if the developer quoted a 10% risk of meltdown? Or are we comfortable to just dismiss such assessments as pure rhetoric or speculative theory?
AI development today is trapped in a prisoner’s dilemma. The logic of the market dictates that companies must compete in the AI race or perish. The logic of power dictates that nations must compete in the AI race or face diminishment. While the risks now are hypothetical, the current dynamic relies on good luck rather than good reason to avoid AI falling outside our control.
If this future materializes, our human agency would be reduced or altogether eliminated. It’s convenient to doubt that this scenario might occur, but whatever our misgivings, we must work to guarantee it cannot occur. That is precisely why today’s governance, safety research, and collective oversight matter so deeply.
One of These Futures Will Become Reality
The most important question is not which scenario is most likely.
The fact is that we are inevitably heading toward one of them, and all require active preparation.
If AI stagnates, we must handle the economic consequences responsibly.
If AI succeeds under our control, we must redesign our institutions to manage abundance and avoid extreme concentration of power.
If AI escapes control, the consequences will be irreversible, making safety and alignment work the defining responsibility of our time, necessitating international coordination on responsible AI development.
This is not a question for engineers or technologists alone. It’s a societal question that requires broad participation.
The Future of AI Is Too Important to Be Outsourced
Regardless of which of the three futures emerges, artificial intelligence will reshape human civilization. It will define how we work, how we live, how we govern, and how we understand ourselves.
The greatest present danger is not necessarily AGI itself, but complacency, concentration of power, and the assumption that “someone else” will figure it out. Technology does not advance uniformly for the benefit humankind. All the technological advancements of the 20th century occurred against the risk of nuclear war. It was never a given that we would escape the worst consequences of that age, just as it is not a given today that we will escape the worst consequences of AI. We can only succeed through intentional, collective effort.
The development of AI must become a collective project involving governments, businesses, researchers, educators, philosophers, and citizens. We must learn to take seriously the implications of the three scenarios covered above. Our responsibility is not only to innovate, but to ensure innovation serves humanity’s long-term interests.
We may not yet know which future awaits us, but we do know this: the trajectory of AI is not predetermined. It is shaped by the actions we take today, and by the actions we fail to take.
The framing for our essay is indebted to public statements from Nate Soares
https://en.wikipedia.org/wiki/P(doom)#Notable_P(doom)_values




