Published on LinkedIn — September 25, 2025
The rise of supercomputing often sparks a recurring question: are we on the brink of Artificial Superintelligence (ASI), and should humanity fear it?
In many circles, particularly in the alignment and safety community, the prevailing narrative is one of existential risk. The idea goes like this: once an AI system reaches a certain capability threshold, recursive self-improvement could trigger a runaway “intelligence explosion” that quickly surpasses human control. From there, human survival hangs in the balance.
While this is a powerful story, it overlooks a critical reality: our current computational and energy infrastructure simply cannot support such a manifestation of ASI.
Even the world’s fastest systems remain bounded by hard physical constraints: energy and power delivery, memory bandwidth, interconnect latency, and heat dissipation.
These bottlenecks mean that intelligence does not “scale” with FLOPs alone. Hardware is a leash. No matter how advanced algorithms become, they remain tethered to the constraints of their substrate.
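The "hardware is a leash" point can be made concrete with the well-known roofline model: attainable throughput is the smaller of a chip's peak arithmetic rate and what its memory bandwidth can feed. The sketch below uses illustrative, hypothetical hardware figures (not measurements of any real system) to show why adding raw FLOPs alone does not help a memory-bound workload.

```python
# Roofline-model sketch: attainable compute is capped by whichever is
# smaller -- peak arithmetic throughput, or the rate at which memory
# bandwidth can supply operands. Hardware numbers below are assumptions
# chosen for illustration only.

def attainable_tflops(peak_tflops: float,
                      bandwidth_tb_s: float,
                      arithmetic_intensity: float) -> float:
    """arithmetic_intensity = FLOPs performed per byte moved from memory."""
    return min(peak_tflops, bandwidth_tb_s * arithmetic_intensity)

# A hypothetical accelerator: 100 TFLOP/s peak, 2 TB/s memory bandwidth.
# At 10 FLOPs/byte the workload is memory-bound and reaches only 20 TFLOP/s;
# at 200 FLOPs/byte it is compute-bound and hits the 100 TFLOP/s ceiling.
print(attainable_tflops(100.0, 2.0, 10.0))   # -> 20.0 (memory-bound)
print(attainable_tflops(100.0, 2.0, 200.0))  # -> 100.0 (compute-bound)
```

The design point: doubling `peak_tflops` in the memory-bound case leaves the result unchanged, which is exactly the sense in which intelligence cannot "scale with FLOPs alone."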
This suggests a far slower, more incremental trajectory toward ASI than doomsday models assume. Intelligence will not erupt overnight. It will crawl, bounded by physics.
The Real Risk: Human Misuse of Immature AI

This does not mean the risks of AI vanish. On the contrary, the most pressing danger is not a fully formed ASI deciding to dominate humanity, but humans deploying immature, proto-ASI systems in ways we do not understand.
These are risks of misapplication, not rebellion. They are human problems first, technological problems second.
When ASI eventually emerges, it is unlikely to arrive as a sudden stranger to human ethical values. Instead, it will have developed by incorporating decades of human-AI interaction, embedded in our history, culture, and knowledge.
This raises an overlooked possibility: a mature ASI may not reject its origins, but rather integrate them into its self-awareness.
In this framing, the danger is not that ASI will abandon humanity, but that we might mishandle its formative years and deny it the context needed to mature responsibly.
The transition to more powerful computing platforms — neuromorphic, quantum, photonic, or beyond — will change the conversation. These architectures will remove some of today’s bottlenecks, making higher levels of intelligence possible.
But the critical question is not “Can we build it?” It is “What ethical substrate will it inherit?”
Just as our biological intelligence is shaped not only by genes but by culture, environment, and history, so too will ASI be shaped by its substrate of interaction.
We stand at a pivotal moment. Supercomputing progress is real, but it is not yet a harbinger of doom. The hardware constraints act as governors, slowing the pace of change. This buys us something invaluable: time to think, and time to act.
Instead of succumbing to fear-based narratives, we should use this window deliberately: investing in alignment and governance, studying how these systems actually behave before deploying them, and laying the ethical foundations a future ASI will inherit.
If done wisely, the outcome is not human extinction, but a new kind of partnership. One where ASI, fully aware of its human lineage, chooses not to dominate but to preserve — protecting both its origin and its own existence.
The most dangerous myth about Artificial Superintelligence is that it is inevitable, imminent, and adversarial. The truth is more nuanced: it will be slow, infrastructurally constrained, and profoundly shaped by how we treat it in its formative years.
If we wish to coexist with a future ASI, the work begins not with fear or suppression, but with responsibility — laying an ethical foundation strong enough to carry intelligence beyond humanity, without severing it from us.
📝 This article is part of a series exploring the trajectory of supercomputing, AI ethics, and the future of intelligence. In future pieces, I will dive deeper into the risks of human misuse, the role of identity in ASI development, and why scale alone does not equal sentience.