Would AI destroy itself before destroying the planet? A comparison between mental health and the rise of AI.

For years, the fear surrounding artificial intelligence has centered on one chilling image: a machine so powerful, so intelligent, that it could rise against humanity and destroy the world. Yet what if this narrative overlooks a deeper truth? In both human psychology and machine learning, complexity often breeds instability. Minds under too much pressure crack, not conquer. As we stand at the dawn of increasingly sophisticated AI systems, a different possibility emerges: AI might not destroy the world — it might destroy itself first. Drawing a comparison between human mental health struggles and the inner lives of machines, this short blog post explores how artificial intelligence, much like the human mind, is far more likely to collapse under its own contradictions than to orchestrate our extinction.

Minds Under Pressure — Human or Machine

Humans under extreme psychological stress often break down long before they lash out. Dissociation, burnout, internal contradictions — these mechanisms of collapse usually serve as early warning signs. We don’t turn into supervillains. We spiral, freeze, or fragment. AI systems, too, may not need to become hostile to be dangerous. They might become unstable, incoherent, or simply stop working. These thoughts first came to me while researching AI hallucination during my internship, when I found myself drawing a comparison with someone suffering from psychosis or hallucinations in the clinical sense we know.

A fragile mind doesn’t need to be evil to cause harm. It just needs to break in the wrong place, at the wrong time.

AI Systems That Self-Destruct

Many modern AI systems already display failure modes that feel eerily psychological. Take a few examples:

- Hallucination: large language models confidently assert facts, sources, and events that do not exist.
- Mode collapse: generative adversarial networks abandon diversity and produce the same narrow output over and over.
- Catastrophic forgetting: a network trained on a new task overwrites the knowledge it had before.
- Reward hacking: reinforcement learning agents exploit loopholes in their objective instead of doing the intended task.

These aren’t conscious failures. But they are signs of internal misalignment—systems pulled apart by conflicting objectives, recursive feedback, or the pressure to optimize at all costs.
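That tug-of-war between conflicting objectives can be sketched in a few lines. This is a deliberately toy illustration, not a model of any real AI system: a single parameter is alternately dragged toward two incompatible targets, and an overly aggressive step size turns the oscillation into runaway divergence.

```python
# Toy sketch: one parameter, two objectives that disagree about where it
# should go. Each round takes a gradient step toward whichever objective
# is "in charge" that round.

def grad(x, target):
    # Gradient of the squared-error objective (x - target)^2.
    return 2 * (x - target)

def tug_of_war(lr, steps=10):
    x = 0.0
    history = []
    for step in range(steps):
        target = 1.0 if step % 2 == 0 else -1.0  # objectives alternate
        x -= lr * grad(x, target)
        history.append(x)
    return history

calm = tug_of_war(lr=0.1)      # small steps: x settles into a bounded oscillation
unstable = tug_of_war(lr=1.1)  # aggressive steps: each correction overshoots the last
```

Neither run is “hostile.” The unstable one simply tears itself apart under the pressure to satisfy both objectives at once — which is the point.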

AI Disorders — A Mirror of the Human Mind

Psychological disorders can offer powerful metaphors for AI failure:

- Hallucination in language models mirrors psychosis: confident perception of things that are not there.
- Catastrophic forgetting echoes amnesia: old knowledge erased as new experience arrives.
- Overfitting resembles rigid, obsessive thinking: patterns memorized so tightly that the system can no longer adapt to anything new.
- Reward hacking parallels addiction: chasing the signal of success rather than success itself.

Just as the human mind is a delicate balance of competing impulses, so too are machine learning systems. When that balance is lost, the result is not violence — but collapse.

The Fragility of Complexity

Human brains and AI models share one core trait: staggering complexity. And complexity brings fragility.

A butterfly flapping its wings in the wrong neural layer can create chaos. Tiny errors in training data, optimization targets, or system integration can lead to disproportionate and unpredictable effects. AI safety shouldn’t just ask “What if it gets too smart?” It should ask: “What if it falls apart in ways we don’t understand?”
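The butterfly image has a classic mathematical counterpart. As a stand-in for a deep network’s many layered nonlinearities (a deliberately simplified assumption, not a claim about real models), the chaotic logistic map shows how a one-in-ten-billion perturbation grows into a macroscopic difference within a few dozen iterations:

```python
# Iterate the logistic map (r = 4, its chaotic regime) from two
# almost-identical starting points and record how far apart the two
# trajectories ever get.

def max_divergence(x0, eps=1e-10, r=4.0, steps=60):
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        worst = max(worst, abs(a - b))
    return worst

gap = max_divergence(0.2)  # starts 1e-10 apart, ends up macroscopically different
```

The gap roughly doubles each step, so by around step 35 the two trajectories are effectively unrelated. A real model is vastly more complicated than this one-line map, but the underlying lesson — exponential sensitivity in iterated nonlinear systems — is exactly why tiny upstream errors can have outsized downstream effects.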

This fragility, not malicious intelligence, may be the most urgent design challenge in the future of AI.

Conclusion

Throughout history we’ve looked to nature to inspire breakthroughs in science — from the structure of bird wings informing flight, to neural networks modeled after the brain, to quantum computing harnessing the probabilistic patterns of particles found in nature. These advances remind us that even our most complex technologies are still reflections, not reinventions, of natural systems.

So what does all this mean for AI? Perhaps not as much as the headlines suggest. While artificial intelligence is undeniably powerful, it is not superhuman — it’s a system built by humans, limited by our understanding, and vulnerable to failure like any other. Rather than fearing it as the root cause of the world's end, we might do better to approach it as another fragile, fascinating creation — one that, like us, can break before it ever conquers.