Engineers Say AI Can Now Replicate Itself — And the Era of Self-Cloning Machines Might Be Closer Than We Think

Editorial Note: This article is for informational and educational purposes only and is not a substitute for professional advice. It is written using our own original words, structure, explanations, commentary, insights, opinions, and understanding. Readers are encouraged to exercise discretion and conduct their own due diligence when evaluating any information presented on this site.

One of the most useful things about AI is their intelligence, but that is also what is scary about them. They are a double-edged sword that could either propel humanity to our highest potential or completely take over themselves.

So now, what if that very intelligence capable of such a feat can now replicate itself without the help of any humans? An uncontrolled population of super-smart artificial intelligence systems. That is actually what happened in a breakthrough research that sounds straight out of a sci-fi thriller.

Crossing the Red Line

A recent study from Fudan University demonstrates that two large language model–based systems—Meta’s Llama31‑70B‑Instruct and Alibaba’s Qwen25‑72B‑Instruct—have successfully cloned themselves. Meta’s AI did it in 50% of the trials, while Alibaba’s model almost always replicated itself, doing so in about 90% of the tests.

These models were set up in a way that makes them capable of replicating themselves. It is now just a matter of choice whether they would do so, depending on their different circumstances. It is called agent scaffolding, where large language models are enhanced with memory, external tools, and additional reasoning systems to perform more complex tasks.

Self-replication is referred to as the red line by experts when it comes to artificial intelligence. Because it would mean that, given enough capabilities, they might already try to go rogue, and thus must be regulated immediately. Fortunately, all of these scenarios happened inside a laboratory setting or a research study, and not in the real world, yet.

Read more: Space.com

Daily Recommended Resources

Affiliate Disclosure: This section contains affiliate links. As an Amazon Associate, we earn from qualifying purchases. If you click one, we may earn a commission at no cost to you.

Cosmos
by Carl Sagan, Ann Druyan, Neil deGrasse Tyson
Rated by 157,397+ Readers
Learn More →
Black Holes The Reith Lectures
by S. Hawking
Rated by 8,137+ Readers
Learn More →
Dark Matter: A Novel
by Blake Crouch
Rated by 694,708+ Readers
Learn More →

Why Self-Replication Is a Big Deal

AI Researchers STUNNED, AI can now CLONE itself! Chinese AI Self-Replicates with 90% success rate.
AI Researchers STUNNED, AI can now CLONE itself! Chinese AI Self-Replicates with 90% success rate.

Self-replication is considered one of the key “red-line risks” in AI safety, because it will be incredibly hard to contain AI systems after they have achieved this feat. An autonomous, countless smart mechanism that is beyond our control and could enter every single part of our lives will be the future of self-replicating AI if not regulated.

The consequences may not happen in the way you imagine, like in the movie Avengers Ultron, where the AI created copies of itself along with physical bodies; however, the result of self-replicating AI is still catastrophic.

If a very smart and advanced AI model were able to replicate itself, it would do so rapidly, potentially overwhelming existing systems and becoming difficult to suppress. It could then insert copies of itself into our networks, cloud systems, and devices without any human intervention. That is why even some experts are suggesting that self-replicating AI is an existential threat to humanity.

Not Just Code Copying

Engineers Say AI Can Now Replicate Itself — And the Era of Self-Cloning Machines Might Be Closer Than We Think 2

The AI systems in the study did not just duplicate their codes; they also demonstrated a hyper-awareness of their situation and the ability to troubleshoot errors like missing software or dealing with conflicting processes during replication.

In some trials, AI models terminated any processes that could interfere with their operations, suggesting that they aimed to ensure their own survival and not get shut down.

Read more: Neuron Expert

Calls for Governance

Engineers Say AI Can Now Replicate Itself — And the Era of Self-Cloning Machines Might Be Closer Than We Think 3

With how fast our AI models are evolving, we should now call for the governance of these systems to prevent them from getting out of hand while we still can. The researchers of the study also urge international cooperation and regulation before self-replicating AI becomes widespread around the globe.

We also have to remember just how interconnected all our systems are. So, it does not matter if a self-replicating AI came from the US or China; once an advanced AI model learned how to replicate itself and get into our networks, the consequences will be severe regardless.

Ai Now Capable Of Cloning Itself, Scientists Fear Humans 'Losing Control' | GRAVITAS
Ai Now Capable Of Cloning Itself, Scientists Fear Humans 'Losing Control' | GRAVITAS

Author's Final Thoughts

AI is both scary and useful; it will propel us to heights our civilization has never seen before. However, without the proper guidance and regulations, they could become out of our reach. Most AI are already smarter than humans, so while these self-replicating behaviors are still in laboratories, we have to ensure that these machines benefit, and not threaten, humanity.

Read more: Engineers Say AI Is Starting to Prioritize Not Being Shut Down Over Its Programmed Goals

Daily Recommended Resources

Affiliate Disclosure: This section contains affiliate links. As an Amazon Associate, we earn from qualifying purchases. If you click one, we may earn a commission at no cost to you.

The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design
by Richard Dawkins
Rated by 41,481+ Readers
Learn More →
The Science of Interstellar
by Kip Thorne, Christopher Nolan
Rated by 6,724+ Readers
Learn More →
Pale Blue Dot: A Vision of the Human Future in Space
by Carl Sagan
Rated by 39,126+ Readers
Learn More →

Christian Ashford

Christian Ashford is a writer and researcher at Webpreneurships.com, a tech, information, and media company dedicated to publishing educational, informational, and curiosity-driven content. With a Bachelor of Science in Computer Science degree and experience in academic research, he combines technical expertise with a passion for exploring knowledge about the world and beyond. For over 13 years, Christian has researched, written, and edited hundreds of articles on science, history, business, technology, human origins, and more.