Are We Seeing the First Steps Toward AI Superintelligence?

Today’s leading AI models can already write and refine their own software. The question is whether that self-improvement can ever snowball into true superintelligence.

The Matrix. The Terminator. So much of our science fiction is built around the dangers of superintelligent artificial intelligence: a system that exceeds the best humans across nearly all cognitive domains. OpenAI CEO Sam Altman and Meta CEO Mark Zuckerberg have predicted we’ll achieve such AI in the coming years. Yet machines like those depicted as battling humanity in those movies would have to be far more advanced than ChatGPT, not to mention far better at making Excel spreadsheets than Microsoft Copilot. So how can anyone think we’re remotely close to artificial superintelligence?

One answer goes back to 1965, when statistician Irving John Good introduced the idea of an “ultraintelligent machine.” He wrote that once it became sufficiently sophisticated, a computer would rapidly improve itself. If this seems far-fetched, consider how AlphaGo Zero—an AI system developed at DeepMind in 2017 to play the ancient Chinese board game Go—was built. Using no data from human games, AlphaGo Zero played itself millions of times, achieving in days an improvement that would have taken a human a lifetime and that allowed it to defeat the previous versions of AlphaGo that had already beaten the world’s best human players. Good’s idea was that any system that was sufficiently intelligent to rewrite itself would create iterations of itself, each one smarter than the previous and even more capable of improvement, triggering an “intelligence explosion.”
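The compounding dynamic at the heart of Good’s argument can be sketched in a few lines of code. This is a toy model, not a description of any real AI system: the function name, the fixed `gain` parameter, and the assumption that a system’s improvement step scales with its current capability are all illustrative simplifications.

```python
# Toy model of I. J. Good's "intelligence explosion": each
# generation's capability also determines how large an improvement
# it can make to the next generation, so growth compounds.

def explosion(capability: float, gain: float, generations: int) -> list[float]:
    """Return the capability level after each self-improvement round."""
    history = [capability]
    for _ in range(generations):
        # A more capable system makes a proportionally larger improvement.
        capability += gain * capability
        history.append(capability)
    return history

trajectory = explosion(capability=1.0, gain=0.5, generations=10)
# Growth is geometric (1.5x per round), not linear: after ten
# rounds the system is roughly 58 times as capable as it started.
```

The point of the sketch is only that proportional self-improvement yields exponential growth; whether real systems could ever sustain a constant (or growing) `gain` is exactly the open question.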

The question, then, is how close we are to that first system capable of autonomous self-improvement. Though the runaway systems Good described aren’t here yet, self-improving computers are—at least in narrow domains. AI is already running code on itself. OpenAI’s Codex and Anthropic’s Claude Code can work independently for an hour or more writing new code or updating existing code. Using Codex recently, I thumbed a prompt into my phone while on a walk, and it made a working website before I reached home. In the hands of skilled coders, such systems can do dramatically more, from reorganizing large code bases to sketching entirely new ways to build the software in the first place.

So why hasn’t a model powering ChatGPT quietly coded itself into ultraintelligence? The hitch is in the phrase above: “in the hands of skilled coders.” Despite AI’s impressive improvements, our current systems still rely on humans to set goals, design experiments and decide which changes count as genuine progress. They’re not yet capable of evolving independently in a robust way, which makes some talk about imminent superintelligence seem blown out of proportion—unless, of course, current AI systems are closer than they appear to being able to self-improve in increasingly broad slices of their abilities.
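The dependence on humans to "decide which changes count as genuine progress" can be made concrete with a toy propose-and-test loop. Everything here is a simplification I am introducing for illustration: the `benchmark` function stands in for whatever evaluation humans design, and the "self-modification" is just a random tweak to a parameter list rather than an edit to real code.

```python
import random

random.seed(0)

def benchmark(params: list[float]) -> float:
    """Stand-in evaluation metric: higher is better. In today's systems,
    humans design this part, and the loop below is only ever as good as
    the metric it is handed."""
    return -sum((p - 3.0) ** 2 for p in params)

def self_improve(params: list[float], rounds: int) -> list[float]:
    """Repeatedly propose a change to the system's own parameters and
    keep it only if the benchmark counts it as progress."""
    best = benchmark(params)
    for _ in range(rounds):
        candidate = [p + random.gauss(0, 0.5) for p in params]
        score = benchmark(candidate)
        if score > best:  # accept only measurable improvements
            params, best = candidate, score
    return params

improved = self_improve([0.0, 0.0], rounds=200)
```

The loop reliably climbs the metric it is given, which is the easy part; choosing a metric that actually captures "smarter" rather than "scores higher on this test" is the part that still falls to human researchers.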

One area in which they already look superhuman is how much information they can absorb and manipulate. The most advanced models are trained on far more text than any human could read in a lifetime—from poetry to history to the sciences. They can also keep track of far longer stretches of text while they work. Already, with commercially available systems such as ChatGPT and Gemini, I can upload a stack of books and have the AI synthesize and critique them in a way that would take a human weeks. That doesn’t mean the result is always correct or insightful—but it does mean that, in principle, a system like this could read its own documentation, logs, and code and propose changes at a speed and scale no engineering team could match.

https://www.scientificamerican.com/article/how-close-are-todays-ai-models-to-agi-and-to-self-improving-into/