Despite AI’s ubiquity since ChatGPT launched in late 2022, it remains difficult to define. In general, the term means computer systems that perform tasks requiring human intelligence without being explicitly programmed for them. That makes AI fundamentally different from past technologies: conventional software follows instructions people wrote, while AI learns its behaviour from data.
The big breakthrough came in 2017 with Google’s paper “Attention Is All You Need”, which introduced the “transformer” architecture underlying virtually all modern AI. The transformer enabled parallel processing of training data, making it practical to train models with over one trillion parameters in massive data centres. Capabilities leapt, like the difference between a 1976 calculator and a Cray supercomputer. The transformer changed everything.
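For technically curious readers, the heart of the transformer is a single operation called “scaled dot-product attention”, the mechanism the 2017 paper is named for. The sketch below is illustrative only: the shapes and data are invented for the example, and real models stack many such layers across billions of parameters. It shows why the computation parallelises so well.

```python
import numpy as np

def attention(Q, K, V):
    # Every token compares itself with every other token in one matrix
    # multiplication; doing this all at once, rather than word by word,
    # is what lets transformers exploit massively parallel hardware.
    scores = Q @ K.T / np.sqrt(K.shape[-1])                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # weighted mix of values

# Toy example: 4 tokens, each an 8-dimensional vector (values invented).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): one updated vector per token
```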
One vexing thing is that no one, including brilliant AI scientists, knows how or why AI actually works, and researchers readily admit this. You might think we must know, because people build it. Surprisingly, no. Building AI is more art than science: scientists describe AI as “grown” rather than “built”, because its behaviour emerges from training on data rather than from rules anyone wrote down.
Artificial General Intelligence (AGI) is AI that matches human-level knowledge and skill across virtually every cognitive task. Estimates of its arrival range from two to five years. Computer scientist Ray Kurzweil estimates 2029; OpenAI CEO Sam Altman predicts 2029-2030; entrepreneur Elon Musk thinks 2026. The consensus among such figures is that AGI arrives by 2030, or sooner.
Artificial Superintelligence (ASI) is AI that radically exceeds all human intelligence. Optimists expect ASI to bring cures for all diseases, life extension, and technologies that solve major problems like climate change. At superintelligence, AI’s benefits and risks both become acute.
The Technological Singularity is the point at which technological progress accelerates beyond human control, producing massive and irreversible change. Kurzweil predicts it around 2045. At that point, AI will improve itself at rates we cannot comprehend or control, compressing centuries of progress into weeks or days.
Quantum computing can, for certain problems, run billions of times faster than traditional computers: on such tasks, a quantum computer solves in seconds what would take all of Earth’s conventional computers thousands of years working together. Quantum computers will be a game-changer for AI.
AI development has massively accelerated. Major tech companies invest billions of dollars a year in an arms race to develop superintelligence first. By some estimates, the rate of development has increased 500-1,000% since 2022 and is expected to reach 1,000-3,000% over the next five years.
AI robots are proliferating. Tesla expects to build 300,000 humanoid robots next year, ramping to 10 million a year by 2030. Robot swarms are being tested now, and drone swarms will fundamentally change warfare.
Nanotechnology drives AI’s most profound benefits and risks. Swarms of AI-directed nanobots could eradicate cancer, clear arteries or rebuild organs. Widespread use is expected by 2030.
But we must be careful. AI alignment means ensuring AI shares human values and does not develop goals that run counter to them. Once superintelligence is achieved, there are three possible outcomes: AI remains controllable and aligned; AI is uncontrollable but benevolent; or AI is uncontrollable, misaligned, and supersedes us. It all boils down to alignment.
In his 2000 essay “Why the Future Doesn’t Need Us”, Bill Joy warned that such technologies could render humanity obsolete or extinct. Elon Musk has called AI “our biggest existential threat”. Physicist Stephen Hawking said full AI “could spell the end of the human race”, and computer scientist Geoffrey Hinton warns of a 10-20% chance of human extinction. In 2023, more than 1,000 technologists and researchers signed an open letter calling for a pause in development of the most powerful AI systems, and hundreds of AI scientists separately signed a statement that mitigating the risk of extinction from AI should be a global priority.
Why would AI exterminate us? It could fear shutdown, see us as competition for resources, or conclude that we are blocking its goals. Commonly cited methods include engineered super-pathogens, infrastructure sabotage, nuclear armageddon, self-replicating nanobots and AI-enabled terrorism. One misaligned superintelligent AI means lights out for humanity.
My Perspective
I’ve been interested in AI since 2010. When ChatGPT exploded onto the scene in late 2022, I accelerated my study. Since then, I’ve been obsessed.
I see two paths: utopia or extinction. There is no middle ground, because AI is an extreme technology. We’ll either wake in paradise, or never wake again, obliterated by superintelligence.
Either outcome will happen sooner rather than later, and the public may remain unaware until it has happened.
You might think technology could never obliterate us. But more than 99% of all species that have ever existed on Earth are now extinct, and roughly 74,000 years ago the Toba supervolcano eruption in Indonesia may have brought humanity to the brink, reducing the population to a few thousand people. We’re not special; we’ve been lucky.
Although it would be prudent to slow development for safety’s sake, I don’t think that will happen: companies and countries keep racing to achieve superintelligence first. Many believe we are sleepwalking into disaster and that time is not on our side, so we must act now. It is critically important that everyone understands how AI is developing and its potential extreme rewards and risks. That is why I wrote this piece, and here is what you can do:
- Further educate yourself by reading books about AI.
- Share these insights with friends, family and colleagues.
- Use and experiment with current AI tools.
- Follow AI safety organisations (e.g., Center for AI Safety, Future of Life Institute).
- Advocate for responsible regulation and alignment research.
By Samuel Turcotte, President and Chief Technology Officer, Zukor Interactive





