Exponential A.I.
THE FUTURE OF ARTIFICIAL INTELLIGENCE
THE ZEITGEIST HAS CHANGED
A strange thing happened this year. Enough people got a taste of ChatGPT that everyone – not just the specialists and the experts and the futurists but the moms and pops and the kids too – caught up and realised that AI was not just big in the conventional sense of “this could change how I do stuff,” but in the transformative, “this-could-change-everything” sense. A mania developed. Investors clamoured. Startups multiplied. Giants like Microsoft and Google swiftly accelerated their AI releases. The zeitgeist changed!
That’s a good thing. It also feels like … relief. For the past seven years I’ve been working hard to convince audiences of the boundless AI-driven opportunities to rethink processes, products and services for the better, across all industries. I think (hope) that task just got a whole lot easier.
Audiences have also been asking bigger questions about the overall AI trajectory, human-equivalence, threats, implications for humanity and so forth.
Here is a brief update on my thinking about the overall trajectory of AI over the next 5-10 years.
THE FIRST ONE PERCENT
As big as you think this is … you ain’t seen nothing yet! In recent years I’ve had the privilege of meeting and listening to Turing Award laureate Yann LeCun from Meta, Prof Andrew Ng from Stanford, Profs Patrick Winston, Daniela Rus and Josh Tenenbaum from MIT, Prof Paul Newman from Oxford, Prof William “Red” Whittaker at Carnegie Mellon, Prof Hod Lipson from Columbia, and a host of other AI experts, and everywhere I go, every lab, I see furious R&D and an explosion of development pathways. It has gradually dawned on me, too, that the leading scientists are not fully aware of what the others are working on. They can’t be. It’s moving too fast and on too many fronts. The staggering reward potential will sustain the pace for another 10 years … perhaps another 30.
The AI technology problems that are grabbing the headlines today – hallucinations, opacity, biased inputs, racism, no chain-of-reasoning (show your working), models that are weakened by corrections and back-propagation, and so on – will be progressively reduced or eliminated. An army is working on these problems. No one is calling these problems unsolvable.
A hugely important driver is the parallel field of neuroscience, where developments also continue at a stunning pace. As we push back the biological frontiers of the brain, so too can we expect more cross-overs between brain-function metaphors and computer science, just as happened with the neural network breakthroughs that are powering the current AI revolution. We can expect AI computer architectures to become ‘more biological.’ Eventually, as we become fully proficient in understanding and manipulating the building blocks of life, AI computing may become biological (ie exploiting living cells to store and process information).
One scientist summed up all AI goals to me thus: “learn like babies, learn forever.” By this he meant: get AIs learning through a combination of models they are ‘born’ with, data taught to them, and data they find for themselves by exploring the world, and, unlike babies, never dying.
So one thing we can be absolutely certain of is this: AI will become vastly more capable. What we are seeing and experiencing today is baby steps. I like to say, we are seeing only the first one percent.
THE #1 TECHNOLOGICAL DRIVER OF OPPORTUNITY
That message translates to opportunity. We’ve barely scratched the surface of the overall opportunity. The horizons are expanding at warp speed. Each time we approach the limits of today’s AI, the next step-change will open up new layers of opportunity. No matter how big you imagine the future impact of AI will be, it will be bigger. Everywhere I look, every industry, every business, every job function, I come up with endless lists of useful AI applications to boost efficiency, save money and improve outcomes. This is why AI remains my biggest technological driver of change and opportunity across all industries, all sectors, through the next twenty years. (CRISPR and gene-editing are second. Climate change is the biggest non-technical driver, as well as the biggest overall.)
MULTIMODAL, MULTILAYERED, MODULAR
Here is how I think about the next 5-10 years in AI technology in three words. It’s more my mental model of important trajectories than a summation of the computer science, but perhaps you’ll also find it helpful:
AI becomes increasingly multimodal and multilayered and modular:
Multimodal in that systems will combine language models, physical and emotional models, vision systems, other sensory models, and curated and non-curated modes of learning and experimentation, in a fashion analogous to how babies learn: by using multiple senses simultaneously, by drawing on models they are born with, through combinations of exploration and experimentation, by attending lessons, by internalising stories, etcetera.
Multilayered in that systems will combine higher and lower order functions in a fashion analogous to cortices in the human brain. Some of the functions will be ‘instinctive,’ some will align with modes of learning, some will have superior coordinating functions. The uppermost levels will likely be highly transparent and instil governing principles and human values, perhaps even by embodying clunky old rules-based AI (as opposed to the learning-by-example breakthroughs that have powered the current revolution). I draw inspiration here from Yann LeCun’s dreamings of future AI systems using cortical metaphors, albeit his are more complex and nuanced than I’ve just described.
Modular in that I see a future where many AIs make ad-hoc service calls to many other AIs, building on the service-oriented principles that powered the software-as-a-service revolution of the past two decades. 1,000 or 10,000 or perhaps 100,000 specialist AI modules will eventually be accessible as a service online, with best-of-breed modules run by specialised curators with high regard for quality control, bias issues, continuous refinement and so on.
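To make the modular idea concrete, here is a minimal sketch of the pattern: specialist AI modules registered behind one common interface so an orchestrating AI can make ad-hoc service calls to whichever specialist fits a sub-task. Everything here is hypothetical and illustrative (the registry, module names and capabilities are my inventions, and real modules would sit behind network APIs rather than local functions):

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative sketch only: specialist AI "modules" registered as callable
# services behind one common interface, so an orchestrating AI can chain
# ad-hoc service calls. All names here are hypothetical.

@dataclass
class ModuleInfo:
    name: str
    capability: str                    # the kind of sub-task the module handles
    handler: Callable[[str], str]      # stand-in for a remote AI service call

class ModuleRegistry:
    """Toy stand-in for an online catalogue of best-of-breed AI modules."""
    def __init__(self) -> None:
        self._modules: Dict[str, ModuleInfo] = {}

    def register(self, info: ModuleInfo) -> None:
        self._modules[info.capability] = info

    def call(self, capability: str, payload: str) -> str:
        if capability not in self._modules:
            raise LookupError(f"no module offers capability {capability!r}")
        return self._modules[capability].handler(payload)

registry = ModuleRegistry()
registry.register(ModuleInfo("demo-translator", "translate",
                             lambda text: f"[translated] {text}"))
registry.register(ModuleInfo("demo-summariser", "summarise",
                             lambda text: f"[summary] {text[:20]}"))

# An orchestrating AI would compose ad-hoc calls like this:
result = registry.call("summarise", registry.call("translate", "long source document"))
```

The design choice worth noting is the single `call(capability, payload)` contract: it is exactly the service-oriented principle from the SaaS era, applied so that new specialist modules can be swapped in or curated independently without the orchestrator changing.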
ELECTRICITY INHIBITOR?
AI energy demand will also increase exponentially (AI processing demands a lot of energy) and thus inevitably come into direct conflict with climate imperatives and the many competing demands for renewable energy supplies.
Neuromorphic chip architectures, already employed in “edge-AI” and niche energy-constrained applications such as small flying robots, MAY offer a transitional pathway to vastly more energy-efficient AI computation at the centre, but when exactly that transition might begin, and how quickly it can scale, needs further exploration. If neuromorphic architectures don’t deliver, businesses and citizens may be so bedazzled by the benefits that AI runs roughshod over climate goals and pushes net-zero out by years. Yes, I know how that sounds, but based on what humans have tolerated with crypto, which is energy-guzzling and offers NO intrinsic value to humanity, this is a realistic possibility.
HEALTHCARE AND AI-ASSISTED ENGINEERING
The two categories of AI-driven opportunity that excite me most, in terms of benefit to humankind, are AI in healthcare, and AI to accelerate the engineering of new molecules and materials.
AI is already revolutionizing healthcare from end to end and will continue to revolutionize hospital operations, neurological interventions, prosthetic interfacing, predictive epidemiology and much more. If AI saves nurses and physicians a mere twenty percent of the time they spend documenting, that will translate to a $55 billion per annum productivity dividend (a benefit most likely to be realised in additional patient coverage). Just that one tiny application! Add what AI-assisted home diagnostics will save in unnecessary doctor visits and we’ve saved a couple of percentage points of the entire US healthcare budget and we’ve hardly started! And what about the 30,000 lives a year we can save from earlier and more accurate diagnosis of cancer? What is that worth? The scale of benefit to humankind will be immense.
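The $55 billion figure above can be sanity-checked with back-of-envelope arithmetic. Every input below is an illustrative assumption of mine (not a sourced statistic): roughly five million US nurses and physicians, 2,000 paid hours a year, an average fully-loaded cost of $55 per hour, and half of clinical time spent on documentation. Under those assumptions, a 20 percent documentation saving lands at the order of magnitude the text cites:

```python
# Back-of-envelope check of the "$55 billion per annum" productivity dividend.
# All inputs are illustrative assumptions, not sourced statistics.

clinicians = 5_000_000        # assumed US nurses + physicians
hours_per_year = 2_000        # assumed paid hours per clinician
cost_per_hour = 55            # assumed average fully-loaded cost, USD
documentation_share = 0.50    # assumed fraction of time spent documenting
time_saved = 0.20             # the 20% saving cited in the text

dividend = (clinicians * hours_per_year * cost_per_hour
            * documentation_share * time_saved)
print(f"${dividend / 1e9:.0f} billion per annum")  # prints "$55 billion per annum"
```

Change any assumption and the total shifts, but the point survives: even one narrow application of AI to clinical documentation plausibly sits in the tens of billions of dollars per year.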
On the engineering side, AI is already accelerating the discovery of new drugs, new batteries, new polymers and more efficient enzymes for all kinds of manufacturing and recycling processes. The impacts of those will be measured not in billions, but in trillions.
Still on energy, AI will make our electricity grids far more resilient, adaptive and predictive (all weather forecasting will soon be done by AI) and help design better plasma controls to make fusion reactors commercially viable (in fact AI has a role to play in accelerating ALL other technologies). In transportation, today’s AI is capable of giving us the same passenger service levels while cutting 80 percent of the traffic from our roads, if we let it. And we haven’t even talked about the long list of fully-autonomous vehicles coming, and the myriad flavours of robots!
I could list ten-thousand AI opportunities and I’d still be adding more. You get the point: it’s boundless. But anyway, those are my two biggies.
HUMAN EQUIVALENCE
We can neatly side-step the metaphysical questions of what intelligence is and whether AIs will become conscious by simply accepting that AI is already, for practical purposes, indistinguishable from human intelligence in narrow contexts, and will continue to exceed human capabilities on ever-broader fronts (funny how the Turing test, once the fanfare for a new era, slipped by with barely a squeak).
Importantly for predicting the future, human behaviors are shaped by day-to-day experiences of AIs (perceptions) not by any detailed knowledge of what’s really going on under the hood. AIs don’t need to be conscious for people to believe they are conscious. Which is to say, I expect a lot of people will come to believe they are conscious. It’s going to get weird ...
AI WILL NOT SOLVE OUR BIGGEST PROBLEMS
We cannot expect magic. AI will not eliminate wars or greed or famine. AI will not defy the laws of physics to produce clean energy at zero dollars, or find a way to vacuum the sky of greenhouse gases without requiring vast amounts of electricity to power it. AI will not lead humanity, Moses-like, out of climate change. It will not do these things for the simple reason expressed by Yann LeCun: we already have the solutions, they have been laid out for us for decades, and we choose not to take them. The deficiency is not lack of solutions; the deficiency is our lack of will.
AI NEGATIVES
And of course there are some corrosive and downright scary AI-driven effects coming too, some of which I explore here.
FROM THE ARCHIVES: My 2007 PREDICTIONS
Here are some of the predictions I made way back in 2007 in The Future of Business – How Information Technology Will Transform Industry, Organizations and Work. The ‘forecast horizon’ referred to in the text was 2008-2018. Did I get it right?
Computers that are effectively indistinguishable from humans will be a practical reality in specialised domains within the forecast horizon, especially in online and over the phone customer service, but machines that intelligently reason as humans do (as opposed to simulating humanlike intelligence) will not eventuate within the forecast horizon.
All types of computer systems will become progressively better at learning and generating their own information models instead of being constrained to those supplied by humans. Decision support systems will progressively incorporate the ability to identify gaps in their own knowledge.
Biological computing will significantly improve the ability for machines to emulate ‘learning’ and ‘thinking’ processes more closely, delivering computers that are more task flexible and vastly more usable in business, but most of the impact will come beyond the forecast horizon. Both quantum and biological computing have the potential to produce revolutionary change in processing capabilities and artificial intelligence beyond the forecast horizon.
Computing progress will be interdependent with progress in biological sciences: deeper understanding of the biology of neurons, synapses and brain cells will underpin advances in biological computing, and computer experimentation will advance understanding of natural biology.
Autonomous vehicles and robots will become more common and considerably more capable, with deployments concentrated in roles unsuited to human workers because of unsafe or harsh conditions, and in process work where they can greatly reduce labour costs. Drivers will include the falling cost of components such as processors, cameras, sensors, servos, wireless chips and GPS; improved methods for programming simultaneous processing; more standardised interfaces; and growing experimental communities accelerating innovation. General purpose robots that closely simulate human characteristics and perform multiple roles will only account for a small percentage of commercial deployments within the forecast horizon.
All good so far. I’m proud of that first paragraph especially. We are now achieving exactly that, and I began spotting applications before 2018. High-five, baby! … But then I kind of ruined it by including the following comment on artificial general intelligence (AGI) in my endnotes:
Machine intelligence may never surpass human intelligence because intelligence and the workings of the human brain are so deeply complex and poorly understood. Simulating its capabilities goes well beyond fast computation. It requires a much deeper understanding than we have today of learning, information retrieval, pattern matching and sensory capabilities. If human equivalent intelligence is ever achieved for machines, the timeframe will be beyond 2060. Such capabilities, if realised, would be more disruptive to the conduct of business than any other development described in this report. Note that many people disagree with me and are confident that machine intelligence will surpass human intelligence in a shorter timeframe. The futurist Ray Kurzweil, for example, predicts that we will reach ‘singularity’—when computers achieve smarter-than-human intelligence—in 2029.
Since then I’ve - ahem - reconceptualized what artificial general intelligence is. And 2060? Really? Yeah, I think Mr Kurzweil was closer to the mark than I was! In my defense, he didn’t build his arguments on the pathways AI actually took via the efforts of Geoff Hinton, Yoshua Bengio and Yann LeCun – but hell, that’s just my battered ego talking. Also against me is how long it took me to decide AGI was inevitable. Kurzweil wrote it in the 1990s, and I thought he was completely bonkers. I said it for the first time twenty-eight years later, in a 2018 keynote to University of Technology Sydney alumni entitled Rethinking Artificial General Intelligence. Respect, Mr Kurzweil! Respect!
WHAT ABOUT YOU?
Now is the perfect time to be opening our minds to new AI-driven opportunities and possibilities.
One of my greatest joys is imagining all the big new AI opportunities for a specific industry. All my keynotes are positive and opportunity-focused because this is the best way I know to activate people to make a positive difference to their own future. I’ve had the privilege of helping lawyers, insurers, doctors, nurses, commercial realtors, public servants, manufacturers, transport-operators, police, investors, teachers, bankers, energy personnel and more connect with hundreds of game-changing AI opportunities.
Reach out, and let’s explore the implications of AI for your industry at your next event!