On nearly all my trips to Silicon Valley, I begin with an early morning trip back to the future.
These days everyone believes that artificial intelligence (AI) is the big new thing that will change the world and determine the wealth and power of nations. In Washington, many heavy-breathing fearmongers think that if China takes the lead in AI, it’s all over for the US.
That is, unless we fight back with “quantum computing” or vorpal blades, or 7G wireless, or some other perpetual motion miracle machine.
Understanding the Truth Behind AI
Before I left to shake up the MoneyShow last week with my own views on AI, 5G, and crypto, I made sure to get up at 5 a.m. to catch technologist supreme Nick Tredennick’s breakfast séance.
Fresh from a Computer Museum interview, Nick was the guy who designed the 68000 microprocessor that made the first Apple Macs hum. Then he pioneered field-programmable gate arrays (FPGAs) for telecom. Now as CEO of Jonetix, he is providing hardware security for the internet of things (IoT).
But every Thursday, long before the sun rises over Santa Cruz, he breakfasts at a Paleo-American diner in Los Gatos with a group of sages from the days before chemophobic rich people and their lawyers banned the fabrication of silicon chips in the valley.
Nick’s wizened wise men are the guys who designed the microprocessor chips and contrived the new systems in information technology that made the internet and personal computer as new-fangled and fashionable back then as artificial intelligence is today.
In fact, back then, as these guys can tell you, AI had already hatched and was being used to frighten little children with the prospect that there would be no jobs left when they grew up.
Now, with a thousand-fold acceleration of switching speeds on chips and with parallel processing on graphics processors and other devices, the new generation alleges that AI will not only take our jobs but also blow away our minds. We’ll all be left as mere carbon slaves to AI super brains.
But these guys were having none of it. No sooner had we taken our seats in the diner, ordered piles of eggs and bacon, and begun imbibing emergency caffeine than Keith Diefendorff began dissing AI hype.
“AI isn’t working for general purpose computing,” Diefendorff opened up. He commands an array of patents in microprocessor architecture, reduced instruction set computing (RISC), optical interface technology, and other advances. He led the team that created the PowerPC micro-family for IBM and Motorola and later Apple.
He also served for nearly a decade as editor in chief of the Microprocessor Report. He knows his stuff. He had been hanging out with guys doing AI research for major companies. “Don’t quote me on their names, George!” he barked at me across the table. “But they are getting nowhere with general purpose tasks.”
“AI is proving good for specific niches.” And those niches are important ones: recognizing faces, interpreting speech, implementing an advertising algorithm. But beyond those niches, AI systems are as at sea as a shark at a chess game.
The rest of the sages offered a chorus of learned agreement. AI is just another advance in computer technology, like the advances that came before it. It is not creating rivals for the human brain.
“Games, in fact, are what they do best.” AI gained its reputation by beating chess masters and world Go champions at their forte. If they can play the complex Asian strategic game of Go, so the logic went, AI machines could soon be writing novels and discovering new laws of physics.
Then they would begin programming themselves, making ever more powerful machines and taking over the Universe.
Don’t Believe the Mainstream Media
By having successful programs pair off against one another, or against themselves, researchers have forced a spiral of improvement.
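That spiral can be sketched in miniature. The toy below is my own illustration, not anything the researchers described here actually run: it hill-climbs a policy for a simple take-away game (take one to three stones from a pile of ten; whoever takes the last stone wins) by promoting any mutant that beats the reigning champion from both seats.

```python
import random

PILE = 10  # starting stones; each turn a player takes 1-3; taking the last stone wins

def play(policy_a, policy_b):
    """Play one game; return 0 if policy_a (moving first) wins, else 1."""
    pile, turn = PILE, 0
    policies = (policy_a, policy_b)
    while True:
        move = max(1, min(3, policies[turn](pile)))  # clamp to a legal move
        pile -= move
        if pile <= 0:
            return turn  # this player took the last stone and wins
        turn = 1 - turn

def table_policy(table):
    """Turn a lookup table {pile: move} into a policy function."""
    return lambda pile: table.get(pile, 1)

def mutate(table, rng):
    """Randomly tweak one entry of the move table."""
    new = dict(table)
    new[rng.randint(1, PILE)] = rng.randint(1, 3)
    return new

def self_play_train(iterations=2000, seed=0):
    """Hill-climb a policy by pitting mutants against the current champion."""
    rng = random.Random(seed)
    champ = {n: 1 for n in range(1, PILE + 1)}  # naive start: always take 1
    for _ in range(iterations):
        cand = mutate(champ, rng)
        # a candidate must win from both seats to dethrone the champion
        wins_first = play(table_policy(cand), table_policy(champ)) == 0
        wins_second = play(table_policy(champ), table_policy(cand)) == 1
        if wins_first and wins_second:
            champ = cand
    return champ
```

The real systems replace the lookup table with a deep network and the hill-climb with gradient updates, but the self-play structure is the same: the program's only opponent, and only teacher, is an earlier copy of itself.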
Because a game of Go offers an unfathomable immensity of possible positions, comparable to the number of molecules in the Universe, these successes are believed to portend the creation of superhuman machines. These devices, the story goes, can easily displace human jobs of all kinds, despite their limited goals and challenges.
Fed with ever bigger data collected by sensory IoT, computers will first displace lawyers (the breakfast sages were rather happy with that prospect). Then they will usurp accountants and teachers.
To observers of such trends, it is easy to imagine a future in which the role of humans steadily shrinks.
The basic problem with these ideas is their misunderstanding of what computers do.
Computers shuffle symbols. As philosopher Charles Peirce observed more than a century ago, the links between computational symbols and their objects are indefinite and changing. The map is not the same as the territory.
The links between symbols and objects have to be created by human minds.
Therefore, computations at the map level do not translate to reliable outcomes on the territorial level.
For the game of Go or chess or some routinised task, the symbols and objects are the same. The white and black stones on the Go board or the pieces on the chess board are both symbols and objects at once. The map is the territory.
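To make that concrete with an illustration of my own (not one from the breakfast table): a tic-tac-toe position is completely captured by nine symbols, so a program reasoning over those symbols is reasoning over the game itself. Nothing outside the encoding can intrude.

```python
# The tuple of nine cells IS the entire game: symbol and object coincide,
# so the winner can be computed from the symbols alone.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X', 'O', or None for a 9-cell board of 'X'/'O'/'.'."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None
```

No such closed encoding exists for a highway or a courtroom: there the symbols are a map drawn by human minds, and the territory can always surprise the map.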
As my breakfast companions know well, in order to have correspondence between logical systems and real world causes and effects, engineers have to interpret the symbols rigorously and control them punctiliously and continuously. Programmers have to enforce an interpretive scheme between symbols and objects that banishes all slippage.
There can be no butterfly effects, black swans, entrepreneurial surprises, or other novelties.
Big data from billions of sensors and sources do not begin to comply with these requirements. Software maps will never enable a car to drive itself safely and reliably without major hardware advances in vision systems.
And contrary to popular belief, AI will create jobs rather than destroy them, as all computer technology has done throughout history.