I’ve been reading and writing about the big breakthrough that’s happened in computer science. Last week I hinted at exactly what it is and how it works, but I stopped short of fully explaining it. That’s today’s job.
Last week I went back to the beginning of the story. I talked about Moore’s Law, and how today’s computers are smaller, cheaper and faster than ever before.
Yes, computers have gotten better. But my point was that they’re basically the same as they were fifty years ago. They’re still dumb counting machines. They can count faster than before and they can remember more stuff than before. But the basics haven’t changed.
The point applies to software too. Software has obviously gotten fancier, faster, and more important over time. But it’s still made in the same way as fifty years ago. Software code is still a list of explicit step-by-step instructions for the machine to follow. The code has gotten longer and more complicated. But it’s still basically a list of instructions.
I’ve been calling it a breakthrough – but what exactly “broke through”? What’s new here?
Artificial intelligence researchers have been tinkering away for forty or fifty years. They’ve tried lots of different approaches to creating a computer that can think for itself. They’ve tried systems based on statistics, systems based on rules, systems based on analogy.
One of the ideas, which was fashionable for a spell in the 1980s, is called “neural nets”. Neural nets try to mimic the structure of the human brain in a computer. They solve problems – like how to recognise a chair – by strengthening and weakening the connections between layers of neurons.
Anyway, neural nets have been around for a while. But for them to work, they need to be “trained” on lots of data (photographs of chairs, in our example). And they need super-powerful computers to run them.
Back in the 1980s, data was too scarce and computers were too slow to make neural nets work. But that’s not the case any more. And in 2006, a scientist called Geoffrey Hinton realised that these neural nets work a lot better when you increase the number of “hidden layers” of neurons. He made a tweak to the old neural net algorithm and called the technique “deep learning”. It worked.
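To make that concrete, here’s a minimal sketch in Python of the kind of neural net described above: a single hidden layer of “neurons”, trained on a toy problem by repeatedly strengthening and weakening the connections between layers to reduce error. The layer sizes, learning rate and number of training rounds here are arbitrary illustrative choices, not Hinton’s actual setup.

```python
import numpy as np

# Toy neural net with one hidden layer, learning XOR.
# All specific numbers (4 hidden neurons, learning rate 1.0,
# 10,000 rounds) are arbitrary choices for illustration.

rng = np.random.default_rng(0)

# Training data: inputs and the answers we want the net to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Connection strengths ("weights"), started at random values.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input  -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    # Squashes any number into the range 0..1.
    return 1 / (1 + np.exp(-z))

lr = 1.0  # learning rate: how hard to nudge the weights each round
for _ in range(10_000):
    # Forward pass: run the inputs through the layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error, then strengthen or weaken
    # each connection in proportion to its share of the blame.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

# Predictions for [0,0], [0,1], [1,0], [1,1] after training.
print(np.round(output.ravel(), 2))
```

“Deep” learning is essentially this same recipe with many hidden layers stacked between input and output instead of one.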
Those three things – deep learning algorithms, scads of data, and faster computers – have made machine learning possible. Now everyone’s at it. Governments and companies are throwing money at it. It’s getting cheap enough that small technology businesses can use it. It’s changing the world, and making real money for businesses that have learned how to put it to work.
As Jeff Dean, a senior fellow at Google, put it: “computers are starting to open their eyes.”
P.s. This might read like a lot of boring computer science. But deep learning and neural nets are going to change the world! The White House’s annual report last year said that automation (in other words, deep learning algorithms) has an 83% chance of replacing jobs that pay less than $20 per hour. Machine learning is coming, whether we like it or not!
What do you think? You can get me at Sean@agora.co.uk