The Latest Mind-Bending AI Trick

If you’ve been paying attention, you’ll have noticed staggering progress in computer science over the last few years.

Not long ago I was staring blankly around the departures area at Copenhagen airport when I noticed something odd dangling from the ceiling across the room. A small, white plastic device… tiny holes at one end… hanging by a rubber-coated wire.

A microphone!

It turns out they’re all over the airport.

A story in the MIT Technology Review this week reminded me of those sinister microphones: scientists at Oxford have taught a computer to read lips with 46.8% accuracy. By comparison, the best human lip readers manage just 12.4%.

Great news for the hearing impaired… and state security forces. Computers will soon be able to sort through millions of hours of surveillance footage in real time to weed out troublemakers, and God knows what else.

The method, and the tricks

As I said at the top, there’s been staggering progress in computer science over the last few years. Computers have learned how to read lips, recognise pictures, diagnose diseases, drive cars, translate languages, understand speech, paint, and write music.

Any of those would be exciting in their own right. But remember – they’re all just different applications of one important new technology: machine learning.

It’s important to distinguish between the tricks computers have learned (driving a car) and the way they’ve learned (machine learning algorithms). The method is more interesting than the tricks.

I was reading a fascinating new book about this recently – The Exponentialist by Nick O’Connor. The book tries to do three things: help you understand important new technologies; project where the technologies are headed in the near future; and show you how to invest in them.

Nick gives a great overview of this topic. He gets to the point with the minimum of fuss. Here’s how he describes what’s going on:

“It shifts our thinking away from asking ‘how do we design this machine to do what we want it to?’ to asking ‘how do we teach this machine to achieve that?’

To grossly oversimplify things – and apologies if I’m doing a disservice to any computer programmers out there – it works by analysing vast data sets over time, and developing an almost intuitive understanding of the patterns and relationships at work.

Show it enough pictures of a dog in the park, and it’ll learn to spot that dog in a totally different location, in a context that it hasn’t come across before. Eventually, it’ll be able to spot and differentiate between different species of dogs, even if it’s never seen them before. It starts to learn patterns and relationships beyond the immediate sphere of what you’ve shown it.

That’s the dog-spotting industry turned on its head, then. The world will never be the same.”

In other words, computers armed with modern machine learning algorithms could soon be taught to do almost anything, as long as we have enough data to train them on.
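
To make that dog-spotting idea concrete, here’s a toy sketch in Python. It’s my own illustration, not anything from Nick’s book or a real vision system: each “picture” is boiled down to two made-up numbers, and the “learning” is just averaging the labelled examples (a nearest-centroid classifier). But it shows the shape of the thing – the model generalises to an example it has never seen.

```python
# Toy "dog spotter": learn from labelled examples, then classify a new one.
# The features are invented (say, ear length and snout length) - a real
# system would learn its own features from raw pixels.

def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        for i, value in enumerate(features):
            sums[label][i] += value
    return {label: [v / counts[label] for v in total]
            for label, total in sums.items()}

def predict(model, features):
    """Pick the label whose average example this one most resembles."""
    def distance(centre):
        return sum((a - b) ** 2 for a, b in zip(features, centre))
    return min(model, key=lambda label: distance(model[label]))

# Labelled training data: (features, label) pairs.
training_data = [
    ([7.0, 9.0], "dog"), ([6.5, 8.0], "dog"),
    ([2.0, 1.5], "cat"), ([2.5, 2.0], "cat"),
]
model = train(training_data)

# An example the model has never come across before.
print(predict(model, [6.0, 8.5]))  # -> dog
```

The point isn’t the maths – it’s that nobody wrote a rule saying “dogs have long snouts”. The program worked that out from the data.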

The Oxford researchers teamed up with Google’s DeepMind, whose machine learning systems are considered the state of the art. To train the system, they fed it 100,000 video clips from BBC television, all of which had already been subtitled. The computer learned essentially by guessing at what was being said, then testing its guesses against the BBC subtitles. Having learned a little in the process, it guessed again. And on and on.
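
That guess-test-adjust loop is the heart of most modern machine learning. Here’s a stripped-down sketch of it in Python – again my own toy illustration, nothing like DeepMind’s actual system. The “subtitles” here are just the correct answers to a trivial numbers problem, and the model is a single number that gets nudged towards being less wrong on every pass:

```python
# A toy guess-and-check learner: the model's whole "understanding" is one
# number, w. It guesses, compares the guess with the known answer (the
# "subtitle"), and adjusts w slightly in the direction that reduces the error.

data = [(1, 2), (2, 4), (3, 6)]  # (input, correct answer); the rule is answer = 2 * input
w = 0.0                          # the model starts out knowing nothing

for step in range(100):
    for x, answer in data:
        guess = w * x            # have a guess
        error = guess - answer   # test it against the known answer
        w -= 0.05 * error * x    # learn a little from the mistake

print(round(w, 3))  # -> 2.0: the rule, learned purely by guessing and checking
```

Scale that single number up to millions of adjustable numbers, and the trivial guessing game up to “what words match these lip movements?”, and you have roughly what the Oxford team did.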

Coming down the track

I feel like we’re in the calm before the storm. Machine learning has caught on incredibly quickly. Computer labs are teaching computers new skills all the time.

But computer labs move a bit quicker than society at large. We still don’t have self-driving cars on the streets, medical diagnosis software in the NHS, automated lawyers or creepy state surveillance. The rest of the world is only a couple of years behind the labs – and an awful lot has happened in the labs in the last few years.

Nick O’Connor’s book has raised a lot of questions for me. I’ll bring you a bit more from it tomorrow.
