Smart AI coders have it good.
Every big company on the planet wants them, and there’s only a small number of them to go around.
Not long ago they would have spent their days in the research labs of obscure universities. Now General Motors, Google, IBM, General Electric and Apple are scrambling to sign them up on huge salaries. They can pretty much name their price.
For example, in 2014 Google bought a London AI company called DeepMind for £378m. DeepMind was just an early-stage startup at the time, with no revenue and no product. But it did have a dozen top-drawer AI experts. Google shelled out all that money to get its hands on the talent.
Now Google, and other Silicon Valley companies, are circling around a London AI startup called weave.ai. Like DeepMind, it’s an early-stage startup which isn’t making any money. And as with DeepMind, what Google is really after is the four AI experts who work there. It’s likely to pay millions to get its hands on them.
The weave.ai deal says a lot about the state of AI – and where the money is.
A simulated mind
When I talk about AI, I’m really talking about a new computer programming trick called machine learning. Before machine learning – as in from the 1940s up to about 2010 – all software was more or less made in the same way. As I wrote last week,
Software always has been a set of detailed instructions for the computer to carry out. In the sixties it involved punching holes in a stack of cards, and feeding them into the machine. Today the instructions are issued in lines of code – the Microsoft Windows operating system is made up of 50 million lines of it.
Machine learning is a totally different way to program computers. Instead of telling the computer exactly what to do – say, to recognise a cat – programmers show the computer a million examples of a cat, and a million examples of things that aren’t a cat. Then the computer figures out how to recognise a cat for itself.
Machine learning is inspired by the human brain. The brain learns things by strengthening and weakening the connections between neurons. And machine learning works by adjusting the connections between layers of simulated neurons.
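To make that idea concrete, here’s a minimal sketch of a single simulated neuron (a perceptron) learning to tell two groups of points apart by nudging its connection strengths up and down. The data points, labels and learning rate below are all invented for illustration – real systems use millions of neurons and examples, but the weight-adjusting step is the same in spirit.

```python
# A single simulated neuron learning by adjusting its connection weights.
# Everything here (data, labels, learning rate) is a toy illustration.

def train(examples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths ("synapses")
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(examples, labels):
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Strengthen or weaken each connection in proportion
            # to how much its input contributed to the mistake.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# "Cat-like" points (label 1) vs "not-cat" points (label 0) -- made up.
examples = [(2.0, 2.5), (1.5, 2.0), (3.0, 3.5),
            (0.2, 0.1), (0.5, 0.3), (0.1, 0.4)]
labels = [1, 1, 1, 0, 0, 0]

w, b = train(examples, labels)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Notice that nobody writes a rule saying what a “cat-like” point is: the rule emerges from repeatedly correcting the weights against examples, which is exactly why the finished weights are hard to interpret.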
That’s great because it allows computers to do things they never could do before, like driving cars, writing music or speaking in natural language. But the problem with it is that it’s impossible to tell exactly how the computer solved the problem. Like the working of the human brain, the exact working of these algorithms is a mystery. Instead of building the algorithm like an engineer, machine learning coders tend the algorithm like a gardener.
That can be a problem. For example, last month a Tesla driver was tragically killed when his car crashed while on Autopilot. If Autopilot were programmed in the old-fashioned way, the software engineers could dig into the code and find the bug that led to the accident. But self-driving cars are coded using machine learning. They’re a black box. So we’ll never really know exactly what happened to that man, or why his car didn’t spot the truck that collided with his vehicle.
Like peering into a brain
Weave.ai is working on that exact problem – the “black box” nature of machine learning algorithms. It’s trying to come up with an AI that can not only, say, drive a car, but also explain how it drives a car. Weave.ai wants to make machine learning more transparent.
It’s an important problem when you start to think about the way machine learning will be used in the future. Just “solving the problem” often isn’t enough. Sometimes you need context. For example, if a computer tells your doctor you’re sick, your doctor will need to know why the computer thinks that. The diagnosis alone isn’t enough.
Or another example: Google released its latest big product this week: a personal assistant called… Assistant. Assistant can be summoned in Google’s messaging app or in a special speaker you keep in your home. The idea is that you talk to Assistant using natural language and it replies, like a person. If Google goes ahead with the weave.ai deal, you might be able to ask the Assistant what’s so good about the restaurant it recommended, or why it’s sending you off your usual route to the office today.
The thing is, the likes of Google and Amazon aren’t the only companies making use of machine learning. Machine learning is getting incredibly cheap and easy to implement. It’s at the point where it’s becoming embedded in all sorts of products, from insurance policies to railway networks.