Import AI: Issue 5: The Not-So-Crazy Neural Lace, Robot Problems and Solutions, and Neural Phones.
by Jack Clark
Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.
Cyborgs Are Closer Than You Think: Elon Musk says it would be a good idea for people to get some machinery wired into their brain to make them smarter and better able to compete with robots and AI. It turns out this is easier to do than you’d assume. A group of researchers published a paper yesterday that described “a lace-like electronic mesh that ‘you could literally inject’ into three-dimensional synthetic and biological structures like the brain.” This technology could eventually be used to deal with medical conditions and/or to enhance cognitive performance. “I think our goal is to do something, and I think it’s possible to, number one, correct deficiencies. And I wouldn’t mind adding a terabyte of memory,” said Harvard professor Charles Lieber, whose lab developed the mesh, in an interview with Nautilus.
Please build this for me: Most modern AI techniques require a vast amount of labelled training data. If you talk to experts you’ll find that they each have intuitions about exactly how much data you’d need for a given task, whether that is a few thousand pictures for detailed classification, or a few hundred thousand words for text generation. Is it possible to build a small application that can, given a rough outline of the task (say, classify images at resolution Y with Z accuracy), estimate how much data you’ll need?
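One plausible core for such a tool is learning-curve extrapolation: run a few cheap pilot experiments on small subsets of data, fit a curve to the resulting error rates, and extrapolate to your accuracy target. The sketch below assumes the commonly observed empirical pattern that test error decays as a power law in dataset size; the pilot numbers are invented for illustration, not real measurements.

```python
# A hedged sketch of the tool asked for above. Assumption: test error
# follows a power law, err(n) ~ a * n**(-b), so fitting a straight line
# in log-log space lets us solve for the n that hits a target error.
import math

def estimate_required_examples(sizes, errors, target_error):
    """Fit log(err) = log(a) - b*log(n) by least squares, invert for n."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(e) for e in errors]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_a = my + b * mx
    # target_error = a * n**(-b)  =>  n = (a / target_error) ** (1 / b)
    return math.exp((log_a - math.log(target_error)) / b)

# Illustrative pilot runs: error halves each time the data quadruples.
pilot_sizes = [1_000, 4_000, 16_000]
pilot_errors = [0.20, 0.10, 0.05]
print(round(estimate_required_examples(pilot_sizes, pilot_errors, 0.02)))
# → roughly 100,000 examples needed for 2% error, under these assumptions
```

The power-law assumption is the load-bearing part; for tasks where the curve saturates early, this estimator will be badly optimistic, which is exactly why pilot runs at several sizes matter.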
Neural phones: Samsung gave a talk at Hot Chips last week in which it said it was using neural networks for branch prediction in the M1 processor cores inside its S7 and S7 Edge smartphones. “If your CPU can predict accurately which instructions an app is going to execute next, you can continue priming the processing pipeline with instructions rather than dumping the pipeline every time you hit a jump. ‘The neural net gives us very good prediction rates,’ said Brad Burgess, who is Samsung’s chief CPU architect”, reports The Register. (As many have subsequently pointed out, people may have been using similar techniques for many years, but instead of calling it a neural network, they called it a perceptron. Marketing!)
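The perceptron version of this idea has a long pedigree in computer architecture (it dates back to Jiménez and Lin’s perceptron branch predictor from 2001). Below is a heavily simplified, hypothetical sketch of how such a predictor works, assuming one perceptron per branch address and a short global history register; real hardware uses hashed weight tables and small fixed-point weights, not Python dictionaries.

```python
# Toy perceptron branch predictor: each branch gets a weight vector over
# the recent global history of outcomes (+1 taken, -1 not taken). The
# prediction is the sign of the dot product; training only happens on a
# mispredict or a low-confidence correct prediction.

HISTORY_LEN = 8
THRESHOLD = 1.93 * HISTORY_LEN + 14  # training threshold from the literature

class PerceptronPredictor:
    def __init__(self):
        self.weights = {}                  # branch address -> weight vector
        self.history = [1] * HISTORY_LEN   # global outcome history

    def _w(self, addr):
        return self.weights.setdefault(addr, [0] * (HISTORY_LEN + 1))

    def predict(self, addr):
        w = self._w(addr)
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0, y                   # taken iff dot product non-negative

    def update(self, addr, taken):
        pred, y = self.predict(addr)
        t = 1 if taken else -1
        w = self._w(addr)
        if pred != taken or abs(y) <= THRESHOLD:
            w[0] += t
            for i, hi in enumerate(self.history):
                w[i + 1] += t * hi
        self.history = self.history[1:] + [t]

predictor = PerceptronPredictor()
for _ in range(20):
    predictor.update(0x4F0, True)          # a loop branch, always taken
print(predictor.predict(0x4F0)[0])         # → True
```

The appeal over classic two-bit counters is that the perceptron can correlate a branch with individual bits of history, so longer histories help rather than just adding noise.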
Unleash the robots (with free data)! Right now, making truly smart robots is a challenge. That’s because the common way of developing modern AI software involves spending hours training your computers on simulated data. Modern computers are very fast, so an algorithm can play a hundred games of Tetris in a few minutes. This approach doesn’t work very well for robotics. That’s because the simulators the robots are being trained in don’t fully reflect the fizzing complexity of the real world. (New simulation environments are being developed, though, including Google DeepMind’s ‘Labyrinth’.)
So, if simulation is inefficient, what else can you do? The answer, if you have lots of money, time, and access to an office containing 14 robot arms, is to train your robots in the real world. That’s what Google did earlier this year, when it created what Googlers called the ‘arm farm’. Over the course of several months its robots learned to pick up a variety of different objects through a process of (smart) trial and error. Data from one real-world robot was transferred to the others, letting the Mountain View, California company’s dexterous servants learn in the same networked way that Tesla’s self-driving cars do. So it was a pleasant surprise to see Google release the data from those experiments last week. The data gives researchers around 650,000 examples of robot grasping attempts and 59,000 examples of pushing motions.
Transfer Learning: It’s likely that clever robots will be created through a combination of training in the virtual world and the real world. Being able to take insights gleaned from one environment, like a simulator, and apply them to another, like a real-world disaster zone, is one of the grand challenges in AI. Google DeepMind recently published a paper on ‘progressive networks’, which let it take a neural network that has learned to tackle one problem and daisy-chain it, via lateral connections, to a new network. This lets the new network tap into insights learned by the earlier ones, reducing training time. This means you can train a bunch of networks in a simulator, then attach those to a neural network tackling problems on a real-world robot, which can then learn to do things in less time than it would from scratch.
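The core structural idea can be shown with a deliberately tiny toy: a frozen “column” trained on task A feeds its hidden activations sideways into a new column being trained on task B, so the new column can reuse old features without overwriting them. Real progressive networks are deep and trained by gradient descent; the one-dimensional “networks” and all weights below are made up purely for illustration.

```python
# Toy illustration of the progressive-networks wiring: column 1 is frozen,
# column 2 receives column 1's hidden activation through a lateral weight.

class Column:
    def __init__(self, w_in, w_out, lateral_w=0.0):
        self.w_in = w_in            # input -> hidden weight
        self.w_out = w_out          # hidden -> output weight
        self.lateral_w = lateral_w  # weight on the previous column's hidden
        self.frozen = False

    def hidden(self, x, lateral=0.0):
        return max(0.0, self.w_in * x + self.lateral_w * lateral)  # ReLU

    def forward(self, x, lateral=0.0):
        return self.w_out * self.hidden(x, lateral)

# Column 1 learned task A; its weights are now frozen.
col1 = Column(w_in=1.0, w_out=2.0)
col1.frozen = True

# Column 2 trains on task B, reusing col1's features via the lateral link.
col2 = Column(w_in=0.5, w_out=1.0, lateral_w=0.3)

x = 3.0
out = col2.forward(x, lateral=col1.hidden(x))
print(out)  # → 2.4: 0.5*3 from its own input path, plus 0.3*3 laterally
```

The key design choice is that only the new column (including its lateral weights) would be updated during training on task B, which is what prevents the catastrophic forgetting you’d get from fine-tuning a single network.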
Computer, Enhance! Last week I asked for a service that could let me upscale my pictures using neural networks. It exists! Now I’ve found another nice example on GitHub, with code. For an example of how upscaling can go wrong, take a look at the third-from-bottom picture in the GitHub repo.
Hands Across The Human-Machine Divide: Berkeley professor Stuart Russell will lead the Center for Human-Compatible Artificial Intelligence, which launched this week. “The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values,” says Berkeley News. Funding for the center comes from the Open Philanthropy Project, the Leverhulme Trust, and the Future of Life Institute.