Import AI: Issue 9: Virtual worlds for virtual brains, better Hutter Prize performance, Microsoft’s new FPGA cloud

by Jack Clark

First the words, then the worlds: The recent surge of interest in artificial intelligence prompted many companies to build and release free AI programming frameworks; these include Google’s TensorFlow, Microsoft’s CNTK, Amazon’s DSSTNE, Baidu’s Paddle, Skymind’s DL4J, Nervana’s (now Intel’s) Neon, and others. Now we’re onto the next thing: environment interfaces. Just as an AI framework gives programmers a reasonably high-level language for getting computers to perform the sorts of complex commands suited to AI development, environment interfaces will make it relatively simple for programmers to hook an AI system up to an environment for it to learn and grow in. OpenAI released OpenAI Gym earlier this year to do just that. Now others are investing in the space, creating more free tools for the global AI community. Facebook this week announced CommAI-env (Environment for Communication-based AI), which makes it easy for programmers to hook agents up to a text-based communication layer, giving them a way to talk to a text-based environment where they need to solve a variety of challenging tasks.
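The core of these environment interfaces is a small loop: reset the environment to start an episode, send an action, and get back an observation, a reward, and a done flag. Here’s a minimal sketch of that pattern as popularized by OpenAI Gym; the `EchoEnv` class is a hypothetical toy text environment invented for illustration, not part of Gym or CommAI-env.

```python
class EchoEnv:
    """Toy text environment: the agent is rewarded for echoing the prompt."""

    def reset(self):
        # Start a new episode and return the initial observation.
        self.prompt = "hello"
        return self.prompt

    def step(self, action):
        # Score the agent's action and return (observation, reward, done).
        reward = 1.0 if action == self.prompt else 0.0
        done = True  # one-step episodes for simplicity
        return self.prompt, reward, done


env = EchoEnv()
obs = env.reset()
obs, reward, done = env.step(obs)  # a trivial "echo" agent acts
```

Real interfaces (Gym, CommAI-env) differ in detail, but this reset/step contract is what lets the same agent code be pointed at many different environments.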

(These tasks range from staying silent for a period of time determined by the CommAI-env software-based teacher, to reading a list and manipulating it, to navigating through a virtual world described by text, to many others. “While the tasks might appear almost trivial (but try solving them in the scrambled mode we support, where your knowledge of English won’t be of help!), we believe that most of them are beyond the grasp of current learning-based algorithms, and that a Learner able to solve them all would have already made great strides towards the level of communicative intelligence required to interact with, and learn further from human teachers,” the Facebook researchers write.)

After environment interfaces, it’s likely there’ll be an arms race to release the environments themselves. DeepMind will probably release its in-development 3D ‘Labyrinth’ world. Microsoft has already released its Minecraft-based Malmo environment, and Facebook its maze-based MazeBase.

Free lectures! Next time you’re hankering for something to listen to while performing a tedious but necessary activity, like cleaning your living room while a neural net is trained, you might want to listen to the roughly 20 hours of lectures from September’s Deep Learning School. Replays available here.

The second edition of the bible (of reinforcement learning) from Richard Sutton and Andrew Barto is out now. Read it online here (PDF).

Smart summarization: being able to read something and remember only the salient components is a hallmark of intelligence. So it’s encouraging to see new AI papers report better results at compressing language on the dataset used in the Hutter Prize. Two recent papers approach this in different ways. The ‘Multiplicative LSTM for sequence modelling’ paper from researchers at the University of Edinburgh and the Toyota Technological Institute combines a multiplicative RNN with an LSTM to achieve good results (pg 11, PDF), while the ‘Hierarchical Multiscale Recurrent Neural Networks’ paper from the University of Montreal builds RNNs that develop a hierarchy of different representations of the underlying data (pg 8, PDF).
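Results on this benchmark are typically reported in bits per character: a model that assigns probability p to each correct next character pays -log2(p) bits for it, and better language models pay fewer bits on average. A minimal sketch of that scoring, assuming you already have the per-character probabilities from some model:

```python
import math


def bits_per_character(probs):
    """Average code length in bits, given the probability the model
    assigned to each correct next character in the text."""
    return sum(-math.log2(p) for p in probs) / len(probs)


# A model that is always a 50/50 coin-flip between two characters
# pays exactly 1 bit per character:
bpc = bits_per_character([0.5, 0.5, 0.5])  # -> 1.0
```

The papers above compete to push this number down on the Hutter Prize’s Wikipedia text; a lower bits-per-character score means the model has captured more of the text’s structure.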

Special chips for AI: Microsoft has given more details on its ‘Catapult’ FPGA-based AI hardware, which will let developers load software onto its Windows Azure cloud and speed it up by running it on specially designed chips. The company first announced the Catapult system years ago and said in August of this year that it would make it available as a service other developers could rent. Meanwhile, a new company called Cornami has decloaked, with plans to build FPGA-like chips with thousands of simple cores, which sound like they’ll be a good fit for common AI tasks (hello, matrix multiplication).

Measure ten thousand ways, run once: As more chips are developed for AI systems and developers invest more hours in writing complicated code to parallelize and chain together these various FrankenClusters, it’s going to be important to measure the performance of standard deep learning procedures on different hardware substrates. A new free tool from Baidu, called DeepBench, may help.

Monkeys for better robots: a reasonable article about robotics startup Kindred, which a number of ex-D-Wave (the quantum computing company) employees are involved in. Kindred appears to be betting that a person (or monkey!) can be an effective remote operator of a robot and could provide the raw data to train a computer to move a robot all by itself. Tantalising.

Better robots, no monkeys required: But there may be easier ways to train robots. DeepMind has published a paper on progressive nets which can help with transfer learning, and now a team from the University of California at Berkeley have come up with their own way of training multiple neural networks to work across multiple robots. This will make it easier to get robots to learn from each other in the same way Tesla’s vehicles are able to use ‘fleet learning’ to make sure cars don’t make the same mistake twice. It could also help robots tackle never-before-seen tasks by fusing together previously seen ideas. “In some cases, previously untrained combinations might generalize immediately to the new task, while in other cases, the composition of previously trained modules for a new previously unseen task can serve as a very good initialization for speeding up learning,” they write (PDF). Check out the video and paper here.
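The recombination idea in the Berkeley paper can be sketched schematically: split each policy into a task-specific module and a robot-specific module, train modules on some (task, robot) pairs, then compose them for an unseen pair. The “modules” below are stand-in linear functions rather than real neural networks, and all the names are invented for illustration:

```python
def make_task_module(scale):
    # Stand-in for a trained task-specific network.
    return lambda xs: [scale * x for x in xs]


def make_robot_module(offset):
    # Stand-in for a trained robot-specific network.
    return lambda xs: [x + offset for x in xs]


def compose(task_module, robot_module):
    """A policy for a (task, robot) pair is the composition of its modules."""
    return lambda obs: robot_module(task_module(obs))


# Modules trained on other pairs are recombined for an unseen (task, robot)
# pair, giving either immediate generalization or a good initialization:
policy = compose(make_task_module(2.0), make_robot_module(0.5))
out = policy([1.0, 2.0])  # -> [2.5, 4.5]
```

The composition itself is the point: nothing in `compose` cares which pairs the modules were originally trained on, which is what makes zero-shot recombination possible at all.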

Freaky, nightmarish, convnet faces: Nice overview of how neural networks can be used to generate objects. Come for the accessible descriptions and stay for the hall-of-mirrors face transformations.