Import Ai: Issue 17: The academic brain drain, parallel invention, and a royally impressive AI screw-up

by Jack Clark

From the department of: I Really Hope This Is Satire, but Given It Is 2016 I Cannot Be Sure: Researchers at Shanghai Jiao Tong University published a paper called ‘Automated Inference on Criminality using Face Images’. The paper uses deep learning to explore correlations between someone’s appearance and the chance of them being a criminal. It’s modern phrenology – 19th century junk science where people believed you could measure someone’s skull and use it to infer traits about their intelligence (the Nazis were influenced by this). I can see no merit to this paper whatsoever and am mystified that the researchers were not warned off publishing it. If someone feels my views are wildly wrong here I’d love to hear from you and will (if you’re comfortable with it) put the correspondence in the next newsletter.

How to judge which jobs will be automated: If you can collect 10,000 to 100,000 times as much data on a given job as someone would reasonably generate during the course of their professional life, then you can automate it. This explains why jobs where you can gather lots of aggregate data (eg, insurance actuaries, legal e-discovery, radiology, repeatable factory work, drivers) are already seeing massive automation.

If another professor leaves academia for industry, and there are no academics left who aren’t in industry, do people notice? Another significant move from academia to industry as Stanford professor Fei-Fei Li takes up a full-time gig at Google. Fei-Fei Li is both astonishingly important and an astonishingly patient, wise person, so it’s a great get for Google. Li and her team of grad students and collaborators practically kick-started the deep learning boom by creating the ‘ImageNet’ dataset and associated competition. Geoff Hinton & co won ImageNet in 2012 with an approach that relied on deep learning and this precipitated the immense flood of interest and investment that followed. Li is the latest in a long, long line of deep learning academics who have opted to spend (most of) their time working in industry rather than academia. Others include Geoff Hinton (University of Toronto > Google), Yann LeCun (NYU > Facebook), Russ Salakhutdinov (CMU > Apple), Alex Smola (CMU > Amazon), Neil Lawrence (U Sheffield > Amazon), Nando de Freitas (Oxford > DeepMind), and many more. The main holdout remains Yoshua Bengio, who maintains a charming academic fortress in the frozen, music-strewn town of Montreal, Quebec. It’s wonderful that industry gets to benefit from the wisdom of academics, but it does lead me to wonder whether AI organizations are going to cannibalise the academic ecosystem to the point that they damage the ultimate supply of graduate students. (Note: OpenAI is guilty of this as well, as Pieter Abbeel currently spends most of his time with us rather than at UC Berkeley.) On the other hand, it’s nice to see academics making money off of their ideas, whether by taking up well-paid jobs or selling their startups to big firms. (Congratulations to Berkeley’s Joshua Bloom and the rest of the Wise.io team on selling to GE, by the way.)

Parallel Invention Alert!: Television was invented by multiple people at roughly the same time. The same happened for telephones. Ditto CRISPR. Technology isn’t mysterious – sometimes there are ideas floating around in the general scientific hivemind and a few people will transmute them into reality at the same time. This phenomenon of Multiple Discovery is worth paying attention to, as each occurrence indicates that the idea has some general utility, given that multiple scientists with different perspectives have glommed onto it at the same time…

…AI is rife with parallel invention, and part of how I gauge the acceleration in AI development is by the increasing frequency of these cases of parallel invention. So it’s interesting that OpenAI and Google DeepMind have published remarkably similar papers within a very short (~two-week) timespan of each other. First, OpenAI published a paper called RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning, then DeepMind followed this with Learning to Reinforcement Learn. (Note: both of these research efforts took many months of work, so the order of publication is not significant. They also test the approach on different facets of learning problems.)

The idea behind both techniques is that rather than investing time in getting an AI to optimize a specific learning algorithm for a given task, you can instead get an AI to optimize its own learning machinery across a set of many tasks. Technically speaking, the idea is that you structure the reinforcement learning agent itself as a recurrent neural network and feed it extra information about its performance on the current task. The agent learns how to create policies to solve a broad range of tasks by using the information from each solution to alter and augment its own problem-solving abilities. Different weights in the RNN correspond to different learning algorithms performed by the agent, and different activations in the RNN correspond to different policies specialized to the different tasks faced by the agent…

General-purpose brains versus tuned brains: These approaches are analogous to the difference between putting an uncalibrated, task-specific piece of machinery into the brain of an AI and letting it calibrate that machine by interacting with a certain set of environments to solve a specific problem, versus putting a more general-purpose bit of machinery into the brain of an AI and getting the AI to optimize that machinery for solving many different tasks in many different environments. This ascent towards greater flexibility, learning, and independence by AI agents is a key point on our march towards creating smarter machines…
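For the technically curious, here is a minimal sketch of the shared recipe in both papers: a recurrent policy that is fed its own previous action and reward, so that the network’s weights can encode a learning algorithm while its activations encode a policy for the current task. This is my own illustrative PyTorch code, not either lab’s implementation, and every name in it is made up for the example:

import torch
import torch.nn as nn
from torch.distributions import Categorical

class MetaRLAgent(nn.Module):
    # Hypothetical RL^2-style agent: the input at every step is the observation
    # plus the previous action (one-hot) and the previous reward.
    def __init__(self, obs_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.n_actions = n_actions
        self.rnn = nn.GRUCell(obs_dim + n_actions + 1, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, n_actions)
        self.value_head = nn.Linear(hidden_dim, 1)

    def initial_state(self, batch_size=1):
        # Reset only when a *new task* is sampled; within a task the hidden state
        # persists across episodes, which is where the 'fast' learning happens.
        return torch.zeros(batch_size, self.rnn.hidden_size)

    def step(self, obs, prev_action, prev_reward, hidden):
        prev_action_onehot = nn.functional.one_hot(prev_action, self.n_actions).float()
        x = torch.cat([obs, prev_action_onehot, prev_reward], dim=-1)
        hidden = self.rnn(x, hidden)
        dist = Categorical(logits=self.policy_head(hidden))
        action = dist.sample()
        return action, dist.log_prob(action), self.value_head(hidden), hidden

# Usage: sample a task, roll the agent through several episodes of that task while
# keeping the hidden state, then train the whole thing with an ordinary
# policy-gradient method ('slow' learning) across many sampled tasks.
agent = MetaRLAgent(obs_dim=4, n_actions=2)
h = agent.initial_state()
action, logp, value, h = agent.step(torch.zeros(1, 4), torch.zeros(1, dtype=torch.long), torch.zeros(1, 1), h)

The details (training algorithm, network sizes, task distributions) differ between the two papers, but the structural trick above is common to both.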

This is not an isolated occurrence. Other examples of parallel invention include: Facebook & DeepMind both pioneering memory-augmented AI systems (Neural Turing Machines, Memory Networks), Google Brain & DeepMind producing papers on the Gumbel-Softmax within a week of each other, multiple people inventing aspects of variational autoencoders, DeepMind and the University of Oxford pioneering methods for lip-reading networks, Stanford & the National Research Council Canada & Amsterdam publishing on the TreeLSTM within three months of each other, and many more. If you have examples please email me at jack@jack-clark.net – I’d like to compile these instances in a separate, continuously updated document.

The Deep Learning iceberg lurking in consumer products: the software we use on a day-to-day basis is becoming suffused with deep learning, with much of it lurking beneath the surface. For example, a new Google product called ‘PhotoScan’ uses neural network-based image analysis and inference techniques to let you quickly scan your family photos, using AI to stitch together the different sections of the photograph to improve quality and correct for glare and spatial distortions. But most importantly it ‘just works’ and the consumer doesn’t need to know it is made possible by a baroque stack of neural networks. Similarly, a Kickstarter for a fancy baby monitor called ‘Knit’ promises to create a device that uses DL to better monitor the state of the baby (eg, its breathing, wakefulness, and so on), giving parents information about their child through the computer observing its visual appearance and making some assumptions. These products are pretty amazing given that in 2012 image recognition was broadly an unsolved problem.

Welcome to the era of the ultra-lego-block-AI. That’s the message from a new DeepMind paper outlining its latest RL agent. The agent consists of multiple different tried-and-tested AI components (CNNs for vision, a network to enhance the agent’s ability to explore by rewarding it for increasing the variety of the views it perceives, a network to predict rewards and check them against what actually happened, and so on – more detail on page 2 of the paper [PDF]) which combine to create a smart system capable of beating DeepMind’s own records on a large number of environments, including tricky games like Montezuma’s Revenge (which was broadly unsolved by AI two years ago, and which now sees this agent achieve 69% of a human baseline). This kind of multi-system omniagent will be increasingly significant, and it’s something that Facebook, OpenAI, and academic groups are all working on as well.
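To make the ‘lego-block’ idea concrete, here is a toy sketch of the pattern: a shared convolutional encoder feeding a policy head plus an auxiliary head, with the auxiliary objective simply added to the main RL loss. This is illustrative code of my own (not DeepMind’s), and the specific blocks and weights in the actual agent differ:

import torch
import torch.nn as nn

class LegoBlockAgent(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        # Tried-and-tested vision block: a small CNN over 84x84 greyscale frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
        )
        self.policy_head = nn.Linear(256, n_actions)  # main RL output
        self.value_head = nn.Linear(256, 1)
        self.reward_head = nn.Linear(256, 3)          # auxiliary: predict negative/zero/positive reward

    def forward(self, frames):
        z = self.encoder(frames)
        return self.policy_head(z), self.value_head(z), self.reward_head(z)

# The blocks get glued together through a weighted sum of objectives, e.g.
#   total_loss = rl_loss + aux_weight * reward_prediction_loss
# and an exploration bonus would be added to the environment reward before the
# RL loss is computed.
agent = LegoBlockAgent(n_actions=18)
policy_logits, value, reward_pred = agent(torch.zeros(1, 1, 84, 84))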

A royally good AI blooper: Another fun example of the many ways in which AI algorithms can fail horribly, from Tom White’s reliably entertaining ‘Smile Vector’. This time, the neural network tries to make Princess Kate smile and instead applies the approach to her husband, William. “Kate accidentally landed on William,” he explains, which sounds like a euphemism for many, many things.

Geocities still lives… in the form of this Very Important and Trustworthy AI website: One Weird Kernel Trick.

Given a few hundred million words and a hammer made of a globe-spanning network of computers, can I translate between languages without knowing anything about language? Google’s answer appears to be ‘yes’. In a new paper outlining Google’s Multilingual Translation System the company describes a system that is trained on multiple languages and translates between them. This creates a single, giant network that contains a crude understanding not only of how to translate between pairs of languages, but of how to carry broad concepts across language pairs it has never seen explicit, paired examples for. This is significant as it shows the network has learned some essential information about language that it wasn’t given explicit labels for. That shows how modern AI systems can not only map A>B, but can also infer the existence of C and D and map between them as well. Most tantalising thing? The evidence that this approach yields “a universal interlingua representation in our model”. Universal Interlingua!
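The mechanism that makes a single model multilingual is surprisingly simple: the paper describes prepending an artificial token to the source sentence that names the desired target language, then training one sequence-to-sequence model on all the language pairs mixed together. A toy sketch of that preprocessing step (the token format follows the paper’s examples, but these particular sentences are illustrative):

def to_multilingual_example(source_sentence, target_lang):
    # Tag the source sentence with the language we want out the other end.
    return "<2{}> {}".format(target_lang, source_sentence)

training_pairs = [
    (to_multilingual_example("How are you?", "es"), "¿Cómo estás?"),
    (to_multilingual_example("¿Cómo estás?", "en"), "How are you?"),
    (to_multilingual_example("How are you?", "ja"), "お元気ですか"),
]

# Zero-shot translation is then just asking for a pair the model never saw paired
# during training, e.g. Spanish -> Japanese:
zero_shot_input = to_multilingual_example("¿Cómo estás?", "ja")
print(zero_shot_input)  # "<2ja> ¿Cómo estás?"

Because every language flows through the same encoder and decoder, sentences with the same meaning end up with similar internal representations regardless of source language, which is the kind of evidence behind the ‘interlingua’ claim above.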

OpenAI bits&pieces:

Better PixelCNNs for everyone! We published a paper and code for PixelCNN++, a souped-up version of the PixelCNN, a generative model technology that DeepMind invented. Code here.

Why AI technology is moving so quickly, and why predicting the future is hard: interview with OpenAI’s co-founder & research director Ilya Sutskever.

Crazy&Weird:

Backstory: I wondered if people would like to see something ‘crazy&weird’ in this newsletter and the votes told me ‘Yes’. So here we go:
AI & THE FUTURE OF MANUFACTURING.

[2025: A factory in China. There are no lights and thousands of industrial robots work in a complex symphony. All you hear is the steady fizz of robotic movement.]

Machine view: multiple reinforcement-learning agents run simulations of the line in a large data center attached to the factory. They explore multiple perturbations of the manufacturing process, endlessly simulating the workload. When they discover more efficient approaches they initiate a High Priority Resource Call to the hardware scheduler in the data center and are assigned a chunk of computing resources to attempt to transfer their knowledge from the simulation into the real robots on the line. After the transfer is complete the robotic line reconfigures itself to match the new simulation. Any errors are spotted by a thousand cameras staring down at the line. If the AI can diagnose an error it re-runs the simulation and comes up with a fix. If it can’t, it sends the images & data of the flaw out to a large Mechanical Turk marketplace where human engineers observe the fault, come up with a fix, and send it back to the line. The system re-optimizes. Meanwhile, one line of the factory attempts to come up with perturbations of the assembled product, inventing wholly new versions of the devices by navigating through the latent feature space of the products. When a new ‘Candidate Product’ is found, the system runs it through a series of tuned expert systems and, if it gets a high enough score, simulates the product in a high-fidelity simulation. If it passes those tests then the Candidate Product is produced, airlifted by drone to a nearby human focus group and, if it satisfies their criteria, sold on an eBay-like auction site frequented by the factory’s thousands of distributors. A bidding process takes place and within days or weeks data comes back about how the product fares in the market. If it does better than the existing product, more parts of the factory are dedicated to creating products in this style and the improvisation line begins exploring the latent space of the new products. In this way the manufacturing process begins to evolve in a Cambrian fashion, with AI automating much of the product R&D process.