Import AI: Issue 16: Changes at Twitter Cortex, Catastrophic Forgetting, and a $1000 bet

by Jack Clark

AI Term of the week: Catastrophic Forgetting: when neural nets completely and abruptly forget previously learned information upon learning new information
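A toy illustration of the term (mine, not from any system discussed here): a one-parameter model is trained by gradient descent on Task A, then sequentially on a conflicting Task B. Because both tasks share the same weight, the new gradients overwrite the old solution and performance on Task A collapses.

```python
# Toy sketch of catastrophic forgetting (illustrative only): a single-weight
# linear model y_hat = w * x is trained on Task A (y = 2x), then on
# Task B (y = -2x). Training on B overwrites what was learned on A.

def train(w, xs, ys, lr=0.1, epochs=100):
    """Plain SGD on squared error for the one-parameter model."""
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
task_a = [2 * x for x in xs]    # Task A: y = 2x
task_b = [-2 * x for x in xs]   # Task B: y = -2x

w = train(0.0, xs, task_a)          # learn Task A
err_a_before = mse(w, xs, task_a)   # near zero: Task A mastered

w = train(w, xs, task_b)            # now learn Task B
err_a_after = mse(w, xs, task_a)    # large: Task A forgotten

print(err_a_before, err_a_after)
```

In a real neural network the same thing happens across millions of shared weights, which is why naive sequential training fails and the problem is an active research area.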

Reusability: One of the reasons AI progress is accelerating is that the community is creating more and more reusable components that can be plugged into different domains, frequently attaining performance equivalent to or better than hand-designed algorithms. The 2015 ImageNet challenge was won by a Microsoft system built out of Residual Networks, and in 2016 Microsoft made a speech recognition breakthrough with a system that also relied on Residual Networks. Similarly, DeepMind’s WaveNet system has been slightly tweaked and re-applied to the domain of neural machine translation (PDF). This kind of re-use is a good thing, as it suggests we are beginning to create the right sorts of low-level primitives out of which general intelligences can be built.
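For readers unfamiliar with the component being reused here: a residual block adds its input back to the output of its learned transformation, so the layers only need to learn a correction on top of the identity. A minimal sketch (hypothetical toy code, not Microsoft's or DeepMind's actual architecture):

```python
# Minimal sketch of the residual idea behind ResNets (illustrative only):
# the block computes f(x) and adds the input x back in via a skip
# connection, so f only has to learn a residual correction.

def residual_block(x, f):
    """Return x + f(x), element-wise."""
    return [xi + fi for xi, fi in zip(x, f(x))]

# A trivial stand-in "layer" that scales its inputs by 0.1
# (in a real network this would be convolutions plus nonlinearities).
layer = lambda xs: [0.1 * xi for xi in xs]

x = [1.0, 2.0]
out = residual_block(x, layer)
print(out)
```

The skip connection is what makes very deep networks trainable, and it is architecture-agnostic, which is part of why the same building block transfers so readily from vision to speech to translation.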

Domain conversion: We’re also seeing researchers convert hard problems into forms where neural networks can learn to solve them, bringing into scope problems we’d have trouble programming solutions to by hand. For example, a recent OpenAI paper, Third-Person Imitation Learning (PDF), converts the problem of copying the actions of another entity into one amenable to generative adversarial networks, which make it possible to learn a mapping between the behaviors of a teacher and a student without explicit labels or additional data.

The Rube Goldberg AI Factory: a startup claims to have developed software that lets you train robots to operate your 3D printers and associated genius-in-a-garage gear. The company says its software works with any robot, any webcam, and any gripper, imbuing machines with the smarts to read the displays and press the buttons on common factory items, like 3D printers. This could lead to a world where the mythological internet startup in a garage becomes the mythological internet-startup-slash-steampunk-robot-manufactory in a garage.

Who wants to make $1000? Venture capitalist Keith Rabois says AI is getting so good that it’ll soon be able to write screenplays and articles indistinguishable from those produced by humans. This is unlikely to happen in the short or medium term: language remains one of the great unsolved challenges in AI. Designing good language systems requires an AI that can tie concepts in language to things it knows about the world, which requires an immense amount of what experts call ‘grounding’. We aren’t there yet. So AI researchers may want to take Keith Rabois up on his bet with Salesforce/MetaMind’s Stephen Merity that he can “send you a manuscript that passes a Turing test.” Merity is in, as is Quora’s Xavier Amatriain. Anyone else?

Changes at Twitter: Twitter’s AI strategy is changing, judging by the recent departures of Hugo Larochelle, Ryan Adams, and others. The company had been trying to build an academic research group similar to those at Google, Facebook, and Microsoft. With their departures it seems the emphasis is now firmly on applied AI.

Gender bias in astronomy: researchers from the Swiss Federal Institute of Technology in Zurich used machine learning techniques to analyze 200,000 papers published between 1950 and 2015. The key result: a paper whose first author is a woman is likely to receive about 10 percent fewer citations than one whose first author is a man. On the plus side, the fraction of papers with a female first author has grown from less than 5% in the 1960s to about 25% today.

A vast machine intelligence landscape: Bloomberg Beta has published its third landscape of machine intelligence. The chart includes a third more companies than a year before and depicts a teeming menagerie of companies and organizations striving to apply ML to every component of the technology stack. “It feels even more futile to try to be comprehensive, since this just scratches the surface of all the activity out there,” they write.

Salesforce starts publishing AI papers: It’s normal for the research labs operated by the big consumer companies to publish AI research papers. Now that same spirit of openness seems to be moving into the enterprise as well. Recent Salesforce acquisition MetaMind has published a number of papers lately on areas like question answering, natural language processing, recurrent neural networks, and more. It has also topped the leaderboard of the Stanford Question Answering Dataset (SQuAD) competition.

OpenAI bits&pieces:

How open should companies like Google be with their AI systems, what kinds of monopolies could we see emerge as a consequence of the AI revolution, and how do these issues relate to Japanese cucumbers? Those are some of the things Tim Hwang of Google and I, representing OpenAI, discussed in Toronto last month; you can watch the conversation in this video.