Import AI: Issue 13: Microsoft’s speech breakthrough, how to make neural nets that resist interrogation, and AI-generated Halloween art

by Jack Clark

Teaching old neural nets new tricks: Congratulations to Geoff Hinton, a self-described “machine learning fossil” who, along with collaborators, has fleshed out an idea he has been working on since 1973. The research, ‘Using fast weights to attend to the recent past’, tries to make neural networks a little bit more brain-like through the use of ‘fast weights’. “These ‘fast weights’ can be used to store temporary memories of the recent past and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in sequence-to-sequence models,” according to the research paper’s abstract. There’s a good, thorough lecture on the approach by Geoff Hinton here. Along with being quite smart, Hinton is also reasonably funny and, had he not squandered his life on AI, could have been a very good stand-up comic. This represents yet another name on the lineup of the current AI-Memorypalooza, and will be appearing alongside memory networks, differentiable neural computers, LSTMs, and other memory-oriented systems in a workshop near you soon.
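For readers who want the gist of the mechanism: the paper supplements the usual slowly-learned weights with a rapidly decaying ‘fast weight’ matrix that stores outer products of recent hidden states, which the network then consults over a few inner settling steps. Here’s a minimal numpy sketch of that idea; the dimensions, constants, and the omission of layer normalization are my simplifications for illustration, not the paper’s exact recipe.

```python
import numpy as np

rng = np.random.RandomState(0)
d = 20                          # hidden size (illustrative)
W_h = rng.randn(d, d) * 0.05    # slow recurrent weights
W_x = rng.randn(d, d) * 0.05    # slow input weights
lam, eta = 0.95, 0.5            # fast-weight decay and learning rate
S = 3                           # inner "settling" steps

def step(x, h, A):
    # Fast weights decay and store the outer product of the current hidden
    # state, acting as a temporary memory of the recent past.
    A = lam * A + eta * np.outer(h, h)
    # Slow weights propose a preliminary next state from the input and history...
    pre = W_h @ h + W_x @ x
    h_new = np.tanh(pre)
    # ...which the fast weights refine over a few inner steps, effectively
    # attending to patterns stored a few timesteps ago.
    for _ in range(S):
        h_new = np.tanh(pre + A @ h_new)
    return h_new, A

h = np.zeros(d)
A = np.zeros((d, d))            # fast weight matrix (temporary memory)
for t in range(10):
    h, A = step(rng.randn(d), h, A)
```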

My fair Microsoft: Microsoft claims to have exceeded human performance at recognizing speech. This has taken a few experts by surprise. “I must confess that I never thought I would see this day,” says British-American linguist Geoff Pullum. The system beat human transcribers at picking out words in Switchboard, a long-in-the-tooth audio dataset consisting of over 2,400 telephone conversations among 543 people from the United States. There’s evidence that this data doesn’t capture all the nuances and difficulties of speech, though it does contain salt-of-the-earth phrases like “it really chips really easy” and “adept at doing things, it’s just…”. The true test, though, is on real-world datasets. Chinese search giant Baidu, for instance, gathered more than eight thousand hours of audio data to train and test its Deep Speech system, and Google harvests vast amounts of audio to build its own classifiers as well. Perhaps we need a new ASR dataset?

Machine learning & fraud: one of the areas where machine learning is going to be widely deployed is fraud identification. Fraud is almost the perfect problem for ML approaches because it involves spotting anomalous patterns in a vast stream of data. So it’s unsurprising to see the launch of Stripe Radar. The bigger implication of services like this is that they get more effective as more data gets plugged into them, so as a customer grows, the predictive capabilities of Radar will grow as well, and there’s also a good chance that the entire platform will benefit from insights gleaned from each individual customer. This is why it currently looks like ML will turbocharge the benefits that companies extract from operating widely used platforms.

Open research: Francois Chollet, the creator of Keras, appears to really, really enjoy working every hour in the day, given that his new side project is the ‘Artificial Intelligence Open Network’. AI ON lists open problems in AI and gives people the opportunity to work on real, meaningful problems alongside senior AI researchers. It’s somewhat like OpenAI’s Requests for Research, though it has a more well-defined open research process. This fits with the general tendency towards openness and transparency in AI. Long may it continue!

AI & the re-evaluation of intellectual property: spare a tear for the lawyers who will soon grapple with the myriad intellectual property issues brought about by the rise of AI. “How will we protect the intellectual property embodied in those products and services, if anyone can reverse engineer their core IP simply by using them and feeding their output into commodity machine learning systems?” writes Daniel Tunkelang. The answer could be to design systems that are resilient to such model-scraping attacks — there’s already research being done here, including a contribution by OpenAI (more on that at the bottom of this letter).

Brain augmentation is closer than you think, says Braintree founder Bryan Johnson, who is pouring $100 million of his money into Kernel, a company that aims to create ‘the world’s first neural prosthetic for human intelligence enhancement’. Between that and recent work on ‘neural lace’ technology, it seems the era of brain-fiddling is upon us. Let’s hope we can avoid the situation rendered in Black Mirror S3:E2, ‘Playtest’.

Mo’ AI, Mo’ Macroeconomic Demand-Side Problems: “A significant number of tasks now performed by humans will be performed by machines and artificial intelligence. We could very well see 5 million jobs eliminated by the end of the decade because of technology,” says Andy Stern, former president of the Service Employees International Union (SEIU).

Normalization-on-normalization-on-normalization: First there was batch normalization, then layer normalization, and now there is streaming normalization (PDF). These techniques make neural networks faster and more stable to train. Though batch normalization was only outlined in 2015, it has already become quite widely used; we may expect the same for layer and streaming normalization. “Machine learning systems often get confused by stimuli that vary wildly but in uninteresting ways, for example, a visual recognition system might get confused by a room where the lights get randomly brighter and dimmer but nothing else about the room changes. Batch normalization is a technique that helps us to ignore such uninteresting variation, both at the level of the network’s raw inputs, and its higher level judgements,” says OpenAI’s Dario Amodei.
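For reference, here’s a minimal numpy sketch of the basic batch normalization recipe (normalize each feature over the mini-batch, then apply a learned scale and shift); layer and streaming normalization mainly change where the statistics are computed. The function and variable names here are illustrative rather than drawn from any particular library, and the running statistics used at inference time are omitted.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch_size, features). Normalize each feature across the batch,
    # so uninteresting shifts in scale and offset are removed...
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # ...then let the network learn back whatever scale/shift it actually needs.
    return gamma * x_hat + beta

x = np.random.randn(64, 128) * 3.0 + 5.0       # wildly scaled activations
gamma, beta = np.ones(128), np.zeros(128)
y = batch_norm(x, gamma, beta)
print(y.mean(axis=0)[:4].round(3), y.std(axis=0)[:4].round(3))  # ~0 and ~1
```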

Chinese cash for UK AI talent: A Chinese private equity firm will partner with UK-based Founders Factory to invest in UK AI startups, giving China access to some of the UK’s excellent AI talent, and the UK companies access to China. In Beijing, entire city blocks have been converted from focusing on cloud computing to instead focus on AI research, according to Miles Brundage.

Halloween AI: Welcome to MIT’s ‘Nightmare Machine’, where some dastardly AI-fiddlers have taken it upon themselves to use style transfer techniques – a way of getting a neural network to interpret the aesthetic style of one picture and apply it to another one – to create pictures of ‘haunted faces’ and ‘haunted places’. Take a tour of the ghoulish AI creations, but remember: ‘images on this website are generated by deep learning algorithms and may not be suitable for all users. They contain scary content’. Scary content! If you’ve always thought the poster for the film Jaws could be improved by the addition of loads of skulls, then this will be right up your alley. Perhaps we’ll soon see a similarly spooky filter arrive in popular app Prisma?
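For the technically curious: in the formulation popularized by Gatys et al., ‘style’ is captured by the Gram matrices (channel-wise correlations) of a convolutional network’s feature maps, and the generated image is optimized to match those statistics. A toy numpy sketch of that style loss, using random arrays in place of real CNN features; this is an illustration of the general technique, not necessarily what the Nightmare Machine team did.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activations from one conv layer.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Channel-by-channel correlations capture texture and "style", not layout.
    return f @ f.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    # Penalize differences between the Gram matrices of the generated image
    # and the style image (e.g. a haunted-house painting).
    g = gram_matrix(generated_feats)
    s = gram_matrix(style_feats)
    return np.mean((g - s) ** 2)

# In practice these features come from a pretrained network such as VGG,
# and the generated image is optimized to minimize this loss plus a content
# loss; here we just exercise the computation on random arrays.
gen = np.random.randn(64, 32, 32)
sty = np.random.randn(64, 32, 32)
print(style_loss(gen, sty))
```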

/// OpenAI bits&pieces ///

Developing machine learning systems that can ingest sensitive data and output anonymized answers is a challenge. In an ideal world, you want to limit the distribution of the personal data – say, someone’s medical information – so that people can’t exploit flaws in your ML model to uncover private information. That was the motivation behind research from Penn State’s Nicolas Papernot, OpenAI’s Ian Goodfellow, and several people from Google. The research relies on a system where multiple ‘teacher’ networks are trained on disjoint chunks of a sensitive dataset, then pass noisy, aggregated predictions to a ‘student’ network, which learns to classify things without ever directly ingesting any identifiable data, making it resilient to model-extraction attacks. “The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as teachers for a student model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters,” they explain in the paper: ‘Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data’.
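The aggregation step at the heart of this is simple to sketch: each teacher votes for a class, noise is added to the vote counts, and the student only ever sees the noisy winner. The snippet below is an illustrative reconstruction based on the abstract quoted above; the Laplace noise scale and other details are placeholders, and the paper should be consulted for the actual mechanism and its privacy analysis.

```python
import numpy as np

def noisy_teacher_label(teacher_predictions, num_classes, noise_scale=1.0, rng=None):
    # teacher_predictions: one predicted class index per teacher, where each
    # teacher was trained on a disjoint slice of the sensitive data.
    rng = rng or np.random.RandomState(0)
    votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    # Add Laplace noise to the vote counts so that no single teacher (and
    # hence no single partition of the private data) can swing the answer.
    votes += rng.laplace(loc=0.0, scale=noise_scale, size=num_classes)
    return int(np.argmax(votes))

# Example: 50 teachers vote on one unlabeled public example; the student
# only ever sees the noisy winning label, never the teachers or their data.
preds = np.random.RandomState(1).randint(0, 10, size=50)
print(noisy_teacher_label(preds, num_classes=10))
```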

Reinforcement learning presentation: OpenAI researcher John Schulman gave a presentation at Galvanize in SF this week on deep reinforcement learning through policy optimization. Slides of the (non-recorded) talk are available here.

Korean robots in OpenAI Gym: A Korean organization is training a robot to walk using our open source OpenAI Gym software. Facebook’s automatic translation isn’t quite up to the task of rendering the original post in non-garbled English, though…

/// Administrative Note: Two weeks ago I asked for advice about what people would like to see from an OpenAI newsletter. I got lots of helpful responses and am now vigorously digesting them. Thank you! ///