Import AI 129: Uber’s POET creates its own curriculum; improving old games with ESRGAN; and controlling drones with gestures via UAV-GESTURE

by Jack Clark

Want 18 million labelled images? Tencent has got you covered:
…Tencent ML-Images merges ImageNet and Open Images…
Data details: Tencent ML-Images combines existing image databases such as ImageNet and Open Images, along with their associated class vocabularies. The new dataset contains 18 million images across 11,000 categories; on average, each image has eight tags applied to it.
  Transfer learning: The researchers train a ResNet-101 model on Tencent ML-Images, then finetune this pre-trained model on ImageNet and obtain scores in line with the state of the art. One notable result is a claimed 80.73% top-1 accuracy on ImageNet, beating a Google system pre-trained on an internal Google dataset called JFT-300M and fine-tuned on ImageNet – it’s not clear to me why the authors would get a higher score than Google, given that Google has almost 20X the amount of pre-training data available to it (JFT contains ~300 million images).
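  As a rough guide to what this pre-train-then-finetune recipe looks like in code, here is a minimal PyTorch sketch. It is illustrative only: torchvision’s generic ImageNet-pretrained ResNet-101 stands in for the Tencent ML-Images checkpoint (distributed via the GitHub repo linked below), and the head size, optimizer, and learning rate are placeholder choices.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a model pre-trained on a large source dataset; the real
# Tencent ML-Images ResNet-101 checkpoint comes from the authors' repo.
model = models.resnet101(pretrained=True)

# Swap the classification head for the target task's label set, then
# finetune the whole network with a modest learning rate.
num_target_classes = 1000  # e.g. ImageNet's 1,000 classes
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One finetuning step on a batch from the target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```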
  Why this matters: Datasets are one of the key inputs into the practice of AI research, and having access to larger-scale datasets will let researchers do two useful things: 1) Check promising techniques for robustness by seeing if they break when exposed to scaled-up datasets, and 2) Encourage the development of newer techniques that would otherwise overfit on smaller datasets (by some metrics, ImageNet is already quite well taken care of by existing research approaches, though more work is needed for things like improving top-1 accuracy).
  Read more: Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning (Arxiv).
  Get the data: Tencent ML-Images (Github).

Want an AI that teaches itself how to evolve? You want a POET:
…Uber AI Labs research shows how to create potentially infinite curriculums…
What happens when machines design and solve their own curriculums? That’s an idea explored in a new research paper from Uber AI Labs. The researchers introduce Paired Open-Ended Trailblazer (POET), a system that aims to create machines with this capability “by evolving a set of diverse and increasingly complex environmental challenges at the same time as collectively optimizing their solutions”. Most research is a form of educated bet, and that’s the case here: “An important motivating hypothesis for POET is that the stepping stones that lead to solutions to very challenging environments are more likely to be found through a divergent, open-ended process than through a direct attempt to optimize in the challenging environment,” they write.
  Testing in 2D: The researchers test POET in a 2-D environment where a robot is challenged to walk across a varied obstacle course of terrain. POET discovers behaviors that – the researchers claim – “cannot be found directly on those same environmental challenges by optimizing on them only from scratch; neither can they be found through a curriculum-based process aimed at gradually building up to the same challenges POET invented and solved”.
  How POET works: Unlike human poets, who work on the basis of some combination of lived experience and a keen sense of anguish, POET derives its power from an algorithm called ‘trailblazer’. Trailblazer starts with “a simple environment (e.g. an obstacle course of entirely flat ground) and a randomly initialized weight vector (e.g. for a neural network)”. The algorithm then performs three tasks at each iteration of its loop: generating new environments from those currently active, optimizing paired agents within their respective environments, and attempting to transfer current agents from one environment to another. The researchers use Evolution Strategies from OpenAI to compute each iteration, “but any reinforcement learning algorithm could conceivably apply”.
  The secret is Goldilocks: POET tries to create what I’ll call ‘goldilocks environments’, in the sense that “when new environments are generated, they are not added to the current population of environments unless they are neither too hard nor too easy for the current population”. During training, POET creates an expanding set of environments which are made by modifying various obstacles within the 2D environment the agent needs to traverse.
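  Putting the loop and that acceptance test together, here is a deliberately tiny, self-contained toy sketch in Python. Environments are reduced to difficulty scalars and agents to skill scalars; every function and threshold here is an illustrative stand-in, not code from the paper.

```python
import random

def score(env, agent):
    # Toy fitness, higher is better: an agent does well when its skill
    # (a scalar) matches the environment's difficulty (also a scalar).
    return -abs(agent - env)

def es_step(env, agent, sigma=0.1, samples=16):
    # Stand-in for the Evolution Strategies update: sample perturbations
    # and keep the best (the paper uses a proper ES gradient estimate).
    candidates = [agent + random.gauss(0, sigma) for _ in range(samples)]
    return max(candidates, key=lambda a: score(env, a))

def poet(iterations=200, generate_every=25, band=(-1.0, -0.1)):
    population = [(0.0, 0.0)]  # paired (environment, agent) list
    for step in range(iterations):
        if step % generate_every == 0:
            # Generate: mutate active environments into harder children,
            # admitting only those neither too easy nor too hard for the
            # current agents (the "goldilocks" minimal criterion).
            for env, agent in list(population):
                child = env + abs(random.gauss(0, 0.5))
                best = max(score(child, a) for _, a in population)
                if band[0] <= best <= band[1]:
                    population.append((child, agent))
        # Optimize: one ES step per (environment, agent) pair.
        population = [(e, es_step(e, a)) for e, a in population]
        # Transfer: each environment adopts the best current agent on it,
        # so progress in one niche can unblock another.
        population = [(e, max((a for _, a in population),
                              key=lambda a: score(e, a)))
                      for e, a in population]
    return population

print(len(poet()), "environment-agent pairs created")
```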
  Results: Systems trained with POET learn solutions to environments that systems trained with Evolution Strategies from scratch cannot solve. The authors theorize that this is because newer environments in POET are created through mutations of older environments, and because POET only accepts new environments that are neither too easy nor too hard for current agents, it implicitly builds a curriculum for learning each environment it creates.
  Why it matters: Approaches like POET show how researchers can essentially use compute to generate arbitrarily large amounts of data to train systems on, and highlight how training regimes that couple an agent, an environment, and a governing system for creating agents and environments in an interactive loop can produce more capable systems than would be derived otherwise. Additionally, the implicit idea governing the POET paper is that systems like this are a good fit for any problem where computers need to learn flexible behaviors that deal with unanticipated scenarios. “POET also offers practical opportunities in domains like autonomous driving, where through generating increasingly challenging and diverse scenarios it could uncover important edge cases and policies to solve them,” the researchers write.
  Read more: Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions (Arxiv).

Making old games look better with GANs:
…ESRGAN revitalises Max Payne…
A post to the Gamespot video gaming forums shows how ESRGAN – Enhanced Super-Resolution Generative Adversarial Networks – can improve the graphics of old games like Max Payne. ESRGAN lets game modders use GANs to upscale old game textures, markedly improving how these games look.
  Read more: Max Payne gets an amazing HD Texture Pack using ESRGAN that is available for download (Dark Side of Gaming).

Google teaches AI to learn to semantically segment objects:
…Auto-DeepLab takes neural architecture search to a harder problem domain…
Researchers with Johns Hopkins University, Google, and Stanford University have created an AI system called Auto-DeepLab that has learned to perform efficient semantic segmentation of images – a challenging computer vision task which requires labeling the various objects in an image and understanding their borders. The system uses a hierarchical search to both learn specific neural network cell designs, which inform layer-wise computations, and figure out the overall network architecture that chains these cells together. “Our goal is to jointly learn a good combination of repeatable cell structure and network structure specifically for semantic image segmentation,” the researchers write.
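  As a rough illustration of what that two-level search space looks like, here is a hypothetical Python sketch that samples it at random. The op names, layer counts, and stride limits are assumptions for illustration; Auto-DeepLab itself relaxes this space continuously and searches it with gradient descent rather than random sampling.

```python
import random

# Cell level: which operation fills each slot inside the repeatable cell.
CELL_OPS = ["3x3_sep_conv", "5x5_sep_conv", "3x3_atrous_conv",
            "skip_connect", "3x3_avg_pool", "3x3_max_pool"]

# Network level: how spatial resolution changes from layer to layer.
RESOLUTION_MOVES = [0.5, 1.0, 2.0]  # downsample, keep, upsample

def sample_architecture(num_cell_slots=10, num_layers=12):
    cell = [random.choice(CELL_OPS) for _ in range(num_cell_slots)]
    path, scale = [], 1.0 / 4  # assume the trunk starts at stride 4
    for _ in range(num_layers):
        scale *= random.choice(RESOLUTION_MOVES)
        scale = min(max(scale, 1.0 / 32), 1.0 / 4)  # clamp to stride 4..32
        path.append(scale)
    return cell, path

cell, path = sample_architecture()
print("cell ops:", cell)
print("resolution path (fraction of input size):", path)
```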
  Efficiency: One of the drawbacks of neural architecture search approaches is their inherent computational expense, with many techniques demanding hundreds of GPUs to train systems. Here, the researchers show that their approach is efficient, able to find well-performing architectures for semantic segmentation of the ‘Cityscapes’ dataset in about three days on a single P100 GPU.
  Results: The search comes up with an effective design, as evidenced by the results on the Cityscapes dataset. “With extra coarse annotations, our model Auto-DeepLab-L, without pretraining on ImageNet, achieves the test set performance of 82.1%, outperforming PSPNet and Mapillary, and attains the same performance as DeepLabv3+ while requiring 55.2% fewer Multi-Adds computations.” The model also gets close to state-of-the-art on PASCAL VOC 2012 and ADE20K.
  Why it matters: Neural architecture search gives AI researchers a way to use compute to automate parts of their own work, so the extension of NAS from supervised classification to more complex tasks like semantic segmentation will let us automate more and more bits of AI research, freeing researchers up to come up with new ideas.
  Read more: Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation (Arxiv).

UAV-Gesture means that gesturing at drones now has a purpose:
…Flailing at drones may go from a hobby of lunatics to a hobby of hobbyists, following dataset release…
Researchers with the University of South Australia have created a dataset of people performing 13 gestures that are designed to be “suitable for basic UAV navigation and command from general aircraft handling and helicopter handling signals”. These actions include things like hover, move to left, land, land in a specific direction, slow down, move upward, and so on.
  The dataset: The dataset consists of footage “collected on an unsettled road located in the middle of a wheat field from a rotorcraft UAV (3DR Solo) in slow and low-altitude flight”: 37,151 frames distributed over 119 videos, recorded at 1920 x 1080 resolution and 25 fps. Each gesture is performed by multiple human actors, with eight different people filmed overall.
  Get the dataset…eventually: The dataset “will be available soon”, the authors write on GitHub. (UAV-Gesture, Github).
  Natural domain randomization: “When recording the gestures, sometimes the UAV drifts from its initial hovering position due to wind gusts. This adds random camera motion to the videos making them closer to practical scenarios.”
  Experimental baseline: The researchers train a Pose-based Convolutional Neural Network (P-CNN) on the dataset and obtain an accuracy of 91.9%.
  Why this matters: Drones are going to be one of the most visible areas where software-based AI advances impact the real world, and the creation (and eventual release) of datasets like UAV-Gesture will increase the number of people able to build clever systems that can be deployed onto drones and other platforms.
  Read more: UAV-GESTURE: A Dataset for UAV Control and Gesture Recognition (Arxiv).

Contemplating the use of reinforcement learning to improve healthcare? Read this first:
…Researchers publish a guide for people keen to couple RL to human lives…
As AI researchers start to apply reinforcement learning systems in the real world, they’ll need to develop a better sense of the many ways in which RL approaches can lead to subtle failures. A new short paper published by an interdisciplinary team of researchers tries to think through some of the trickier issues implied by deploying AI in the real world. It identifies “three key questions that should be considered when reading an RL study”: Is the AI given access to all variables that influence decision making? How big was that big data, really? And will the AI behave prospectively as intended?
  Why this matters: While these questions may seem obvious, it’s crucial that researchers stress them in well-known venues like Nature – I think this is all part of normalizing certain ideas around AI safety within the broader research community, and it’s encouraging to go from abstract discussions to more grounded questions and principles that people may wish to apply when building systems.
  Read more: Guidelines for reinforcement learning in healthcare (Nature).

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net

What does the American public think about AI?
Researchers at the Future of Humanity Institute have surveyed 2,000 Americans on their attitudes towards AI.
  Public expecting rapid progress: Asked to predict when machines will exceed human performance in almost all economically-relevant tasks, the median respondent assigned a 54% chance to this happening by 2028. This is considerably sooner than recent surveys of AI experts suggest.
  AI fears not confined to elites: A substantial majority (82%) believe AI/robots should be carefully managed. Support for developing AI was stronger among high-earners, those with computer science or programming experience, and the highly-educated.
  Lack of trust: Despite their support for careful governance, Americans do not have high confidence in any particular actor to develop AI for the public benefit. The US military was the most trusted, followed by universities and non-profits. Government agencies were less trusted than tech companies, with the exception of Facebook, which was the least trusted of any actor.
  Why it matters: Public attitudes are likely to significantly shape the development of AI policy and governance, as has been the case for many other emergent political issues (e.g. climate change, immigration). Understanding these attitudes, and how they change over time, is crucial in formulating good policy responses.
  Read more: Artificial Intelligence: American Attitudes and Trends (FHI).
  Read more: The American public is already worried about AI catastrophe (Vox).

International Panel on AI:
France and Canada have announced plans to form an International Panel on AI (IPAI), to encourage the adoption of responsible and “human-centric” AI. The body will be modeled on the Intergovernmental Panel on Climate Change (IPCC), which has led international efforts to understand the impacts of global warming. The IPAI will consolidate research into the impacts of AI, produce reports for policy-makers, and support international coordination.
  Read more: Mandate for the International Panel on Artificial Intelligence.

Tech Tales:

The Propaganda Weather Report

Starting off this morning we’re seeing a mass of anti-capitalist ‘black bloc’ content move in from 4chan and Reddit onto the more public platforms. We expect the content to trigger counter-content creation from the far-right/nationalist bot networks. There have been continued sightings of synthetically-generated adverts for a range of libertarian candidates, and in the past two days these ads have increasingly been tied to a new range of dreamed-up products from the Chinese netizen feature embedding space.

We advise all of today’s content travelers to set their skepticism to high levels. And remember, if someone starts talking to you outside of your normal social network, take all steps to verify their identity and, if unsuccessful, prevent the conversation from continuing – it takes all of human society working together to protect ourselves from subversive digital information attacks.

Things that inspired this story: Bot propaganda, text and image generation, weather reports, the Shipping Forecast, the mundane as the horrific and the horrific as the mundane, the commodification of political discourse as just another type of ‘content’, the notion that media in the 21st century is fundamentally a ‘bot’ business rather than human business.