Import AI 125: SenseTime trains AIs to imitate human AI architects; Berkeley researchers fuse hand-engineered control and RL in a Franken-RL system; and fake images from NVIDIA cross the uncanny valley.
by Jack Clark
Berkeley researchers create Franken-RL, fusing hand-engineered systems and RL-based controllers:
…Use hand-engineered controllers for the stuff they’re good at, and use RL to learn the tricky things…
Researchers with the University of California at Berkeley, Siemens Corporation, and the Hamburg University of Technology have combined classical robotics control techniques with reinforcement learning to create robots that can deal with complex tasks like block-stacking.
The technique, which they call Residual Reinforcement Learning, uses “conventional feedback control theory” to handle the basic control of the robot, and reinforcement learning to learn how to interact with the objects in the robot’s world, mushing the two approaches together. “The key idea is to combine the flexibility of RL with the efficiency of conventional controllers by additively combining a learnable parametrized policy with a fixed hand-engineered controller”, the researchers write.
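For intuition, here’s a minimal Python sketch of that additive combination: a fixed P-controller does the basic reaching, and a learned residual policy corrects it. The controller gain, policy class, and dimensions are illustrative stand-ins (the paper trains the residual with TD3), not the authors’ actual code.

```python
import numpy as np

def hand_engineered_controller(state, target, kp=1.0):
    """Fixed P-controller: push the state toward the target."""
    return kp * (target - state)

class ResidualPolicy:
    """Stand-in for a learned policy (trained with e.g. TD3 in the paper);
    here just a zero-initialized linear map for illustration."""
    def __init__(self, dim):
        self.weights = np.zeros((dim, dim))  # updated by RL in practice

    def __call__(self, state):
        return self.weights @ state

def residual_action(state, target, residual_policy):
    # The paper's key idea: u = pi_hand(s) + pi_theta(s), additively
    # combining the fixed controller with the learnable residual.
    return hand_engineered_controller(state, target) + residual_policy(state)

state, target = np.zeros(3), np.array([0.2, 0.1, 0.3])
print(residual_action(state, target, ResidualPolicy(dim=3)))
```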
Testing on a real robot: The researchers show that residual RL is more sample-efficient than RL alone, verifying this both in simulation and in tests on a real robot. They also show that systems trained with Residual RL better handle confounding situations, like working out how to perform block assembly when the blocks have been moved into positions designed to confuse the hand-written controller.
Why it matters: Approaches like this show how contemporary AI techniques, like the TD3 algorithm used in the experiments here, can be combined with hand-written rule-based systems to create powerful AI applications. This trend is likely to continue, and it suggests that the distinction between systems which contain AI and those which don’t will become increasingly blurred.
Read more: Residual Reinforcement Learning for Robot Control (Arxiv).
NVIDIA researchers show how fake image news is getting closer:
…Synthetic faces roll through the uncanny valley, with a little help from GANs and the use of noise…
Researchers with NVIDIA have shown how to apply techniques cribbed from style transfer work to image generation, creating synthetic images of unparalleled quality. The research indicates that we’re now at the point where neural networks are capable of generating single-frame synthetic images of a quality sufficient to trick (most) humans. While this paper does include a brief discussion of bias inherent to training images (good!) it does not at any point discuss the policy implications of systems capable of generating customizable fake human faces, which feels like a missed opportunity.
How it works: “Our generator starts from a learned constant input and adjusts the “style” of the image at each convolution layer based on the latent code, therefore directly controlling the strength of image features at different scales”, the researchers explain. They also inject noise into the network at various different layers, and find that the addition of noise helps create complex and coherent structure in subtle facial features like hair, earlobes, and so on. “We hypothesize that at any point in the generator, there is pressure to introduce new content as soon as possible, and the easiest way for our network to create stochastic variation is to rely on the noise provided.”
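Here’s a hedged PyTorch sketch of what one such generator block might look like: per-pixel noise is added after the convolution, and the latent code modulates a per-channel scale and bias. The module and parameter names are my own illustration, not NVIDIA’s released implementation.

```python
import torch
import torch.nn as nn

class StyledConvBlock(nn.Module):
    """Sketch of one generator block: conv -> per-pixel noise -> style
    modulation driven by the latent code w."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.noise_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.to_style = nn.Linear(w_dim, channels * 2)  # per-channel scale & bias
        self.norm = nn.InstanceNorm2d(channels)

    def forward(self, x, w):
        x = self.conv(x)
        # Injected noise gives the network an easy source of stochastic
        # variation (hair, stubble, freckles) without distorting structure.
        x = x + self.noise_scale * torch.randn_like(x[:, :1])
        scale, bias = self.to_style(w).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale[..., None, None]) + bias[..., None, None]

block = StyledConvBlock(channels=64, w_dim=512)
out = block(torch.randn(1, 64, 16, 16), torch.randn(1, 512))
```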
Why it matters: These photorealistic faces are especially striking when we consider that ~4 years ago the best AI systems could only generate smeared, flattened, black-and-white pixelated faces, as seen in the original generative adversarial networks paper (Arxiv). I wonder how long it will be until we can generate coherent videos over lengthy time periods.
Read more: A Style-Based Generator Architecture for Generative Adversarial Networks (Arxiv).
Get more information and the data: NVIDIA has said it plans to release the source code, some pre-trained networks, and the FFHQ dataset “soon”. Get them from here (NVIDIA placeholding Google Doc).
Attacking AWS and Microsoft with ‘TextBugger’ adversarial text attack framework:
…Compromising text analysis systems with ‘TextBugger’…
Researchers with the Institute of Cyberspace Research and College of Computer Science and Technology at Zhejiang University; the Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; the University of Illinois at Urbana-Champaign; and Lehigh University have published details on TextBugger, “a general attack framework for generating adversarial texts”.
Adversarial texts are chunks of text that have been manipulated in such a way that they don’t set off alarms when automated classifiers look at them. For example, simply by altering the spelling and spacing of some words (eg, terrible becomes ‘terrib1e’, weak becomes ‘wea k’), the researchers have shown they can confuse a deployed commercial classifier. Similarly, they show how you can change a chunk of text from being classified with 92% confidence as Toxic to having a 78% chance of being non-toxic by changing the spelling of ‘shit’ to ‘shti’, ‘fucking’ to ‘fuckimg’, and ‘hell’ to ‘helled’.
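These character-level ‘bugs’ are simple string transformations; here’s a minimal sketch in Python (the function names are illustrative, not TextBugger’s actual API):

```python
import random

def insert_space(word):  # "weak" -> "wea k"; assumes len(word) >= 2
    i = random.randint(1, len(word) - 1)
    return word[:i] + " " + word[i:]

def swap_neighbors(word):  # "shit" -> "shti"; assumes len(word) >= 2
    i = random.randint(0, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def substitute_lookalike(word):  # "terrible" -> "terrib1e"
    lookalikes = {"l": "1", "o": "0", "i": "1", "a": "@"}
    for i, c in enumerate(word):
        if c in lookalikes:
            return word[:i] + lookalikes[c] + word[i + 1:]
    return word
```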
Attacks against real systems: TextBugger can perform both white-box attacks (where the attacker has access to the underlying classification algorithm) and black-box attacks (where the precise inner details of a targeted system are not known). The researchers show that their approach works against deployed systems, including Google Cloud NLP, Microsoft Azure Text Analytics, IBM Watson Natural Language Understanding, and Amazon AWS Comprehend. The researchers are able to use TextBugger to break the Microsoft Azure and Amazon AWS NLP systems with a 100% success rate; by comparison, Google Cloud NLP holds up quite well, yielding only a 70.1% success rate.
To conduct the black-box attacks, the researchers use the spaCy language processing framework to help them automatically identify the important words and sentences within a given chunk of text, which they then perturb to create adversarial examples.
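A hedged sketch of that black-box loop: use spaCy for tokenization, score each word by how much the target classifier’s confidence drops when it is removed, then perturb the highest-scoring words first. The `classify` function is a stand-in for a call to a deployed API, and the deletion-based importance heuristic approximates, rather than reproduces, the paper’s method:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def words_by_importance(text, classify):
    """Rank words by the confidence drop the classifier shows when each
    word is deleted; classify(text) should return a score in [0, 1]."""
    base = classify(text)
    scores = []
    for token in nlp(text):
        ablated = text.replace(token.text, "", 1)
        scores.append((base - classify(ablated), token.text))
    return [word for _, word in sorted(scores, reverse=True)]
```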
Defending against adversarial examples: The researchers find that it’s possible to defend against these attacks by spellchecking submitted text and using the corrections to identify adversarial examples. Additionally, they show that you can train models to automatically spot adversarial text, though this requires advance knowledge of the attack.
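A toy version of the spellcheck defense, using the pyspellchecker package as one possible checker; the flagging threshold here is an illustrative assumption, not a value from the paper:

```python
from spellchecker import SpellChecker  # pyspellchecker; one possible checker

def looks_adversarial(text, threshold=0.15):
    """Flag inputs with an unusually high fraction of unknown words,
    since bugs like 'terrib1e' and 'wea k' rarely survive a dictionary check."""
    words = text.lower().split()
    unknown = SpellChecker().unknown(words)
    return len(unknown) / max(len(words), 1) > threshold
```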
Why it matters: Now that companies around the world have deployed commercial and non-commercial AI systems at scale, it’s logical that attackers will try to subvert them. As is the case with visual adversarial examples, today’s neural network-based systems are quite vulnerable to subtle perturbations; we’ll need to make systems more robust to deploy AI more widely with confidence.
Read more: TextBugger: Generating Adversarial Text Against Real-world Applications (Arxiv).
Training AI systems to build AI systems by copying people:
…Teaching AI to copy the good parts of human-designed systems, while still being creative…
Researchers with Chinese computer vision giant SenseTime and the Chinese University of Hong Kong have published details on IRLAS, a technique to create AI agents that learn to design AI architectures inspired by human-designed networks.
The technique, Inverse Reinforcement Learning for Architecture Search (IRLAS), works by training a neural network with reinforcement learning to design new networks based on a template derived from a human design. “Given the architecture sampled by the agent as the self-generated demonstration, the expert network as the observed demonstration, our mirror stimuli function will output a signal to judge the topological similarity between these two networks,” the researchers explain.
The motivation for all of this is that the researchers believe “human-designed architectures have a more simple and elegant topology than existing auto-generated architectures”.
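Loosely, you can think of this as reward shaping: the agent’s reward mixes task performance with a topological-similarity signal computed against the expert template. The architecture encoding and similarity measure in this sketch are toy assumptions, not the paper’s actual mirror stimuli function:

```python
def topology_similarity(arch_a, arch_b):
    """Toy stand-in: fraction of layer types that match position-by-position."""
    matches = sum(a == b for a, b in zip(arch_a, arch_b))
    return matches / max(len(arch_a), len(arch_b))

def shaped_reward(sampled_arch, expert_arch, accuracy, lam=0.1):
    # Mix task performance with similarity to the human-designed template.
    return accuracy + lam * topology_similarity(sampled_arch, expert_arch)

print(shaped_reward(["conv3", "pool", "conv3"], ["conv3", "conv3", "pool"], accuracy=0.92))
```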
Results: The researchers use IRLAS to design a network that obtains a 2.60% test error on CIFAR-10, showing “state-of-the-art performance over both human-designed networks and auto-generated networks”. The researchers also train a network against the large-scale ImageNet dataset and show that IRLAS-trained networks can obtain greater accuracies and lower inference times when deployed in a mobile setting.
Why it matters: Automating the design of increasingly large aspects of AI systems lets us arbitrage (expensive) human brains for (cheap) computers when designing new neural network architectures. Economics suggests that as we gain access to more powerful AI training hardware, the costs of using a neural architecture search approach versus a human-driven one will change enough for the majority of networks to be found via AI systems, rather than humans.
Read more: IRLAS: Inverse Reinforcement Learning for Architecture Search (Arxiv).
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net…
Microsoft calls for action on face recognition and publishes ethics principles:
Microsoft have urged governments to start regulating face recognition technology, in a detailed blog post from company president Brad Smith. The post identifies three core problems to be addressed by governments: avoiding bias and discrimination; protecting personal privacy; and protecting democratic freedoms and human rights. For each issue, the post makes clear recommendations about the measures required to address it, and identifies relevant legal precedents.
In the same post, Microsoft announce six principles which will guide their use of face recognition: (1) Fairness; (2) Transparency; (3) Accountability; (4) Nondiscrimination; (5) Notice and consent; (6) Lawful surveillance.
Why this matters: This is a detailed and sensible post, which places Microsoft at the forefront of the discussion around face recognition. This issue is important not only because of the imminent deployment of these technologies, but because it is likely just the first of many AI technologies with far-reaching societal impacts. Given this, our response to face recognition will shape our approach to future developments, and is an important test of our response to changes brought about by AI.
Read more: Facial recognition: It’s time for action (Microsoft).
EU releases coordinated AI strategy:
The EU have released plans to coordinate member states’ national AI strategies under a common strategic framework. Earlier this year, the EU announced a target of €20bn/year in AI investments over the next decade. Core aspects of the Europe-wide plan include a new industry-academia partnership on AI, a strengthened network of research centres, skills training, and a ‘single market for data’. The plan affirms Europe’s commitment to participating in the ethical debate, through the publication of their AI ethics principles in 2019. The EU reiterates its concerns with lethal autonomous weapons, and will continue to advocate for measures to ensure meaningful human control of weapons systems.
Read more: Coordinated Plan on Artificial Intelligence (EU).
OpenAI Bits & Pieces:
Want to predict the largest useful batch size during model training? Measure the noise scale:
New research from OpenAI shows how we can better predict the parallelizability of AI workloads by measuring the gradient noise scale during training, and how to use this measurement to predict aspects of how AI training will scale into the future.
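Concretely, the accompanying paper’s simple noise scale is B_simple = tr(Σ) / |G|², the ratio of gradient noise to gradient signal, and it can be estimated from squared gradient norms measured at two different batch sizes. A sketch of that estimator (the variable names are mine):

```python
def noise_scale_estimate(g_small_sq, g_big_sq, b_small, b_big):
    """Estimate B_simple = tr(Sigma) / |G|^2 from squared gradient norms
    measured with a small batch (b_small) and a big batch (b_big)."""
    # Unbiased estimate of the true squared gradient norm |G|^2.
    g_sq = (b_big * g_big_sq - b_small * g_small_sq) / (b_big - b_small)
    # Unbiased estimate of the noise term tr(Sigma).
    trace_sigma = (g_small_sq - g_big_sq) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g_sq

# Noisier small-batch gradients imply a larger noise scale, which in turn
# suggests bigger batches will keep paying off.
print(noise_scale_estimate(g_small_sq=2.5, g_big_sq=1.1, b_small=32, b_big=1024))
```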
I think measures like this may be surprisingly useful within AI policy. “A central challenge of AI policy will be to work out how to use measures like this to make predictions about the characteristics of future AI systems, and use this knowledge to conceive of policies that let society maximize the upsides and minimize the downsides of these technologies”, we write in the blog post.
Read more: How AI Training Scales (OpenAI Blog).
Tech Tales:
The servants become the flock and become alive and fly.
20%? Fine. 30%? You might have some occasional trouble, but it’ll be manageable. 40%? OK, that could be a problem. 50%? Now you’re in trouble. Once more than 50% of the cars on a road at any one time are self-driving cars, then you run into problems. Overfitting doesn’t seem like such an academic problem when it involves multiple tons of steel each traveling at 50km/h+, slaloming along a freeway.
The problem is that the cars behave too similarly. Without the randomness caused by human drivers, the robotic self-driving cars fall into their own weird pathologies. Local minima. Navigation anti-patterns. Strange turning conventions. Emergent cracks in an otherwise perfect system.
So that led to the manufacturers coming together half a decade ago to conceive of the ‘chaos accords’ – an agreement between all automotive makers about the level of randomness they would try to inject into their self-driving car brains. The goal: recreate a variety of different driving styles in a self-driving car world. The solution: different self-driving cars could now develop different ‘driving personalities’, with the personalities designed to fit within rigorous safety constraints, while offering a greater amount of variety than had been present in previous systems.
Like most epochal events, we didn’t see it coming. Instead, markets took over and as the companies developed more varieties of car with a greater breadth of driving styles, people started to desire more variety in their own cars. This led to the invention of ‘personality evolution’, which would let a self-driving car slowly learn to drive in a way that pleased its main user. Soon after this the companies implemented the same system for themselves, giving many cars the ability to learn from each other and pursue what was called in the technical literature ‘idiosyncratic evolution strategies’.
It seemed like a great thing at first: faster, smarter, safer cars. Cars moving together in fleets through traffic, with the humans inside waving at each other (especially the children); and new services like AI-fleet-driven ‘joyrides’ on souped-up vehicles whose designs came in part from the sensor data of the AI machines. Cars themselves became economic actors, able to assess the ‘unique individual characteristics’ of their own particular driving style, spot the demand for any of their styles or skills on the consumer market, and sell their services to other cars.
None of this looks at all like intelligence, because none of it is. But put enough nodes in the network with enough propensity for emergent properties, let that grow through time, and you get things that do intelligent things, even if the parts aren’t smart.
Things that inspired this story: Imitation learning; overfitting; domain randomization; fleet learning; federated learning; evolution; emergent failure.