Import AI 134: Learning old tricks on new robots; Facebook improves translation with noise; Google wants people to train fake-audio detectors

by Jack Clark

Why robots are the future of ocean maintenance:
…Robot boats, robot copters, and robot underwater gliders…
Researchers with Oslo Metropolitan University and the Norwegian University of Science and Technology are trying to reduce the cost of automated sub-sea data collection and surveillance through the use of robots, and have published a paper outlining one of the key components needed to build such a system: a cheap, lightweight way for small sub-surface gliders to return to the surface.

  Weight rules everything around me: The technical innovations here involve simplifying the design to reduce the number of components needed to build a pressure-tolerant miniature underwater glider (MUG), which in turn reduces the weight of the system, making it easier to deploy and recover via drones.

“Further development will add the ability to adjust pitch and yaw, improve power efficiency, add GPS and environmental sensors, as well as UAV deployment/recovery strategies”, they write.

  Why this esoteric non-AI-heavy paper matters: This paper is mostly interesting for the not-too-distant future it portends: one where robot boats patrol the oceans, releasing underwater gliders to gather information about the environment, and serving as a home base for drones that collect the gliders, carry them back to the robot boat, and act as airborne antennas to relay radio signals between the boats and the gliders. Now, just imagine what you’ll be able to do with these systems once we get cheaper, more powerful computers and better autonomous control & analysis AI systems to deploy onto them: the future is a world full of robots, sensing and responding to minute fluctuations in the environment.

   Read more: Towards autonomous ocean observing systems using Miniature Underwater Gliders with UAV deployment and recovery capabilities (Arxiv).

+++

Sponsored: The O’Reilly AI Conference – New York, April 15–18:

…What do you need to know about AI? From hardware innovation to advancements in machine learning to developments in ethics and regulation, join leading experts with the insight you need to see where AI is going, and how to get there first.
Register soon. Early price ends March 1st, and space is limited. Save up to $800 on most passes with code IMPORTAI20.

+++

DeepMind shows how to teach new robots old tricks:
…Demonstrates prowess of SAC-X + augmented-data approach via completion of a hard simulated and real-world robotics task…
Researchers with DeepMind are going backwards in time – after using reinforcement learning to solve a series of Atari games a few years ago, they’re now heading to the beginning of the 20th century, trying to teach a robot to swing a ball on a string up into a wooden cup. This is a challenging, worthwhile task for real-world robotics: it demands complex movement policies, prediction of the ball’s motion, and a tight interplay between perception and action.

  How they do it: To solve this, DeepMind uses an extension of its Scheduled Auxiliary Control (SAC-X) algorithm, which lets them train across multiple tasks with multiple rewards. Their secret to solving the task robustly on physical robots is to use additional data at training time: the goal is to “simultaneously learn control policies from both feature-based representation and raw vision inputs in the real-world – resulting in controllers that can afterwards be deployed on a real robot using two off-the-shelf cameras”.
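
  For intuition, here is a minimal toy sketch of the scheduling idea at the heart of SAC-X: pick which auxiliary task (or ‘intention’) to pursue next based on how much main-task reward each one has historically yielded. The epsilon-greedy scheduler, the task names, and the placeholder episode loop are all my own illustrative assumptions, not DeepMind’s implementation (which also learns the per-intention control policies with off-policy RL):

```python
import random
from collections import defaultdict

class Scheduler:
    """Toy SAC-X-style scheduler (illustrative, not DeepMind's code):
    prefer auxiliary tasks that have historically led to high main-task
    return; epsilon-greedy stands in for a learned scheduler."""

    def __init__(self, tasks, epsilon=0.2):
        self.tasks = tasks
        self.epsilon = epsilon
        self.returns = defaultdict(list)  # task -> main-task returns observed

    def choose(self):
        # Explore occasionally; otherwise pick the historically best task.
        if random.random() < self.epsilon or not self.returns:
            return random.choice(self.tasks)
        avg = {t: sum(r) / len(r) for t, r in self.returns.items()}
        return max(self.tasks, key=lambda t: avg.get(t, 0.0))

    def update(self, task, main_task_return):
        self.returns[task].append(main_task_return)

# Hypothetical auxiliary tasks for ball-in-a-cup; the paper's task set differs.
sched = Scheduler(["reach", "swing", "catch"])
for episode in range(100):
    task = sched.choose()
    ret = random.random()  # placeholder for the episode's main-task return
    sched.update(task, ret)
```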

   Results: They learn to solve the task in simulation as well as on a real robot, where the resulting policy is robust and successful: “The swing-up is smooth and the robot recovers from failed catches. With a brief evaluation of 20 runs, each trial running for 10 seconds, we measured 100% catch rate. The shortest catch time being 2 seconds.” They also tested the robot with a smaller cup to make the task more difficult; “there was a slight slow-down in learning and a small drop in catch rate to 80%, still with a shortest time to catch of 2 seconds,” they write. The real robot learns the task in about 28 continuous hours of training (more like ~40 hours when you account for re-setting the experiment, etc).

  Why it matters: Getting anything to work reliably on a real robot is a journey of pain, frustration, pain, tedium, and – yes! – more pain. It’s encouraging to see SAC-X work in this domain, and it suggests that we’re figuring out better ways to learn things on real-world platforms.

  Check out the videos of the simulated and real robots here (Google Sites).
  Read more: Simultaneously Learning Vision and Feature-based Control Policies for Real-world Ball-in-a-Cup (Arxiv).

+++

Want better translation models? Use noise, Facebook says:
…Addition of noise can improve test-time performance, though it doesn’t help with social media posts…
You can improve the performance of machine translation systems by injecting some noise into the training data, according to Facebook AI Research. The result is models that are more robust to the sort of crappy data found in the real world, the researchers write.

  Noise methods: The technique uses four noise methods: deletions, insertions, substitutions, and swaps. Deletions remove a character from a sentence; insertions add a random character at a random position; substitutions replace a character with another random character; and swaps exchange the positions of two adjacent characters.
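
  These four operations are simple enough to sketch directly. Below is a minimal Python version, assuming character-level noise over lowercase text; the noise rate, alphabet, and function names are my own, not taken from the paper:

```python
import random
import string

ALPHABET = string.ascii_lowercase  # assumed noise alphabet

def delete_char(s):
    """Deletion: remove a character at a random position."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + s[i + 1:]

def insert_char(s):
    """Insertion: add a random character at a random position."""
    i = random.randrange(len(s) + 1)
    return s[:i] + random.choice(ALPHABET) + s[i:]

def substitute_char(s):
    """Substitution: replace a character with another random character."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def swap_chars(s):
    """Swap: exchange the positions of two adjacent characters."""
    if len(s) < 2:
        return s
    i = random.randrange(len(s) - 1)
    return s[:i] + s[i + 1] + s[i] + s[i + 2:]

def add_noise(sentence, rate=0.1):
    """Apply roughly `rate` noise operations per character (rate is assumed)."""
    ops = [delete_char, insert_char, substitute_char, swap_chars]
    for _ in range(max(1, int(len(sentence) * rate))):
        sentence = random.choice(ops)(sentence)
    return sentence

print(add_noise("a synthetic noise cocktail improves robustness"))
```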

   Results: They test the approach on the IWSLT machine translation benchmark, injecting synthetic noise into the training data and measuring how the resulting models score on test data containing natural noise. “Training on our synthetic noise cocktail greatly improves performance, regaining between 20% (Czech) and 50% (German) of the BLEU score that was lost to natural noise,” they write.

  Where noise doesn’t help: This technique doesn’t help when translating text derived from social media – that’s because errors in social media text tend to stem from the content having a radically different writing and tonal style to what is traditionally seen in training sets, rather than from spelling errors.

  Observation: Conceptually, these techniques have a lot in common with domain randomization, in which people generate synthetic data designed to explore broader variations than would otherwise be found in a dataset. Such techniques have been used for a few years in robotics, where they typically improve real-world performance by making models robust to the significant variations introduced by reality.

  Why this matters: This is another example of the way computers can be arbitraged for data: instead of needing to go and gather datasets with real-world faults, you can algorithmically extend existing datasets by augmenting them with synthetic noise. The larger implication is that computational resources are becoming an ever-more-significant factor in AI development.

   Read more: Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation (Arxiv).

+++

In the future, neural networks will be bred, not created:
…General-purpose population training for those who can afford it…
Population Based Training (PBT) is a recent invention from DeepMind that makes it possible to optimize the weights and hyperparameters of a set of neural networks by periodically copying the weights of the best performers and mutating their hyperparameters. This is part of the broader trend of the industrialization of artificial intelligence, as researchers seek to create automated procedures for doing what was previously done by patient graduate students (eg, fiddling with the weights of different networks, logging runs, pausing and re-starting models, etc).
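
  A minimal sketch of PBT’s exploit/explore loop makes the idea concrete. This toy version evolves a single ‘weight’ on a quadratic loss alongside a learning-rate hyperparameter; the population size, perturbation factors, and objective are all illustrative assumptions, not DeepMind’s setup:

```python
import copy
import random

class Member:
    """One population member: a toy model weight plus a hyperparameter."""
    def __init__(self):
        self.theta = random.uniform(-5.0, 5.0)  # toy "network weight"
        self.lr = 10 ** random.uniform(-3, -1)  # hyperparameter to evolve
        self.score = float("-inf")

def train(member, steps=10):
    """Toy training interval: gradient descent on the loss (theta - 3)^2."""
    for _ in range(steps):
        member.theta -= member.lr * 2 * (member.theta - 3.0)

def evaluate(member):
    """Higher is better: negative loss."""
    return -(member.theta - 3.0) ** 2

def pbt(pop_size=10, rounds=20, cutoff=0.2):
    population = [Member() for _ in range(pop_size)]
    for _ in range(rounds):
        for m in population:
            train(m)
            m.score = evaluate(m)
        ranked = sorted(population, key=lambda m: m.score, reverse=True)
        n = max(1, int(pop_size * cutoff))
        for loser in ranked[-n:]:
            winner = random.choice(ranked[:n])
            # Exploit: copy a top performer's weights and hyperparameters...
            loser.theta = copy.deepcopy(winner.theta)
            # ...then explore: mutate the copied hyperparameters
            # (clamped for stability in this toy example).
            loser.lr = min(0.4, winner.lr * random.choice([0.8, 1.2]))
    return max(population, key=lambda m: m.score)

best = pbt()
print("best theta: %.3f, best lr: %.4f" % (best.theta, best.lr))
```

  The key move is that losers inherit both weights and hyperparameters from winners, so the search effectively discovers a dynamic hyperparameter schedule over training rather than a single static setting.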

The DeepMind system was inspired by Google’s existing ‘Vizier’ service, which provides Google researchers with a system to optimize existing neural networks. In tests, population-based training can converge faster than other approaches, while utilizing hardware resources more efficiently, the researchers say.

  Results: “We conducted a case study of our system in WaveNet human speech synthesis and demonstrated that our PBT system produces superior accuracy and performance compared to other popular hyperparameter tuning methods,” they write. “Moreover, the PBT system is able to directly train a model using the discovered dynamic set of hyperparameters while traditional methods can only tune static parameters. In addition, we show that the proposed PBT framework is feasible for large scale deep neural network training”.

   Read more: A Generalized Framework for Population Based Training (Arxiv).

+++

Google tries to make it easier to detect fake audio:
…Audio synthesis experts attempt to secure the world against themselves…
Google has created a dataset of “thousands of phrases” spoken by its deep learning text-to-speech models, spanning 68 synthetic ‘voices’ with a variety of accents. Google will make this data available to participants in the 2019 ASVspoof challenge, which “invites researchers all over the globe to submit countermeasures against fake (or “spoofed”) speech, with the goal of making automatic speaker verification (ASV) systems more secure”.

   Why it matters: It seems valuable for technology actors to discuss the potential second-order effects of the technologies they work on. It’s less clear to me that training ever-more-exquisite discriminators against increasingly capable generators has a stable end-state, but I’m curious to see what evidence competitions like this generate on that question.

   Read more: Advancing research on fake audio detection (Google blog).

+++

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net

Structural risks from AI:
The discussion of AI risk tends to divide downsides into accident risk and misuse risk. This obscures an important source of potential harms that fits into neither category, which the authors call structural risk.

  A structural perspective: Technologies can have substantial negative impacts in the absence of accidents or misuse, by shaping the world in important ways. For example, the European railroad system has been suggested as an important factor in the outbreak and scope of WWI, by enabling the mass transport of troops and weapons across the continent. A new technology could have a range of dangerous structural impacts: it could create dangerous safety-performance trade-offs, or set up winner-takes-all competition. The misuse-accident perspective focuses attention on the point at which a bad actor uses a technology for malicious ends, or a system acts in an unintended way; this can lead to an underappreciation of structural risks.

  AI and structure: There are many examples of ways in which AI could influence structures in a harmful way. AI could undermine stability between nuclear powers, by compromising second-strike capabilities and increasing the risk of pre-emptive escalation. Worries about AI’s impact on economic competition, the labour market, and civil liberties also fit into this category. Structures can themselves increase AI-related risks. Without mechanisms for international coordination, countries may be pushed towards sacrificing safety for performance in military AI.

  Policy implications: A structural perspective brings to light a much wider range of policy levers, and consideration of structural dynamics should be a focus in the AI policy discussion.

Drawing in more expertise from the social sciences is one way to address this, as these disciplines are more experienced in taking structural perspectives on complex issues. A greater focus on establishing norms and institutions for AI is also important, given the necessity of coordination between actors in solving structural problems.

  Read more: Thinking About Risks From AI: Accidents, Misuse and Structure (Lawfare).

Trump signs executive order on AI:
President Trump has signed an executive order outlining proposals for a new ‘AI Initiative’ across government.

  Objectives: The order gives six objectives for government agencies: promote investment in R&D; improve access to government data; reduce barriers to innovation; develop appropriate technical standards; train the workforce; and create a plan for protecting US advantage in critical technologies.

  Funding: Agencies are encouraged to treat AI R&D as a priority in budget proposals going forward, and to seek out collaboration with industry and other stakeholders. There is no detail on levels of funding, and it is unclear whether, or when, any new funds will be set aside for these efforts.

  Why it matters: The US government has been slow to formulate a strategy on AI, and this is an important step. As it stands, however, the order is little more than a statement of intent; it remains to be seen whether it will translate into action. Without significant funding, the initiative is unlikely to amount to much. The order also lacks detail on the ethical challenges of AI, such as ensuring that benefits are equitably distributed and risks are minimized.

  Read more: Executive Order on Maintaining American Leadership in Artificial Intelligence (White House).

+++

OpenAI Bits & Pieces:

GPT-2:
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.

Also in this release:
Discussion of the policy implications of releasing increasingly large AI models. The release triggered a fairly significant and robust discussion about GPT-2, increasingly powerful models, and appropriate methods for engaging the media and ML communities on topics like publication norms.

   Something I learned: I haven’t spent three or four days directly attached to a high-traffic Twitter meme/discussion before; the most I’d had previously was a couple of one- or two-day bursts related to stories I wrote when I was a journalist, which had different dynamics. Spending this much time on Twitter enmeshed in a tricky conversation made me a lot more sympathetic to the various articles I’ve read about frequent Twitter usage being challenging for mental health reasons. Something to keep in mind for the future!

   Read more: Better Language Models and Their Implications (OpenAI).

Tech Tales:

AGI Romance
+++ ❤ +++

It’s an old, universal thing: girl meets boy or boy meets girl or boy meets boy or girl meets girl or whatever; love just happens. It wells up out of the human heart and comes out of the eyes and seeks out its mirror in the world.

This story is the same as ever, but the context is different: The boy and the girl are working on a machine, a living thing, a half-life between something made by people and something that births itself.

They were lucky, historians will say, to fall in love while working on such an epochal thing. They didn’t even realize it at the time – after all, what are the chances that you meet your one-and-only while working on the first ever machine mind? (This is the nature of statistics – the unlikely things do happen, just very rarely, and to the people trapped inside the probability it can feel as natural and probable as walking.)

You know we’re just mechanics, she would say.
More like makeup artists, he would say.
Maybe somewhere in-between, she would say, looking at him with her green eyes, the blue of the monitor reflected in them.

You know I think it’s starting to do things, he would say.
I think you’re an optimist, she would say.
Anyone who is optimistic is crazy, he would say, when you look at the world.
Look around you, she would say. Clearly, we’re both crazy.

You know I had a dream last night where I was a machine, she would say.
You’re asleep right now, he would say. Wake up!
Tease, she would say. You’ll teach it bad jokes.
I think it’ll teach us more, he would say, filing a code review request.
Where did you learn to write code like this, she would say. Did you go to art school?

You know one day I think we might be done with this, he would say.
I’m sure Sisyphus said the same about the boulder, she would say.
We’re dealing with the bugs, he would say.
I don’t know what are bugs anymore and what are… it, she would say.
Listen, he would say. I trust you to do this more than anyone.

You know I think it might know something, she would say one day.
What do you mean, he would say.
You know I think it knows we like each other, she would say.
How can you tell, he would say.
When I smile at you it smiles at me, she would say. I feel a connection.
You know I think it is predicting what we’ll do, he would say.

You know I think it knows what love is, he would say.
Show me don’t tell me, she would say.

And that would be the end: after that there is nothing but infinity. They will disappear into their own history together, and then another story will happen again, in improbable circumstances, and love will emerge again: perhaps the only constant among living things is the desire to predict the proximity of one to another and to close that distance.

Things that inspired this story: Calm; emotions as a prism; the intimacy of working together on things co-seen as being ‘useful’; human relationships as a universal constant; relationships as a constant; the placid and endless and forever lake of love: O.K.