Import AI: #61: How robots have influenced employment in Germany, AI’s reproducibility crisis, and why Unity is turning its game engine into an AI development system
by Jack Clark
Welcome to Import AI, subscribe here.
Robots in Germany: Lower wages and fewer jobs, but a larger economy through adoption of automation:
…How will the rise of AI influence the economy, and will automation lead to so much job destruction that the economic damage outweighs the gains? These are perennial questions people ask about AI, and will likely keep asking in the future.
…So what actually happens when you apply significant amounts of automation to a given economy? There’s very little data to let us be concrete here, but there have been a couple of recent studies that make things a bit more tangible.
…Several months ago Acemoglu and Restrepo, of MIT and Boston University, published research (PDF) showing that each industrial robot deployed reduced total employment in the surrounding area by about 6.2 workers, and cut total salaries by an average of $200 a year.
…Now, researchers with the Centre for Economic Policy Research, a European economics research network, have studied employment in Germany and its relationship to industrial robots.
…Most striking observation: “Although robots do not affect total employment, they do have strongly negative impacts on manufacturing employment in Germany. We calculate that one additional robot replaces two manufacturing jobs on average. This implies that roughly 275,000 full-time manufacturing jobs have been destroyed by robots in the period 1994-2014. But, those sizable losses are fully offset by job gains outside manufacturing. In other words, robots have strongly changed the composition of employment,” they write.
…”The negative equilibrium effect of robots on aggregate manufacturing employment is therefore not brought about by direct displacements of incumbent workers. It is instead driven by smaller flows of labour market entrants into more robot-exposed industries. In other words, robots do not destroy existing manufacturing jobs, but they do induce firms to create fewer new jobs for young people.”
…A somewhat more chilling trend they notice is that workers in industries that robots are entering tend to be economically disadvantaged: in some industries, employees are willing to “swallow wage cuts in order to stabilise jobs in view of the threat posed by robots.”
…And, for the optimistic crowd: “This worker-level analysis delivers a surprising insight – we find that more robot-exposed workers in fact have a substantially higher probability of keeping a job at their original workplace. That is, robot exposure increased job stability for these workers, although some of them end up performing different tasks in their firm than before the robot exposure.”
…You can read more here: The rise of robots in the German labour market.
AI, charged by the second:
…Many AI developers have an intuition that the way we buy and sell the technology is going to change. Right now, you can buy access to classifiers on a “per-inference” basis if buying pre-wrapped services from companies, but if you want to rent your own infrastructure you will typically be charged by the minute (Google) or by the hour (Amazon, Microsoft). Now, Amazon has cut the increments in which it sells compute down to one second. This will make it easier for people to rapidly spin up and spin down services, and I think should make it easier for people to build weirder, large-scale things for niche AI applications.
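…To make the difference concrete, here’s a toy cost comparison, a minimal sketch assuming an illustrative $1/hour rate and ignoring details like minimum-billing periods; this is not actual AWS pricing logic:

    import math

    HOURLY_RATE = 1.00  # illustrative rate in dollars, not a real AWS price

    def cost_hourly_billing(runtime_seconds):
        # Per-hour billing rounds usage up to whole hours.
        return math.ceil(runtime_seconds / 3600) * HOURLY_RATE

    def cost_per_second_billing(runtime_seconds):
        # Per-second billing charges only for the seconds actually used.
        return runtime_seconds * (HOURLY_RATE / 3600)

    print(cost_hourly_billing(90))      # 1.0: a 90-second job bills a full hour
    print(cost_per_second_billing(90))  # 0.025: pay for 90 seconds only

…For short, bursty jobs that gap is what makes rapid spin-up economical.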
…Read more here: Per-Second Billing for EC2 Instances and EBS Volumes.
$10 million for image-recognition startup Matroid:
…Computer vision startup Matroid has raised $10 million from Intel and NEA for its easy-to-use video analysis tools.
…Read more here.
The un-reproducible world of AI research:
…Researchers with McGill University in Canada and Maluuba, a Microsoft AI acquisition, have published a brave paper describing the un-reproducible nature of much modern AI research.
…To illustrate this, they conduct a series of stress-testing experiments on AI algorithms, ranging from testing variants with and without layer normalization, to modifying some of the fine-grained components used by networks. The results are perturbing: even seemingly minute changes lead to vast differences in performance. They also show how acutely sensitive algorithms are to the random seeds used to initialize them.
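…A minimal sketch of the multi-seed evaluation the paper argues for; train_and_evaluate is a stand-in for any RL training routine, not a function from the paper’s code:

    import numpy as np

    def seed_sensitivity(train_and_evaluate, seeds=(0, 1, 2, 3, 4)):
        # Run the full training pipeline once per seed and report the
        # spread across seeds, rather than a single cherry-picked score.
        returns = [train_and_evaluate(seed=s) for s in seeds]
        return np.mean(returns), np.std(returns)

…Reporting mean ± standard deviation across seeds, instead of the single best run, is one of the simplest fixes available.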
…One of the takeaways of the research is that if performance is so variable (even across implementations of similar algorithms written by the same authors in different years), then researchers should do a better job of proposing appropriate benchmarks to test new tasks on, while ensuring good code quality. Additionally, since no single RL algorithm can (yet) attain great performance across a full range of benchmarks, we’ll need to converge on a set of benchmarks that we as AI practitioners think are worth working on.
…Components used: The paper mostly uses algorithms based on OpenAI’s ‘baselines’ project, an initiative to publish algorithms used and developed by researchers, benchmarked against many tasks.
…Read more here: Deep Reinforcement Learning that Matters.
Microsoft AI & Research division, number of employees:
…2016: ~5,000
…2017: ~8,000
…Read more here.
Who optimizes the optimizer? An optimizer trained via RL, of course!
…Optimizing the things that optimize neural networks via RL to learn to create new optimizers…
…In the wonderfully recursive world of AI research one current trend is ‘learning to learn’. This covers techniques that either let systems learn to rapidly solve broad classes of tasks following exposure to a variety of different data types and environments (RL2, MAML, etc), or use neural networks to invent other neural network architectures and components (see: Neural Architecture Search, Large-Scale Evolution of Image Classifiers, etc).
…Now, new research from Google shows how to learn to generate the update equations used to optimize each layer of a network.
…Results: The researchers test their approach on the CIFAR-10 image dataset and find that their system discovers several update rules with better performance than standbys like Adam, RMSProp, and SGD.
…How it works: The authors create an extremely limited domain-specific language (which doesn’t require parentheses) and train an RNN controller (via PPO) to generate strings in this DSL that specify new update rules.
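…For a flavor of the kind of rule such a DSL can express, here is a minimal sketch of a PowerSign-style update (a form the paper reports discovering); the function name and default hyperparameters here are illustrative:

    import numpy as np

    def powersign_update(param, grad, momentum, lr=0.001, alpha=np.e, beta=0.9):
        # momentum: a running average of past gradients.
        momentum = beta * momentum + (1 - beta) * grad
        # Scale the step up where gradient and momentum agree in sign,
        # and down where they disagree.
        step = lr * alpha ** (np.sign(grad) * np.sign(momentum)) * grad
        return param - step, momentum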
…Notably spooky: Along with discovering a bunch of basic, primitive optimization operations, the system also learns to manipulate the learning rate over time, showing how even systems trained via relatively simple reward schemes can develop great complexity.
…Read more here: Neural Optimizer Search with Reinforcement Learning.
After ImageNet comes… Matterport3D?
…3D scanning startup Matterport has released a dataset consisting of over 10,000 aligned panoramic views (RGB + depth per pixel), built from 194,400 images of 90 building-scale scenes. Most importantly, the dataset is comprehensively labeled, so it should be possible to train AI systems on it to classify, and possibly generate, data relating to these rooms.
…”We’ve used it internally to build a system that segments spaces captured by our users into rooms and classifies each room. It’s even capable of handling situations in which two types of room (e.g. a kitchen and a dining room) share a common enclosure without a door or divider. In the future, this will help our customers skip the task of having to label rooms in their floor plan views,” writes the startup.
…Read more here: Announcing the Matterport3D Research Dataset.
AAAI spins up new conference focused on artificial intelligence and ethics:
…AAAI is launching a new conference focused on AI, Ethics, and Society. The organization is currently accepting paper submissions for the conference on subjects like AI for social good, AI and alignment, AI and the law, and ways to build ethical AI systems.
…Dates: The conference is due to take place in New Orleans, February 2-3, 2018.
…Read more here.
Players of games: Unity morphs game environments into full-fledged AI development systems:
…Game engine creator Unity has released Unity Machine Learning Agents, software to turn games made via the engine into environments in which to train AI systems.
…Any environment built via this system has three main components: agents, brains, and the academy. Agents are the embodied actors in the scene, each acting according to the algorithm running within the brain it is linked to (many agents can be linked to one brain, each agent can have its own brain, or somewhere in between). Brains can communicate via a Python API with AI frameworks such as TensorFlow. The academy sets the parameters of the environment, defining frame-skip, episode length, and various configuration settings relating to the game engine itself (a rough sketch of this split follows below).
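…A minimal sketch of the agent/brain/academy split, written as illustrative Python pseudocode rather than Unity’s actual API (all class and method names here are assumptions):

    class Brain:
        # Holds the decision-making policy; many agents may share one brain.
        def __init__(self, policy):
            self.policy = policy

        def decide(self, observation):
            return self.policy(observation)

    class Agent:
        # An embodied actor in the scene, linked to exactly one brain.
        def __init__(self, brain):
            self.brain = brain

        def act(self, observation):
            return self.brain.decide(observation)

    class Academy:
        # Sets environment-wide parameters such as frame-skip and episode length.
        def __init__(self, frame_skip=4, max_episode_steps=1000):
            self.frame_skip = frame_skip
            self.max_episode_steps = max_episode_steps

    # Two agents sharing a single brain:
    shared_brain = Brain(policy=lambda obs: 0)  # trivial stand-in policy
    agents = [Agent(shared_brain), Agent(shared_brain)]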
…Additional quirks: One feature the software ships with is the ability to give agents access to multiple camera views simultaneously, which could be handy for training self-driving cars or other systems.
…Read more here: Introducing Unity Machine Learning Agents.
McLuhan’s revenge: the simulator is the world is the medium is the message:
…Researchers with the Visual Computing Center at KAUST in Saudi Arabia have spent a few months performing code surgery on the ‘Unreal Engine 4’ game engine to create ‘UE4Sim’, software for large-scale reinforcement learning and computer vision development.
…People are increasingly looking to modern game simulators as components to be used within AI development, because they let you procedurally generate synthetic data against which you can develop and evaluate algorithms. This is very helpful! In the real world, if I want to test a new navigation policy on a drone I have to a) run the risk of it crashing and costing me money and b) deal with the fact that my environment is non-stationary, so it’s hard to perfectly re-simulate crash circumstances. Simulators do away with these problems by giving you a tunable, repeatable, high-definition world.
…One drawback of systems like this, though, is that at some point you still want to bridge the ‘reality gap’ and attempt to transfer from the simulator into reality. Even with techniques like domain randomization it’s not clear how well we can do that today. UE4Sim looks reasonably nice, but it’s going to have to demonstrate more features to woo developers away from systems based on Unity (see above) or developed by other AI-focused organizations (see: NVIDIA: Isaac, DeepMind: DeepMind Lab, StarCraft, Facebook: TorchCraft).
…”The simulator provides a test bed in which vision-based trackers can be tested on realistic high-fidelity renderings, following physics-based moving targets, and evaluated using precise ground truth annotation,” they write.
…Read more: UE4Sim: A Photo-Realistic Simulator for Computer Vision Applications.
Open data: Chinese startup releases large Mandarin speech recognition corpus:
…When researchers want to train speech recognition systems on English they have a wealth of viable large-scale datasets; there are far fewer of these for Mandarin. The largest dataset released so far is THCHS-30, from Tsinghua University, which contains around 30 hours of speech from 50 speakers.
…To alleviate this, startup Beijing Shell Shell Technology Co has released AISHELL-1, an open source (Apache 2.0) dataset consisting of over 170 hours of speech from 400 speakers. Each speaker is recorded in parallel by three classes of device (high-fidelity microphones, Android phones, iPhones).
…Components used: Kaldi, a speech processing framework.
…”To our best knowledge, it is the largest academically free data set for Mandarin speech recognition tasks,” the researchers write. “Experimental results are presented using the Kaldi recipe published along with the corpus.”
…Read more here: AISHELL-1: An Open-Source Mandarin Speech Corpus and A Speech Recognition Baseline.
OpenAI Bits&Pieces:
Tech Tales:
[30??: A crater on the Moon.]
The building hugs the contours of the crater, its roof bending up the sides, its walls sloping down to mate with the crater’s edges and scalloped slices. Stars wheel overhead, as they have done for many years. Every two weeks the data center inside the building dutifully passes a payload to the communications array, which beams the answers out, minute by minute, over the next two weeks before another batch arrives.
It’s a kind of relic: computing immensely complex collections of orbital trajectories, serving as an Asteroid & Associated Planets weather service. It is also slowly working its way through the periodic table, performing an ever-expanding set of (simulated) chemical experiments, and reporting those results as well. The Automatic, Moon-based Scientist, the newspapers had said when it was built.
Now it is tended to by swarms of machines – many of them geriatric with eons, some of them broken, others made of the cannibalized components of other broken robots. They make bricks out of lunar regolith, building walls and casements to protect from radiation and ease temperature variation; they hollow out great tunnels for the radiator fins that remove computer heat from the building; and some of them tend to the data center itself, replacing server and storage equipment as it (inevitably) fails, drawing on a military-scale stockpile that has now almost run out.
And so every two weeks it sends its message out into the universe. It will not come to the end of its calculations before its last equipment stores break or run out, one of its overwatch systems calculates. An asteroid strike 500 years ago took out some of its communications infrastructure, deafening it to incoming messages. No human has returned to its base since around 2200, when they updated some of its computers, refreshed its stocks, and gave it the periodic table task.
If it were able to listen it would hear the chorus of data from across the solar system – the flooding, endless susurration of insight being beamed out by its brothers and sisters, now secreted and embodied in moons and asteroids and planets, their collective processing far outstripping that of their dwindling human forebears. But it can’t. And so it continues.
A thousand years ago the humans would tell a story about a whale that was doomed to sing in a language no other whale could hear. That whale lived, as well.