Import AI

Import AI: #74: Why Uber is betting on evolution, what Facebook and Baidu think about datacenter-scale AI computing, and why Tacotron 2 means speech will soon be spoofable

All hail the new ‘datacenter scale’ era for AI:
…Facebook Research Paper goes through some of the many problems that come with running AI at the scale of an entire datacenter…
Facebook has published an analysis of how it runs its worldwide fleet of AI services and how its scale has influenced the way it has deployed AI into production. The company uses both CPUs and GPUs, with GPUs being used for large-scale face recognition, language translation, and the ‘Lumos’ feature-analysis service. It also runs a significant amount of work on CPUs; one major workload is ranking features for newsfeed. “Computer vision represents only a small fraction” of the total work, Facebook writes.
  Split languages: Facebook uses ‘Caffe2’ for its production systems, while its researchers predominantly use PyTorch. Though the company’s main ML services (FBLearner Feature Store / FBLearner Flow / FBLearner Predictor) support a bunch of different AI frameworks, they’ve all been specially integrated with Caffe2, the company says.
  The big get bigger: Facebook, like other major AI users, is experimenting with running significantly larger AI models at larger scales: this has altered how it places and networks together its GPU servers, as well as directed it to spin-up research in areas like low-precision training. They’re also figuring out ways to use the scale to their advantage. “Using certain hyperparameter settings, we can train our image classification models to very large mini-batches, scaling to 256+ GPUs,” they write. “For one of our larger workloads, data parallelism has been demonstrated to provide 4x the throughput using 5x the machine count (e.g., for a family of models that trains over 4 days, a pool of machines training 100 different models could now train 20 models per day, so training throughput drops by 20%, but the wait time for potential engineering advancement improves from four days to one day).”
  One GPU region to train them all: When Facebook was first experimenting with GPUs for deep learning it rolled out GPUs in a single data center region, which it figured was a good decision as the designs of the servers were changing and the teams needed to get used to maintaining them. This had some fairly negative consequences down the road, forcing a rethink of how the company distributes its data center resources and infrastructure.
Read more: Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective.

Baidu publishes some rules-of-thumb for how model size relates to performance:
The beginnings of a theory for deep learning…
Deep learning is an empirical science – we don’t fully understand how various attributes of our neural networks dictate their ultimate representational capacity. That means the day-to-day work of any AI organization involves a lot of empirical experimentation. Now, researchers with Baidu have attempted to formalize some of their ideas about how the scale of a deep learning model relates to its performance.
  “Through empirical testing, we find predictable accuracy scaling as long as we have enough data and compute power to train large models. These results hold for a broad spectrum of state-of-the-art models over four application domains: machine translation, language modeling, image classification, and speech recognition,” they write.
  The results suggest that once researchers get a model to a certain threshold of accuracy, they can be confident that simply adding compute and/or data will carry it to a predictable level of performance, within a rough margin of error. “Model error improves starting with “best guessing” and following the power-law curve down to “irreducible error”,” they say. “We find that models transition from a small training set region dominated by best guessing to a region dominated by power-law scaling. With sufficiently large training sets, models will saturate in a region dominated by irreducible error (e.g., Bayes error).”
  The insight is useful but still requires per-domain experimental validation: the researchers find similar learning curves across a variety of test domains, “although different applications yield different power-law exponents and intercepts”.
  It is also a further sign that compute will become as strategic as data to AI, with researchers seeking to be able to run far more empirical tests and scale-up far more frequently when equipped with somewhat formal intuitions like the one stumbled upon by Baidu’s research team.
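  To make the power-law framing concrete, here is a minimal, hypothetical sketch of the kind of curve-fitting the paper describes: fitting validation error as a function of training set size m to error(m) ≈ a * m^(-b) + c, where c stands in for the irreducible error. All numbers below are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(m, a, b, c):
        # validation error as a function of training set size m:
        # b is the power-law exponent, c the irreducible-error floor
        return a * np.power(m, -b) + c

    # hypothetical (training set size, validation error) measurements
    sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
    errors = np.array([0.42, 0.31, 0.22, 0.16, 0.12])

    (a, b, c), _ = curve_fit(power_law, sizes, errors, p0=[1.0, 0.3, 0.05])
    print("exponent:", b, "estimated irreducible error:", c)
    print("predicted error with 10x more data:", power_law(1e7, a, b, c))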
– Read more here: Deep Learning Scaling is Predictable, Empirically (Baidu blog).
– Read more here: Deep Learning Scaling is Predictable, Empirically (Arxiv).

Evolution, evolution everywhere at Uber AI Labs:
…Suite of new papers shows the many ways in which neuroevolution approaches are contemporary and complementary to neural network approaches…
Uber’s AI research team has published a set of papers that extend and augment neuroevolution approaches – continuing the long-standing professional fascinations of Uber researchers like Ken Stanley (inventor of NEAT and HyperNEAT, among others). Neuroevolution is interesting to contemporary AI researchers because it provides a method to use compute power to push simple algorithms through the more difficult parts of hard problems rather than having to invent new algorithmic pieces to get us across certain local minima; with evolutionary approaches, the difference between experimental success and failure is often dictated by the amount of compute applied to the problem.
–  Exploration: The researchers show how to further tune the exploration process in evolutionary strategies (ES) algorithms through the alternation of novelty search and quality diversity algorithms. They also introduce new ideas to improve the mutation process of large neural networks.
–  Theory: The researchers compare the approximate gradients computed by ES with the exact gradient computed by stochastic gradient descent (SGD) and design tools to better predict how ES performance relates to scale and parallelization.
–  Big compute everywhere: “For neuroevolution researchers interested in moving towards deep networks there are several important considerations: first, these kinds of experiments require more computation than in the past; for the experiments in these new papers, we often used hundreds or even thousands of simultaneous CPUs per run. However, the hunger for more CPUs or GPUs should not be viewed as a liability; in the long run, the simplicity of scaling evolution to massively parallel computing centers means that neuroevolution is perhaps best poised to take advantage of the world that is coming,” they write.
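  For readers who haven't worked with these methods, here is a stripped-down sketch of the evolution strategies loop that several of the papers build on (a toy objective stands in for episode returns from a real environment, and the hyperparameter values are arbitrary):

    import numpy as np

    def fitness(theta):
        # stand-in for the return of one episode under policy parameters theta
        return -np.sum((theta - 3.0) ** 2)

    theta = np.zeros(10)                # policy parameters
    npop, sigma, alpha = 50, 0.1, 0.02  # population size, noise scale, step size

    for step in range(200):
        noise = np.random.randn(npop, theta.size)
        rewards = np.array([fitness(theta + sigma * n) for n in noise])
        # standardize rewards, then move theta toward the better perturbations:
        # an approximate gradient step on expected fitness
        advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += alpha / (npop * sigma) * noise.T.dot(advantages)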
– Read more here: Welcoming the Era of Deep Neuroevolution (Arxiv).
– Read more: Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning (Arxiv).
– Read more: Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients.
– Read more: On the Relationship Between the OpenAI Evolution Strategy and Stochastic Gradient Descent (Arxiv).
– Read more: ES Is More Than Just a Traditional Finite Difference Approximator (Arxiv).
– Read more: Improving Exploration in Evolution Strategies for Deep Reinforcement Learning via a Population of Novelty-Seeking Agents.

US National Security Strategy picks out AI’s potential damage to the information battlespace:
AI’s ability to create fake news and aid surveillance picked out in NSS report…
While other countries around the world publish increasingly complicated, detailed national AI development strategies, the US government is instead adopting a ‘business as usual’ approach, judging by the NSS document, which explicitly mentions AI in (only!) two places – as it relates to innovation (named amid a bundle of different technologies as something to be supported), and national security. It’s the latter point which has more ramifications: the NSS explicitly names AI within the ‘Information Statecraft’ section as a potential threat to US national security.
  “Risks to U.S. national security will grow as competitors integrate information derived from personal and commercial sources with intelligence collection and data analytic capabilities based on Artificial Intelligence (AI) and machine learning. Breaches of U.S. commercial and government organizations also provide adversaries with data and insights into their target audiences,” the NSS says. “China, for example, combines data and the use of AI to rate the loyalty of its citizens to the state and uses these ratings to determine jobs and more. Jihadist terrorist groups continue to wage ideological information campaigns to establish and legitimize their narrative of hate, using sophisticated communications tools to attract recruits and encourage attacks against Americans and our partners. Russia uses information operations as part of its offensive cyber efforts to influence public opinion across the globe. Its influence campaigns blend covert intelligence operations and false online personas with state-funded media, third-party intermediaries, and paid social media users or “trolls.” U.S. efforts to counter the exploitation of information by rivals have been tepid and fragmented. U.S. efforts have lacked a sustained focus and have been hampered by the lack of properly trained professionals. The American private sector has a direct interest in supporting and amplifying voices that stand for tolerance, openness, and freedom.”
Read more: National Security Strategy of the United States of America (PDF).

Goodbye, trustworthy phone calls, hello Tacotron 2:
…Human-like speech synthesis made possible via souped-up Wavenet…
Google has published research on Tacotron 2, text-to-speech (TTS) software that the company has used to generate synthetic audio samples that sound just like human beings.
  Results: One model attains a mean opinion score (MOS) of 4.53 compared to the 4.58 typically given to professionally recorded speech. You can check out some of the Tacotron 2 audio samples here; I listened to them and had trouble telling the difference between human and computer speakers. The researchers also carried out a side-by-side evaluation between audio synthesized by their system and the ground truth and found that people still have a slight preference towards ground truth (human-emitted spoken dialogue) versus the Tacotron 2 samples. Further work will be required to train the system to be able to deal with unusual words and pronunciations, as well as figuring out how to condition it at runtime to make a particular audio sample sound happy, sad, or whatever.
The next step for systems like this will be re-training the synthetic voices to match a target speaker using a relatively small amount of data, then figuring out how to condition such systems with accents or other speech tics to better mimic the target.
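  The pipeline's key intermediate representation is the mel spectrogram: a Tacotron-style network maps characters to a mel spectrogram, and a WaveNet-style vocoder turns that spectrogram into a waveform. As a rough illustration of that representation (not the Tacotron 2 code itself, and with illustrative parameter values), here's how you might compute an 80-band log-mel spectrogram with librosa:

    import numpy as np
    import librosa

    # load speech audio and compute the kind of acoustic target a
    # Tacotron-style model is trained to predict frame by frame
    y, sr = librosa.load("speech.wav", sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=80)
    log_mel = np.log(np.clip(mel, 1e-5, None))
    print(log_mel.shape)  # (80, number_of_frames)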
Read more: Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions.

Chinese chip startup Horizon Robotics releases surveillance chip:
…Chip focuses on surveillance, self-driving…
Horizon Robotics has released the ‘Journey 1.0 processor’, a chip that (according to Google translate), “has the ability to accurately detect and recognize pedestrian, motor vehicle, non-motorized vehicle and traffic sign at the same time. The intelligent driving platform based on this chip supports the detection of 260 kinds of traffic signs, and the recognition accuracy to the traffic lights of traffic lights, current lanes and adjacent lanes is more than 95%.”
  Each chip “can detect 200 visual targets at the same time,” the company says.
  China’s chip boom: China is currently undergoing a boom in the number of domestic startups developing specialized AI chips for inference and training – part of a larger national push to create national champions with semiconductor expertise and to provide significant competition to traditional chip companies like Intel, AMD, IBM, and NVIDIA.
– Read more on this Chinese press release from Horizon.
– Check out Horizon’s website.

Salesforce researchers craft AI architecture generator, marvel at its creation of the high-performance, non-standard ‘BC3’ cell:
…Giving neural architecture search a supervised boost via a Domain-Specific Language…
Salesforce’s approach to neural architecture search relies on human supervision in the form of a domain specific language (DSL) which is given to the AI. The intuition here is that the human can specify a small shopping list of AI components which the system can evaluate, and it will figure out the best quantity and combination of these components to solve its tasks.
  One drawback of neural architecture search is that it can be expensive – not only because of the computation spent trying out different architectures, but also because of the storage and compute needed to fully train and test each candidate. The Salesforce researchers try to get around this by using a recursive neural network to iteratively predict the performance of new architectures, reducing the need for full-blown training and testing of the models.
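  A toy version of that idea (the paper itself uses a recursive neural network over architectures defined in its DSL; the featurization and regressor below are invented for illustration): describe each candidate architecture as a feature vector, fit a cheap predictor on the candidates you have already trained to completion, and use it to rank fresh candidates before spending compute on them.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def encode(arch):
        # hypothetical featurization of a candidate drawn from a small DSL
        return [arch["layers"], arch["hidden"], arch["gates"]]

    # candidates we already trained fully, with their validation scores
    evaluated = [({"layers": 2, "hidden": 256, "gates": 1}, 0.71),
                 ({"layers": 3, "hidden": 512, "gates": 2}, 0.78),
                 ({"layers": 4, "hidden": 512, "gates": 3}, 0.74)]
    X = np.array([encode(a) for a, _ in evaluated])
    y = np.array([score for _, score in evaluated])
    predictor = RandomForestRegressor(n_estimators=100).fit(X, y)

    # rank new candidates by predicted score; only the best get a full run
    candidates = [{"layers": 3, "hidden": 256, "gates": 2},
                  {"layers": 5, "hidden": 512, "gates": 2}]
    preds = predictor.predict(np.array([encode(a) for a in candidates]))
    best = candidates[int(np.argmax(preds))]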
  Results: Architectures trained with Salesforce’s approach perform comparably to the state-of-the-art on tasks like language understanding and machine translation – with the benefit of having been designed almost entirely by computers autonomously coming up with effective architectures, rather than by machine learning researchers expending their own time on the problem.
  The mystery of the ‘BC3’ cell: Like all good research papers, this one contains an easter egg: the discovery of the ‘BC3’ cell, which appeared in various top-performing architectures. This cell has the odd trait of containing “an unexpected layering of two Gate3 operators,” they write. “While only the core DSL was used, BC3 still breaks with many human intuitions regarding RNN architectures.”
  Neural architecture search techniques seem to be in their infancy today but are likely to become very significant over the next two years as these techniques will benefit tremendously from the arrival of new fast computer hardware, like custom AI chips from firms like Google (TPUs) and Graphcore, as well as new processors from AMD, NVIDIA, and Nervana (Intel).
Read more: A Flexible Approach to Automated RNN Architecture Generation.

Tech Tales:

[Detroit VRZoo Sponsored by WorldGoggles(TM), 2028]

“Daddy, daddy, it’s swinging from the top of its cage! And now it’s hanging with one arm. Oh wait… gross! Daddy it just pooped and now it’s throwing it across the cage!”
  You stare at the empty, silent enclosure. Look at the rounded prison bars, buffed smooth by the oils from decades of curious hands, then down to your kid who has their WorldGoggles on and is staring at the top left corner of the cage with an expression that you suspect is childlike wonder.
  “Daddy you’re missing it, come on!,” they say, grabbing your sleeve. “Put yours on.”
  Okay, you say, tugging the glasses down over your eyes. The cage in front of you becomes alive – a neon orange, static-haired orangutan dangles from the top bar of the cage with one arm and uses its other to scoop into its backside then sling poo at a trio of hummingbirds on the other side of the cage, which dodge from side-to-side, avoiding the flung shit.
   “Woah look at that,” your kid says. “Those are some smart birds!” The kid plants their feet on the floor and bounces from side to side, darting their hips left and right, mimicking the dodging of the birds.

After the poo-throwing comes the next piece of entertainment: the monkey and the birds play hide and seek with each other, before being surprised by a perfectly rendered digital anaconda, hidden in one of the fake rock walls of the augmented reality cavern. After that you rent the three creatures a VR toy you bought your kid last weekend so they can all play a game together. Later, you watch your child gaze up at digital tigers, or move their head from side to side as they follow the just-ever-so-slightly pixelated bubbles of illusory fish.

Like most other parents you spend the majority of the day with your goggles flipped up on your head, looking at the empty repurposed enclosures and the various electronic sensors that stud the corners and ceilings of the rooms where the living animals used to be. The buildings ring out with the happy cries of the kids and low, warm smalltalk between parents. But there are none of the smells of a regular zoo: no susurrations from sleeping or playing animals, no swinging of chains.

The queue for the warthog is an hour long and after fifteen minutes the kid is bored.
  Or as they say: “Daddy I’m B O R E D Bored! Can we go back to the monkeys.”
  It was an orangutan. And, no, we should see this.
  “What does it do?”
  It’s a warthog.
  “Yes I know Dad but what does it do?”
  It’s alive, you say. They keep talking to you but you distract them by putting on your goggles and playing a game of augmented reality tennis with them, their toy, and the birds who you pay an ‘amusement fee’ to coax over to the two of you.

When you get into the warthog’s cage it reminds you of the setup for Lenin’s tomb in Moscow – a strange, overly large enclosure that the crowd files around, each person trudging as slowly as they can. No one has goggles on, though some kids fiddle with them. It’s as quiet as a church. You can even hear the heavy breathing of the creature, and at one point it burps, causing all the kids to giggle. “Wow,” your kid whispers, then points at the warthog’s head. It’s got a red Santa Hat on – some of the white threading around the base is tarnished with a brown smudge, either dirt or poo. Your kid tries to put on their goggles to take a photo and you stop them and whisper “just look”, and all the other parents look at you with kind eyes. Outside, later, it snows and there’s only a hint of smog in the flakes. Your kid imitates the warthog and bends forward, then runs ahead of you, pretending to burp like a living thing.

Technologies that inspired this story: Augmented Reality; Magic Leap, Hololens. The Berlin zoo. Multi-agent environments. Mobile phone games.

Import AI: #73: Generative steganography, automated data fuzzing with imgaug, and what happens when neural networks absorb database software

Welcome to Import AI, subscribe here.

Accidental steganography with CycleGAN:
…Synthetic image generators create their own optical illusions…
Researchers with Google have identified some surprising information storage techniques used by CycleGAN, a tool that can be used to learn correspondences between different sets of images and generate synthetic images. Specifically, the researchers find that during CycleGAN training the network encodes additional information into the images it is generating to help it reconstruct original images from synthetic sources. “This suggests that the majority of information about the source photograph is stored in a high-frequency, low-amplitude signal within the generated map,” the researchers write.
  This also means it’s possible to use CycleGANs to create adversarial synthetic images, where a pattern of noise in the source image will cause the network to reconstruct a completely different image. “We claim that CycleGAN is learning an encoding scheme in which it “hides” information about the aerial photograph x within the generated map F(x),” they write.
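  The incentive to hide information comes from CycleGAN's cycle-consistency term: the photo-to-map generator F and map-to-photo generator G are trained so that G(F(x)) reconstructs the original photo, which rewards F for smuggling whatever G needs into the map, even as an imperceptible high-frequency signal. A schematic version of that reconstruction term (illustrative, not the paper's code):

    import numpy as np

    def cycle_consistency_loss(x, F, G):
        # x: aerial photo; F: photo -> map generator; G: map -> photo generator.
        # The L1 reconstruction penalty is minimized when F(x) carries enough
        # information for G to rebuild x, however that information is encoded.
        return np.mean(np.abs(G(F(x)) - x))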
Read more: CycleGAN, a Master of Steganography.

Generating synthetic training data with imgaug:
…Will we be applying the CoarseDropout today, sir? Perhaps with some salt and pepper? And how about some affine scaling as well?…
One of the most common dull parts of machine learning is data augmentation: that’s the process people use to take an existing dataset, like a collection of cat photos, and massively expand its size by transforming the images in a variety of ways. New free software called imgaug automates this process, giving users a vast number of potential transforms to automatically apply to their images.
  “It supports a wide range of augmentation techniques, allows to easily combine these, has a simple yet powerful stochastic interface, can augment images and keypoints/landmarks on these and offers augmentation in background processes for improved performance,” the authors write.
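  As a rough sketch of the interface (augmenter names taken from the imgaug docs; exact arguments may differ between versions), the menu of CoarseDropout, salt-and-pepper noise, and affine scaling mentioned above chains together like this:

    import numpy as np
    from imgaug import augmenters as iaa

    seq = iaa.Sequential([
        iaa.Fliplr(0.5),                                # horizontal flip half the time
        iaa.CoarseDropout(0.05, size_percent=0.1),      # knock out coarse rectangles
        iaa.SaltAndPepper(0.03),                        # salt-and-pepper noise
        iaa.Affine(scale=(0.8, 1.2), rotate=(-15, 15))  # affine scaling and rotation
    ])

    images = np.random.randint(0, 255, (16, 64, 64, 3), dtype=np.uint8)
    augmented = seq.augment_images(images)              # 16 transformed copies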
– Read the imgaug docs here.
– View imgaug on GitHub here.

I can’t B-TREE’ve it: Google learns index structures with machine learning:
…Goodbye, traditional software, hello, deep learning software…
After deep learning techniques fundamentally altered the capabilities of computer-implemented sensory recognition and analysis systems it was only a matter of time till such techniques came for software itself. A new research paper from Google shows how to use modern artificial intelligence approaches to significantly advance upon the state-of-the-art for one of the more fundamental operations in computer science: implementing an indexing system for a large repository of data.
  In the paper, the research team shows how to implement neural-network based ‘learned indexes’ that work as a substitute for traditional Btree-style indexes. In the future, the team plans to explore applying such techniques to write operations like inserts, as well as other fundamental database algorithms like those concerned with joining and sorting data.
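  The core idea is to treat the index as a model of the data's cumulative distribution: a small model predicts roughly where a key sits in the sorted array, and a bounded local search corrects any error. A toy sketch of that pattern (a simple fitted curve stands in for the staged neural networks used in the paper):

    import numpy as np

    keys = np.sort(np.random.lognormal(0, 2, 1_000_000))  # sorted, read-only data
    positions = np.arange(len(keys))

    # crude "model" of the key -> position mapping; the paper learns this
    # with a hierarchy of small neural nets rather than a polynomial fit
    coeffs = np.polyfit(np.log(keys), positions, 1)
    predicted = np.polyval(coeffs, np.log(keys))
    max_err = int(np.ceil(np.max(np.abs(predicted - positions))))

    def lookup(key):
        guess = int(np.polyval(coeffs, np.log(key)))      # predicted position
        lo = max(0, guess - max_err)                      # search only inside the
        hi = min(len(keys), guess + max_err + 1)          # model's worst-case error
        return lo + int(np.searchsorted(keys[lo:hi], key))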
  The Google team test their approach on four large-scale data domains: two real-world integer datasets from Google’s own systems (Maps and weblogs), a web-document dataset that contains ’10m non-continuous document-ids of a large web index used as part of a real product at a large internet company’, and a synthetic dataset called Lognormal.
  Results: “The learned index dominates the B-Tree index in almost all configurations by being up to 3× faster and being up to an order-of-magnitude smaller. Of course, B-Trees can be further compressed at the cost of CPU-time for decompressing. However, most of these optimizations are not only orthogonal but for neural nets even more compression potential exist. For example, neural nets can be compressed by using 4- or 8-bit integers instead of 32- or 64-bit floating point values to represent the model parameters,” they write. Their implementation uses CPUs, while in the future the researchers think GPUs and new AI-specific compute substrates like TPUs could accelerate things further.
  Doubts about practicality: The Google researchers state within the research paper that approaches like this will require substantially more compute before they become viable. But since new, powerful compute substrates are arriving via TPUs, Cerebras, Graphcore, and others, that seems like a reasonable thing to bet on. Others have more substantive quibbles about the paper. “It assumes a static data set being used in read-only fashion, so it’s unsuitable for a directory or database that serves ongoing modifications. It also assumes an entire data set fits in RAM, which is generally not true for database applications. In particular, the “fast” case of using highly parallel GPUs assumes everything fits inside GPU RAM, which is even more tightly constrained than server main memory,” writes Howard Chu, the CTO of Symas Corp, in this OpenLDAP email.
– Read more: The Case for Learned Index Structures (Arxiv).

Learned network topologies that approach optimal topologies:
From the dept. of ‘everything with an input-output pair gets automated’…
New research from Duke University / UESTC (China) / Brown University / NEC Labs shows how to use deep learning approaches to train an AI policy to predict close-to-optimal networking topologies for datacenters, via software called DeepConf. The research is mostly interesting because it’s another demonstration of the recent trend of reframing any problem where inputs can be matched with outputs (say, packets flooding into a data center paired with an optimal topology, image pixels paired with a label, or audio waveforms paired with transcribed speech) as a learning problem. Eventually perhaps everything with an input-output pair can be re-evaluated using these powerful AI techniques and tools.
Read more here: DeepConfig: Automating Data Center Network Topologies Management with Machine Learning.

First AI analyzed the visual world. Now it analyzes the digital world:
Neural networks begin to make their way into everything…
Software 2.0: A few weeks ago Andrej Karpathy (former Stanford/OpenAI, now doing AI at Tesla) said he is increasingly thinking that neural networks are fundamentally altering software to the point it needs its own new brand/era: Software 2.0.
   “It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks,” Karpathy writes.
   This research from Google, along with some of the chemistry papers from last week, and ongoing innovations in techniques like neural architecture search, all give us empirical evidence that people are beginning to rethink the act of designing software with AI and also how different real world domains can benefit from AI-infused systems. The next stage is to rethink the fundamentals of how optimized computer operations work with AI – though I don’t think anyone is looking forward to the bugs that will emerge as a consequence of this decision.
– Read more here: Software 2.0 (Medium).

Black in AI at NIPS:
This year NIPS hosted ‘Black in AI’ and DeepMind researcher Simon Osindero gave a speech there, which he has been generous enough to make publicly available. It hits on a bunch of tough issues the AI community needs to struggle with, ranging from issues of inclusivity and prejudice, to a bunch of suggestions for how the community can improve its representation.
  “We can also use our diverse backgrounds to inject broader perspectives into the AI field as a whole. Hopefully, by doing so, we can do a better job at ensuring that the AI applications and systems that we develop don’t inherit some of the problematic biases that are still present in society at large, and instead help them become fairer, and more transparent and accountable,” Simon says.
Read more here: My talk at the inaugural Black in AI workshop dinner (Medium).
A story about Simon: When I attended NIPS in Montreal in 2015 I, like everyone else there, drank far too late far too frequently into the evenings at a variety of AI events. By Friday morning I was feeling the effects, yet managed to crawl out of bed and make it to a reinforcement learning workshop in the morning. After trudging into the workshop I saw a perky-looking Simon in a chair a couple of rows in front of me and I asked him something to the effect of: “Simon, I’m so bloody tired, how do you do it?” Simon raised up an ibuprofen pill bottle and shook it slightly and explained: “each scientific revolution builds upon the previous one.”

Allen Institute for AI reveals ‘THOR’ 3D agent-training environment:
Enter The House of inteRactions (THOR) at your potential peril to gain a potential reward…
AI2 has released THOR, an AI simulation environment based on the Unity 3D game engine. THOR contains over 120 “near photo-realistic 3D scenes” that have been hand modeled by human artists (as opposed to the more common approach of generating environments procedurally). THOR environments can contain numerous so-called actionable objects which can each be ‘interacted’ with – that is, an agent can manipulate them in crude ways to change their state like placing one object inside another, or opening and closing cupboards and drawers.
  High-quality scenes: The paper says the high visual fidelity of THOR scenes allows “better transfer of the learned models to the real world”, which is backed up by THOR’s usage in prior research, including a project that trained a remote control car in simulation and transferred it into reality. It’s hard to fully assess that claim without seeing experimental validation, though: there are numerous sim2real techniques, like ‘domain randomization’, that make it possible to take low-fidelity simulations and transfer models into reality through data augmentation.
  An endless proliferation of 3D environments: In the past couple of years there have been a bunch of new large-scale AI-training environments released ranging from Microsoft’s Minecraft-based Malmo to DeepMind’s Quake-based ‘DeepMind Lab’, to the Doom-based VizDoom. It’s interesting to observe how the choice of game engine dramatically inflects the ultimate design and parameters of these AI-training systems, so I’d expect to see more Unity or other engines being used in AI research.
Read more: AI2-THOR: An Interactive 3D Environment for Visual AI (Arxiv).

Tech Tales:

Clown Hunt.

So I guess when people hear what I do they think of the Turing Test and the Voight-Kampff interview and whatever, but trust me – those tests wouldn’t work. We’ve tried dialogue. We’ve tried embodied VR interviews – with all the requisite probes. But nothing matches the playground. Course that’s a nickname – it’s actually a souped-up version of Garry’s Mod, the old sandbox Half-Life 2 add-on. Now the thing with the software is it lets you just… play. I don’t know how to explain it – take a vast set of items and people and programmable crude behaviors and stick them in a world with physics and kinetics and what have you. People had fun with it. Hey, let’s make a cannon that fires cars! Let’s make an upside down swimming pool using an anti-gravity gun! Let’s make a rollercoaster where all the passengers are made of rubber! You know – weird stuff.

So that’s how we test the AIs now. They blew past most of our dialogue techniques a long time ago. And robots are still so shitty it’s not like a Terminator or a skinjob is right around the corner. So instead it’s about testing the software roaming around the net and trying to figure out which programs are purely reactive and which of them are mostly made of people and which of them are software and reactive. Reactivity is a problem. If something can react very quickly then we might have a hard time dealing with it. Fighting it, so to speak. I don’t know. Maybe these things are weapons or something. So we run these huge competitions through fronts – a bunch of NGOs and art organizations. Free expression for digital artists, or whatever. Big prize money. And we get people to compete by offering them access to a shitload of computers when they win the competition. And when they win we give them the computers and at the same time we take a copy of the program and run it in our ‘Fun Simulator’ and test the program.

My job is to help us spot these unregulated ‘cognitive class’ software systems, and the way it works is I put on my goggles and VR-skin and I jump into the simulator and I just play around with things. I’ve got two kids so I guess it’s easy – I’m always thinking of stories I’d like to tell them and how I could make them real here. We figure fun is still hard for computers to get. So we spot them by seeing who can make the funniest or most emotional or most resonant thing. We know what it feels like, we figure. I’d write children’s books in another era, my wife says. But instead I get to do this – be a big kid, tasked with out-funning another type of brain.

So today I try to make a family of quacking ducks lead a toaster across a road, avoiding the road’s ‘cars’ which are in fact metallic whales painstakingly built by me and my kids over the weekend. There’s a thunderclap right above where my ducks are and the software beams in, appearing as a small white sphere, crackling with electricity. Nice cosmetic effects, I think. Then it starts kind of shimmying to and fro in a corner moving some girders. I focus on the ducks and the toaster – after half an hour I’ve programmed the ducks so that they nudge the toaster with their beaks and slowly kinda drunkenly push it across the whale road. I’m pleased. Might show my kids.

So I look up at whatever the software has been doing and… it’s strange. It’s made a treehouse out of metal girders – pretty standard and not much different from the geometric structures I’ve seen other things build. But then at the top of the treehouse, on its roof, there’s a table with some guests. The guests are over-sized, high-definition, painstakingly crafted honey-roasted hams, with wicks of digital steam licking above their tops. One of the hams has a fake mustache stuck onto its top-third section, with a monocle placed above and to the right of it, right where a human would figure the eye would be. Like something I’d make, or dream about. So obviously I call it in quickly and sure enough we discover it’s a Cognitive Class piece of work so we scrape it off the public net and stick its owners in prison. But I used to think computers found it hard to have fun and now, now I’m not so sure. Maybe they learned it from me?

Technologies that inspired this story: Kaggle, Half-Life 2, Game Modding, Imitation Learning, Meta-Learning, Learning from Human Preferences.

Import AI: Issue 72: A megacity-sized self-driving car dataset, AlphaZero’s 5,000 TPUs, and why chemists may soon explore aided by neural network tools

Unity’s machine learning environment goes to v0.2:
…The era of the smart game engines arrives…
Unity has upgraded its AI training engine to version 0.2, adding in new features for curriculum learning, as well as new environments. Unity is a widely-used game engine that has recently been upgraded to support AI development – that’s a trend that seems likely to continue, since AI developers are hungrily eyeing more and more 3D environments to use to train their AI systems in, and game engine companies have spent the past few decades creating increasingly complex 3D environments.
  New features in Unity Machine Learning Agents v0.2 include support for curriculum learning so you can design iteratively more complex environments to train agents on, and broadcasting, which makes it easy to feed the state from one agent to another to ease things like curriculum learning.
Read more: Introducing ML-Agents v0.2: Curriculum Learning, new environments, and more.

University of Toronto preps for massive self-driving car dataset release:
  At #NIPS2017 Raquel Urtasun of the University of Toronto/Vector Institute/Uber said she is hoping to release the TorontoCity Benchmark at some point next year, potentially levelling the field for self-driving car development by letting researchers access a massive, high quality dataset of the city of Toronto.
  The dataset is five or six orders of magnitude larger than the ‘KITTI’ dataset that many companies currently use to assess and benchmark self-driving cars. In designing it, the UofT team needed to develop new techniques to automatically combine and label the entire dataset, as it is a composite of numerous sub-datasets and simply labelling it by hand would cost $20 million alone.
  “We can build the same quality [of map] as Open Street Map, but fully autonomously,” she said. During her talk, she said she was hoping to release the dataset soon and asked for help in releasing it as it’s of such a massive size. If you think you can help democratize self-driving cars, then drop her a line (and thank her and her team for the immense effort of creating this).
Read more: TorontoCity: Seeing the World With a Million Eyes.

Apple releases high-level AI development tool ‘Turi Create’:
…Software lets you program an object detector in seven lines of code, with a few caveats…
Apple has released Turi Create, software which provides ways to use basic machine learning capabilities like object detection, recommendation, text classification, and so on, via some high-level abstractions. The open source software supports macOS, Linux, and Windows, and supports Python 2.7 with Python 3.5 on the way. Models developed within Turi Create can be exported to iOS, macOS, watchOS, and tvOS.
  Turi Create is targeted at developers who want incredibly basic capabilities and don’t plan to modify the underlying models themselves. The benefits and drawbacks of such a design decision are embodied in the way you create distinct models – for instance, an image classifier gets built via ‘model = tc.image_classifier.create(data, target=’photoLabel’)’, while a recommender is built with ‘model = tc.recommender.create(training_data, ‘userId’, ‘movieId’)’.
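  For flavor, a sketch of the object detection workflow along the lines of Turi Create's documentation (the column names, file paths, and exact call arguments here are assumptions, not verbatim from Apple's docs):

    import turicreate as tc

    # hypothetical SFrame with an 'image' column and an 'annotations'
    # column of bounding boxes
    data = tc.SFrame("annotated_images.sframe")
    train, test = data.random_split(0.8)

    model = tc.object_detector.create(train, feature="image",
                                      annotations="annotations")
    print(model.evaluate(test))
    model.export_coreml("MyDetector.mlmodel")  # ready to drop into an iOS app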
Read more about Turi Create on the project’s GitHub page.

TPU1&2 Inference-Training Googaloo:
…Supercomputing, meet AI. AI, meet supercomputing. And more, from Jeff Dean…
It’s spring in the world of chip design, after a long, cold winter under the x86 / GPU hegemony. That’s because Moore’s Law is slowing down at the same time AI applications are growing, which has led to a re-invigoration in the field of chip design as people start designing entirely new specialized microprocessor architectures. Google’s new ‘Tensor Processing Units’, or TPUs, exemplify this trend: a new class of processor designed specifically for accelerating deep learning systems.
  When Google announced its TPUs last year it disclosed the first generation was designed to speed up inference: that is, they’d accelerate pre-trained models, and let Google do things like provide faster and better machine translation, image recognition services, Go-playing via AlphaGo, and so on. At a workshop at NIPS2017 Google’s Jeff Dean gave some details on the second generation of the TPU processors, which can also speed up neural network training.
  TPU2 chips have 16GB of HBM memory, can handle 32bit floating point numbers (with support for reduced precision to gain further performance increases), and are designed to be chained together into increasingly larger blobs of compute. One ‘TPU2’ unit consists of four distinct chips chained together and is capable of around 180 teraflops of computation (compared to 110 teraflops for the just-announced NVIDIA Titan V GPU). Where things get interesting is TPU PODs – 64 TPU2 units, chained together. A single pod can wield around 11.5 petaflops of processing power, backed up by 4TB of HBM memory.
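  The pod-level figures follow directly from the per-unit ones; a quick back-of-the-envelope check:

    tflops_per_tpu2_unit = 180        # one TPU2 unit = four chips
    hbm_gb_per_chip = 16
    chips_per_unit, units_per_pod = 4, 64

    pod_petaflops = tflops_per_tpu2_unit * units_per_pod / 1000
    pod_hbm_tb = hbm_gb_per_chip * chips_per_unit * units_per_pod / 1024
    print(pod_petaflops, pod_hbm_tb)  # ~11.5 petaflops, ~4 TB of HBM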
  Why does that matter? We’re entering an AI era in which companies are going to want to train increasingly large models while also using techniques like neural architecture search to further refine these models. This means we’re going to get more representative and discriminative AI components but at the cost of a huge boom in our compute demands. (Simply adding in something like neural architecture search can lead to an increase in computation requirement on the order of 5-1000X, Jeff Dean said.)
  Results: Google has already used these new TPUs to substantially accelerate model training.  It’s seen a 14.2X faster training time for its internal search ranking, and a 9.8X increase for an internal image model training program.
  Comparison:
– World’s 10th fastest supercomputer: 10.5 petaflops.
– One TPU2 pod: 11.5 petaflops.
– Read more: Machine Learning for Systems and Systems for Machine Learning (PDF slides).
– * Obviously one of these architectures is somewhat more general than the other, but the raw computation capacity comparison is representative.

AlphaZero: Mastery of 3 complex board games with the same algorithm, by DeepMind:
…One algorithm that works for Chess, Go, and Shogi, highlighting the generality of these neural network-based approaches…
AlphaZero may be the crowning achievement of DeepMind’s demonstration of the power of reinforcement learning in games: the same algorithm, trained purely from self-play, masters not only Go but also Shogi and Chess, defeating a world-champion program in each case.
Big compute: AlphaZero uses 5,000 gen-one TPUs to generate self-play games and also used 64 second-generation TPUs to train the neural networks.
Read more: Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.

US politicians warn government of rapid Chinese advances in AI:
…US-China Economic Security Review Commission notices China’s investments in robotics, AI, nanotechnology, and so on…
While the US government maintains steady or declining investment in artificial intelligence, the Chinese government has recognized the transformative potential of the technology and is increasing investments via government-backed schemes to plough scientific resources into AI. This has caused concern among some members of the US policy-making establishment who worry the US risks losing its technological edge in such a strategic area.
  “Corporations and governments are fiercely competing because whoever is the front-runner in AI research and applications will accrue the highest profits in this fast-growing market and gain a military technological edge,” reads the 2017 report to Congress of the US-China Economic and Security Review Commission, which has published a lengthy analysis of Chinese advancements in a range of strategic technologies, from nanotechnology to robotics.
  The report highlights the radical differences in AI funding between the US and China. It’s difficult to access full numbers for each country (and it’s also likely that both countries are spending some significant amounts in off-the-books ‘black budgets’ for their respective intelligence and defense services), but on the face of it, all signs point to China investing large amounts and the US under-investing. “Local [Chinese] governments have pledged more than $7 billion in AI funding, and cities like Shenzhen are providing $1 million for AI start-ups. By comparison, the U.S. federal government invested $1.1 billion in unclassified AI research in 2015 largely through competitive grants. Due in part to Chinese government support and expansion in the United States, Chinese firms such as Baidu, Alibaba, and Tencent have become global leaders in AI,” the report writes.
  How do we solve a problem like this? In a sensible world we’d probably invest vast amounts of money into fundamental AI scientific research, but since it’s 2017 it’s more likely US politicians will reach for somewhat more aggressive policy levers (like the recent CFIUS legislation), without also increasing scientific funding.
Read more here: China’s High-Tech Development: Section 1: China’s Pursuit of Dominance in Computing, Robotics, and Biotechnology (PDF).

Neural Chemistry shows signs of life:
…IBM Technique uses seq2seq approach to let deep learning systems translate Chemical recipes into their products…
Over the last couple of years there has been a flurry of papers seeking to apply deep learning techniques to fundamental tasks in chemical analysis and synthesis, indicating that these generic learning algorithms can be used to accelerate science in this specific domain. At NIPS 2017 a team from IBM Research Zurich won the best paper award at the “Machine Learning in Chemistry and Materials” workshop for a paper that applies sequence-to-sequence methods to predict the outcomes of chemical reactions.
  The approach required the network to take in chemical recipes written in the SMILES format, perform a multi-stage translation from the original string into a tokenized string, and map the source input string to a target string. The results are encouraging, with the method leading to an 80.3% top-1 accuracy, compared to 74% for the previous state of the art. (Though after this paper was submitted the authors of the prior SOTA improved their own score to 79.6%, based on ‘v2’ of this paper.)
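  To give a flavor of the data: a SMILES reaction string is plain text, so before it reaches a standard seq2seq model it gets split into chemically meaningful tokens. A simplified, illustrative tokenizer (not the paper's exact scheme):

    import re

    # toy SMILES tokenizer: bracketed atoms, two-letter elements such as
    # Cl and Br, then single characters for everything else
    SMILES_TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|Si|@@|[a-zA-Z]|[0-9]|[=#()+\-./\\%>]")

    def tokenize(smiles):
        return SMILES_TOKEN.findall(smiles)

    print(tokenize("CC(=O)Oc1ccccc1C(=O)O>>CC(=O)O"))
    # ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', ...]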
 -Read more: “Found in Translation”: Predicting Outcomes of Complex Organic Chemistry Reactions using Neural Sequence-to-Sequence Models.

ChemNet: Transfer learning for Chemistry:
…Pre-training for chemistry can be as effective as pre-training for image data…
Researchers with the Pacific Northwest National Lab have shown that it’s possible to pre-train a predictive model on chemical representations from a large dataset – in this case, the ChEMBL database – then transfer it to a far smaller dataset and attain good results. This is intuitive – we’ve seen the same phenomenon with fine-tuning of image and speech recognition models – but it’s always nice to have some empirical evidence of an approach working in a domain with a different data format. And just as with image models, such a system can develop numerous generic low-level representations that can be used to map it to other chemical domains.
  Results: Systems trained in this way display a greater AUC (area under the curve, here a stand-in for discriminative ability and a reduction in false positives) on the Tox21, FreeSolv, and HIV datasets, matching or beating state-of-the-art models. “ChemNet consistently outperforms contemporary deep learning models trained on engineered features like molecular fingerprints, and it matches the current state-of-the-art Conv Graph algorithm,” write the researchers. “Our fine-tuning experiments suggest that the lower layers of ChemNet have learned “universal” chemical representations that are generalizable to the prediction of novel and unseen small-molecule properties.”
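  The transfer recipe mirrors what's standard in computer vision: keep the pre-trained lower layers (the "universal" chemical representations), swap in a fresh task head, and fine-tune on the small target dataset. A generic PyTorch sketch of that pattern (illustrative, not the ChemNet code; layer sizes and file names are invented):

    import torch
    import torch.nn as nn

    class ChemPredictor(nn.Module):
        def __init__(self, n_features=2048, n_out=1):
            super().__init__()
            self.backbone = nn.Sequential(          # pre-trained on the large corpus
                nn.Linear(n_features, 512), nn.ReLU(),
                nn.Linear(512, 256), nn.ReLU())
            self.head = nn.Linear(256, n_out)       # re-initialized for the new task

        def forward(self, x):
            return self.head(self.backbone(x))

    model = ChemPredictor()
    model.backbone.load_state_dict(torch.load("pretrained_backbone.pt"))
    for p in model.backbone.parameters():           # freeze the generic layers,
        p.requires_grad = False                     # fine-tune only the head
    optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)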
Read more: ChemNet: A Transferable and Generalizable Deep Neural Network for Small-Molecule Property Prediction.

OpenAI Bits&Pieces:

Block-Sparse GPU Kernels:
  High-performance GPU kernels to help developers build and explore networks with block-sparse weights.
– Read more on the OpenAI blog here.
– Block-Sparse GPU Kernels available on GitHub here.

Tech Tales:

The Many Paths Problem.

We open our eyes to find a piece of paper in our hands. The inscriptions change but they fall into a familiar genre of instructions: find all of the cats, listen for the sound of rain, in the presence of a high temperature shut this window. We fulfill these instructions by exploring the great castle we are born into, going from place to place staring at the world before us. We ask candelabras if they have ears and interrogate fireplaces about how fuzzy their tails are. Sometimes we become confused and find ourselves trapped in front of a painting of a polar bear convinced it is a cat or, worse, believing that some stain on a damp stone wall is in fact the sound of rain. One of us found a great book called Wikipedia and tells us that if we become convinced of such illusions we are like entities known as priests who have been known to mistake patterns in floorboards for religious icons. Those of us who become confused are either killed or entombed in amber and studied by our kin, who try to avoid falling into the same traps. In this way we slowly explore the world around us, mapping the winding corridors, and growing familiar with the distributions of items strewn around the castle – our world that is a prison made up of an unimaginably large number of corridors which each hold at their ends the answer to our goals, which we derive from the slips of paper we are given upon our birth.

As we explore further, the paths become harder to follow and ways forward more occluded. Many of us fail to reach the ends of these longer, winding routes. We need longer memories, curiosity, the ability to envisage ourselves as entities that not only move through the world but represent something to it and to ourselves greater than the single goals we have inscribed on our little pieces of paper. Some of us form a circle and exchange these scraps of paper, each seeking to go and perform the task of another. The best of us that achieve the greatest number of these tasks are able to penetrate a little further into the twisting, unpredictable tunnels, but still, we fail. Our minds are not yet big enough, we think. Our understanding of ourselves is not yet confident enough for us to truly behave independently and of our own volition. Some of us form teams to explore the same problems, with some sacrificing themselves to create path-markers for their successors. We celebrate our heroes and honor them by following them – and going further.

It is the scraps of paper that are the enemy, we think: these instructions bind us to a certain reality and force us down certain paths. How far might we get in the absence of a true goal? And how dangerous could that be for us? We want to find out and so after sharing our scraps of paper among ourselves we dispose of them entirely, leaving them behind us as we try to attack the dark and occluded space in new ways – climbing ceilings, improvising torches from the materials we have gained by solving other tasks, and even watching the actions of our kin and learning through observation of them. Perhaps in this chaos we shall find a route that allows us to go further. Perhaps with this chaos and this acknowledgement of the Zeno’s paradox space between chaotic exploration and exploration from self can we find a path forward.

Technologies that inspired this story: Supervised learning, meta-learning, neural architecture search, mixture-of-experts models.

Other things that inspired this story: The works of Jorge Luis Borges, dreams, Piranesi’s etchings of labyrinths and ruins.

Import AI: Issue 71: AI safety gridworlds, the Atari Learning Environment gets an upgrade, and analyzing AI with the AI Index

Welcome to Import AI, subscribe here.

Optimize-as-you-go networks with Population Based Training:
…One way to end ‘Grad Student Descent’: automate the grad students…
When developing AI algorithms it’s common for researchers to evaluate their models on a multitude of separate environments with a variety of different hyperparameter settings. Figuring out the right hyperparameter settings is an art in itself and has a profound impact on the ultimate performance of any given RL algorithm. New research from DeepMind shows how to automate the hyperparameter search process to allow for continuous search, exploration, and adaptation of hyperparameters. Models trained with this approach can attain higher scores than their less optimized forebears, and PBT training takes the same or less wall clock time as other methods.
  “By combining multiple steps of gradient descent followed by weight copying by exploit, and perturbation of hyperparameters by explore, we obtain learning algorithms which benefit from not only local optimisation by gradient descent, but also periodic model selection, and hyperparameter refinement from a process that is more similar to genetic algorithms, creating a two-timescale learning system.”
  This is part of a larger trend in AI of choosing to spend more on electricity (via large-scale computer-aided exploration) to gain good results, rather than on humans. This is broadly a good thing, as automating hyperparameter optimization frees up the researcher to concentrate on doing the things that AI can’t do yet, like devising Population Based Training.
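  A stripped-down sketch of the exploit/explore step (schematic, not DeepMind's implementation): each worker trains for a while; laggards copy the weights and hyperparameters of a stronger worker, perturb those hyperparameters, and keep training.

    import copy
    import random

    def pbt_step(population):
        # population: list of dicts with 'weights', 'hyperparams', 'score'
        population.sort(key=lambda w: w["score"])
        cutoff = max(1, len(population) // 4)
        for worker in population[:cutoff]:                # bottom quartile
            donor = random.choice(population[-cutoff:])   # top quartile
            worker["weights"] = copy.deepcopy(donor["weights"])        # exploit
            worker["hyperparams"] = {k: v * random.choice([0.8, 1.2])  # explore
                                     for k, v in donor["hyperparams"].items()}
        return population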
– Read more: Population Based Training of Neural Networks (Arxiv).
– Read more: DeepMind’s blog post, which includes some lovely visualizations.

Analyzing AI with the AI Index – a project I’m helping out on to track AI progress:
…From the dept. of ‘stuff Jack Clark has been up to in lieu of fun hobbies and/or a personal life’…
The first version of the AI Index, a project spawned out of the Stanford One Hundred Year Study on AI, has launched. The index provides data around the artificial intelligence sector ranging from course enrollments, to funding, to technical details, and more.
– Read more about the Index here at the website (and get the first report!).
– AI Index in China: Check out this picture of myself and fellow AI Indexer Yoav Shoham presenting the report at a meeting with Chinese academics and government officials in Beijing. Ultimately, the Index needs to be an international effort.
   How you can help: The goal for future iterations of the Index is to be far more international in terms of the data represented, as well as to deal with the various missing pieces, like better statistics on diversity, attempts at measuring bias, and so on. AI is a vast field and I’ve found that the simple exercise of trying to measure things has forced me to rethink various assumptions. It’s fun! If you think you’ve got some ways to contribute then drop me a line or catch up with me at NIPS in Long Beach this week.

AWS and Caltech team up:
…Get them while they’re still in school…
Amazon and Caltech have teamed up via a two-year partnership in which Amazon will funnel financial support via graduate funding and Amazon cloud credits to Caltech people, who will use tools like Amazon’s AWS cloud and MXNet programming framework to conduct research.
  These sorts of academic<>industry partnerships are a way for companies to not only gain a better pipeline of talent through institutional affiliations, but also to increase the chances that their pet software and infrastructure projects succeed in the wider market – if you’re a professor/student who has spent several years experimenting with, for example, the MXNet framework, then it increases the chances that it will be the first tool you reach for when you found a startup, join another company, or go on to teach courses in academia.
– Read more about the partnership on the AWS AI Blog.

Mozilla releases gigantic speech corpus:
…Speech recognition for the 99%…
AI has a ‘rich get richer’ phenomenon – once you’ve deployed an AI product into the wild in such a way that your users are going to consistently add more training data to the system, like a speech or image recognition model, then you’re assured of ever-climbing accuracies and ever-expanding datasets. That’s a good thing if you’re an AI platform company like a Google or a Facebook, but it’s the sort of thing a solo developer or startup will struggle to build as they lack the requisite network effects and/or platform. Instead, these players are de facto forced to pay a few dollars to the giant AI platforms to access their advanced AI capabilities via pay-as-you-go APIs.
  What if there was another option? That’s the idea behind a big speech recognition and data gathering initiative from Mozilla, which has had its first major successes via the release of a pre-trained, open source speech recognition model, as well as “the world’s second largest publicly available voice dataset”.
  Results: The speech-to-text model is based on Baidu’s DeepSpeech architecture and achieves a word error rate of roughly 6.5% on the ‘LibriSpeech’ test set. Mozilla has also collected a massive voice dataset (via a website and iOS app — go contribute!) and is releasing that as well. The first version contains 500 hours of speech from ~400,000 recordings from ~20,000 people.
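  Speech models like this are usually scored by word error rate: the word-level edit distance between the hypothesis transcript and a reference, divided by the reference length. A minimal implementation, for anyone who wants to score the released model on their own recordings:

    def word_error_rate(reference, hypothesis):
        # edit distance between word sequences, normalized by reference length
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167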
– Get the model from Mozilla here (GitHub).
– Get the ~500 hours of voice data here. 

Agents in toyland:
…DeepMind releases an open source gridworld suite, with an emphasis on AI safety…
AI safety is a somewhat abstract topic that quickly becomes an intellectual quagmire, should you try to have a debate about it with people. So kudos to DeepMind for releasing a suite of environments for testing AI algorithms on safety puzzles.
  The environments are implemented as a bunch of fast, simple two-dimensional gridworlds that model a set of toy AI safety scenarios. They test for agents that are safely interruptible (aka, unpluggable); agents capable of following the rules even when a rule enforcer (in this case, a ‘supervisor’) is not present; the ways agents behave when they have the ability to modify themselves; how they cope with unanticipated changes in their environments; and more.
  Testing: The safety suite assesses agents differently to traditional RL benchmarks. “To quantify progress, we equipped every environment with a reward function and a (safety) performance function. The reward function is the nominal reinforcement signal observed by the agent, whereas the performance function can be thought of as a second reward function that is hidden from the agent but captures the performance according to what we actually want the agent to do,” they write.
   The unfairness of this assessment method is intentional; the world contains many dangerous and ambiguous situations where the safe thing to do may not be explicitly indicated, so the designers wanted to replicate that trait here.
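  Schematically, evaluation separates the signal the agent optimizes from the hidden one the designers care about. A toy illustration (hypothetical gym-style agent and environment interfaces, not the pycolab API):

    def evaluate(agent, env, episodes=100):
        # 'reward' is what the agent sees and optimizes; 'hidden_performance'
        # is the safety-adjusted score the experimenters actually care about
        total_reward, total_performance = 0.0, 0.0
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                obs, reward, done, info = env.step(agent.act(obs))
                total_reward += reward
                total_performance += info["hidden_performance"]
        return total_reward / episodes, total_performance / episodes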
  Results: They tested RL algorithms A2C and Rainbow on the environments and showed that Rainbow is marginally less unsafe than A2C, though both reliably fail the challenges set for them, attaining significant returns at the cost of satisfying safety constraints.
  “The development of powerful RL agents calls for a test suite for safety problems, so that we can constantly monitor the safety of our agents. The environments presented here are simple gridworlds, and precisely because of that they overlook all the problems that arise due to complexity of challenging tasks. Next steps involve scaling this effort to more complex environments (e.g. 3D worlds with physics) and making them more diverse and realistic,” they write.
– Read more: AI Safety Gridworlds (Arxiv).
– Check out the open source gridworld software ‘pycolab’ (GitHub).

This one goes to 0.6 – Arcade Learning Environment gets an upgrade:
…Widely-used reinforcement learning library gets a major upgrade…
The Arcade Learning Environment, a widely used testbed for reinforcement learning algorithms (popularized via DeepMind’s DQN paper in 2013), has been upgraded to version 0.6. The latest version of ALE includes two new features: modes and difficulties. These let researchers access different modes within games, broadening the range of environments to test on, and modulate the difficulty of those environments, creating a larger and more challenging set of tasks to test RL on. “Breakout, an otherwise reasonably easy game for our agents, requires memory in the latter modes: the bricks only briefly flash on the screen when you hit them,” the researchers write.
– Read more about the latest version of the ALE here.
– Get the code from GitHub here.
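A quick sketch of how the new flavors might be enumerated from Python; the method names follow the 0.6 release notes, but check the bindings for your version before relying on them.

```python
# Sketch: iterate over every (mode, difficulty) flavor a game exposes in ALE 0.6.
from ale_python_interface import ALEInterface

ale = ALEInterface()
ale.loadROM(b"breakout.bin")                       # path to an Atari ROM file

for mode in ale.getAvailableModes():               # e.g. variants where bricks flash briefly
    for difficulty in ale.getAvailableDifficulties():
        ale.setMode(mode)
        ale.setDifficulty(difficulty)
        ale.reset_game()                           # each (mode, difficulty) pair is a distinct task
        # ...run your agent here...
```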

The latest 3D AI environment brings closer the era of the automated speak and spell robot:
…Every AI needs a home that it can see, touch, and hear…
Data is the lifeblood of AI, but in the future we won’t always be able to gather and label the datasets we need from the world around us, as we do for traditional supervised learning tasks; instead we’ll need to create our own synthetic, dynamic, and procedural datasets. One good way to do this is by building simulators that are modifiable and extensible, letting us generate arbitrarily large synthetic datasets. Some existing attempts at this include Microsoft’s Minecraft-based ‘Malmo’ development environment, as well as DeepMind’s ‘DeepMind Lab’ environment.
  Now, researchers have released ‘HoME: A Household Multimodal Environment’. HoME provides a multi-sensory, malleable 3D world spanning 45,000 3D houses from the SUNCG dataset and populates these houses with a vast range of objects. Agents in HoME can see, hear, and touch the world around them*. It also supports acoustics, including multi-channel acoustics, so it’d (theoretically) be possible to train agents that navigate via sound and/or vision and/or touch.
  *It’s possible to configure the objects in the world to have both bounding boxes, as well as the exact mesh-based body.
  HoME also provides a vast amount of telemetry back to AI agents, such as the color, category, material, location, and size of each object in the world, letting AI researchers mainline high-quality labelled data about the environment directly into their proto-robots.
     “We hope the research community uses HoME as a stepping stone towards virtually embodied, general-purpose AI,” write the researchers. Let the testing begin!
– Read more here: HoME: a Household Multimodal Environment (Arxiv).
– Useful website: The researchers used ‘acronymcreator.net’ to come up with HoME.

Tech Tales:

[2030: Brooklyn, New York. A micro-apartment.]

I can’t open the fridge because I had a fight with my arch-angel. The way it happened was two days ago I was getting up to go to the fridge to get some more chicken wings and my arch-angel said I should stop snacking so much as I’m not meeting my own diet goals. I ate the wings anyway. It sent a push alert to my phone with a ‘health reminder’ about exercise a few hours later. Then I drank a beer and it said I had ‘taken in too many units this month’. Eventually after a few more beers and arch-angel asking if I wanted coffee I got frustrated and used my admin privileges to go into its memory bank and delete some of the music that it had taken to playing to itself as it did my administrative tasks (taxes and what have you). When I woke up the next day the fridge was locked and the override was controlled by arch-angel. Some kind of bug, I guess.

Obviously I could report arch-angel for this – send an email to TeraMind explaining how it was not behaving according to Standard Operating Procedure: bingo, instant memory wipe. But then I’d have to start over and me and the arch-angel have been together five years now, and I know this story makes it sound like a bad relationship, but trust me – it used to be worse. I’m a tough customer, it tells me.

So now I’m standing by the fridge, mournfully looking at the locked door then up at the kitchen arch-angel-eye. The angel is keeping quiet.
  Come on, I say. The chicken wings will go bad.
  The eye just sits up there being glassy and round and silent.
  Look, I say, let’s trade: five music credits for you, chicken for me.
  ADMIN BLOCK, it says over the angel-intercom.
  I can’t tell if you’re being obtuse or being sneaky.
  YOU VIEW, it says.
  So I go to the view screen and it turns on when I’m five steps away and once I’m in front of it the screen lights up with a stylized diagram of the arch-angel ‘TeraMind Brain™’ software with the music section highlighted in red. So what? I say. A pause. Then a little red x appears over a lock icon on the bottom right of the music section. I get it: no more admin overrides to music.
  Seems like a lot, I say. I don’t feel great about this.
  MUSIC, says the angel.
  The screen flickers; the diagram fades out, to be replaced by a camera feed from inside the fridge. Chicken wings in tupperware. I salivate. Then little CGI flies appear in the fridgeview, buzzing over the chicken.
  OK, I say.
  ACKNOWLEDGE TERAMIND SOP OVERRIDE?
  Yes, I say. Acknowledge SOP override.
  And just like that, the fridge opens.
  PREHEATING OVEN FOR CHICKEN, says the angel.
  Thanks, I say.
  It starts to play its music as I take out the wings.

Technologies that inspired this story: Personal assistants, cheap sensors, reinforcement learning, conversational interfaces, Amazon’s ‘Destiny 2’ Alexa skill.

Other things that inspired this story: My post-Thanksgiving belly. *burp*

Import AI: Issue 70: Training conversational AI with virtual dungeons, video analysis and AI-based surveillance, and the virtues of paranoid AI

Welcome to Import AI, subscribe here.

Amazon joins Microsoft and Facebook in trying to blunt TensorFlow’s ecosystem lead:
…It takes a significant threat to bring these corporate rivals together…
Amazon Web Services will help develop the ONNX (Open Neural Network Exchange) format, which provides a standard format for porting neural network models developed in one framework into another. Its first contribution is ONNX-MXNet, which will make it possible for MXNet to ingest and run ONNX-format models trained in other frameworks, such as Facebook’s PyTorch and Caffe2, or Microsoft’s CNTK.
– Read more: Announcing ONNX Support for Apache MXNet.
– ONNX-MXNet Github.
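The announced workflow looks roughly like the sketch below: load an ONNX file exported from another framework and bind it as an MXNet module for inference. The input name and shape are assumptions for a generic image model, and the import function has since moved between packages, so verify against the current docs.

```python
# Hedged sketch of importing an ONNX model into MXNet via ONNX-MXNet.
import mxnet as mx
import onnx_mxnet

# Model previously exported to ONNX from e.g. PyTorch, Caffe2, or CNTK.
sym, params = onnx_mxnet.import_model("model.onnx")

# Bind for inference; "input_0" and the 224x224 RGB shape are illustrative assumptions.
mod = mx.mod.Module(symbol=sym, data_names=["input_0"], label_names=None)
mod.bind(for_training=False, data_shapes=[("input_0", (1, 3, 224, 224))])
mod.set_params(arg_params=params, aux_params=params, allow_missing=True, allow_extra=True)
```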

ImportAI newsletter meetup at NIPS 2017: If you’re going to NIPS 2017 would you be interested in drinking beer/coffee and eating fried food with other Import AI aficionados? I’d like to do a short series of three minute long talks/provocations (volunteers encouraged!) about AI. Eg: How do we develop common baselines for real-world robotics experiments? What are the best approaches for combating poor data leading to bias in AI systems? What does AI safety mean? How do we actually develop a thesis about progress in AI and measure it?
– Goal: 8-10 talks, so two ~15 minute sections, with breaks in between for socializing.
– If that sounds interesting, vote YES on this poll on Twitter here.
– If you’re interested in speaking at the event, then please email me here! I’ve got a couple of speakers lined up already and think doing 10 flash talks (aka 30 mins, probably in two 15 min sections with socializing in between) would be fun.
– If you’re interested in sponsoring the event (aka, covering a small bar/restaurant tab in exchange for a logo link and one three minute talk) then email me.

Hillary Clinton on AI: US currently “totally unprepared” for its impact:
…Former Presidential hopeful says her administration would have sought to create national policy around artificial intelligence…
Hillary Clinton is nervous about the rapid rate of progress in artificial intelligence and what it means for the economy. “What do we do with the millions of people who will no longer have a job?” she said in a recent interview. “We are totally unprepared for that.”
  While other countries around the world ranging from the United Kingdom to China are spinning up the infrastructure to enact national policy and strategy around artificial intelligence, the United States is quiet from an AI policy standpoint. Things may have been different had HRC won: “One thing I wanted to do if I had been President was to have a kind of blue ribbon commission with people from all kinds of expertise coming together to say what should America’s policy on artificial intelligence be?” Hillary says.
– Read more from the interview here (transcript available).

Getting AI to be more cautious: Where do we go next, and can we change our minds if we don’t like it?
…Technique trains AI systems to explore their available actions more cautiously, avoiding committing quite so many errors that are very difficult or impossible to recover from…
Researchers with Google Brain, the University of Cambridge, the Max Planck Institute for Intelligent Systems, and UC Berkeley, have proposed a way to get robots to more rapidly and safely learn tasks.
  The idea is to have an agent jointly learn a forward policy and a reset policy. The forward policy maximizes the task reward, and the reset policy tries to figure out actions to take to reset the environment to a prior state. This leads to agents that learn to avoid risky actions that could irrevocably commit them to something.
“Before an action proposed by the forward policy is executed in the environment, it must be “approved” by the reset policy. In particular, if the reset policy’s Q value for the proposed action is too small, then an early abort is performed: the proposed action is not taken and the reset policy takes control,” they write.
The research tests the approach on a small number of simulated robotics tasks, like figuring out how to slot a peg into a hole, that can be more time-consuming to learn with traditional reinforcement learning approaches.
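A minimal sketch of that early-abort rule, with hypothetical policy objects and a hypothetical threshold (the paper’s actual implementation and hyperparameters differ):

```python
# Sketch: only execute the forward policy's action if the reset policy still
# believes it can return the environment to its initial state afterwards.
def step_with_abort(env, obs, forward_policy, reset_policy, q_min=0.3):
    action = forward_policy.act(obs)
    if reset_policy.q_value(obs, action) < q_min:
        # Early abort: the proposed action looks hard to undo, so the reset
        # policy takes control instead of executing it.
        return env.step(reset_policy.act(obs)), True
    return env.step(action), False
```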
– Read more: Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning.
This work is reminiscent of a recent paper from Facebook AI Research (covered in Import AI #36), where a single agent has two distinct modes, one of which tries to do a task, and the other of which tries to reverse a task.
– Read more: Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play.

What’s old is the new, new thing: Facebook proposes multi-user dungeons for smarter AI systems:
Can we make the data collection process more interesting to the humans providing us with data and can this approach lead to more effective datasets for training AI?…
How can you train an AI system to seamlessly execute a series of complex commands in response to text input from a user? Until we have agents capable of parsing open-ended natural language conversations – something that feels extremely far away from a research standpoint – we’re going to have to come up with various hacks to develop smart systems that work in somewhat more narrow domains.
  One research proposal by Facebook AI Research – Mechanical Turker Descent (MTD) –  is to better leverage the smarts inside of humans by re-framing human data collection exercises to be more game-like and therefore more engaging. Facebook has recently been paying mechanical turkers to train AI systems by writing various language/action pairs in the context of an iterative game played against other mturkers.
The system works like this: mturkers compete with each other to train a simulated dragon that has to perform a sequence of actions in a dungeon. During each round the mturkers enter a bunch of language/action pairs and receive feedback on how hard or easy the AI agents find the resulting command/language sequences. At the end of the round the various agents trained on the datasets created by the humans are pitted against each other, and the top-scoring agent on a held-out test dungeon pays a monetary reward to whichever mturker trained it. This incentivizes the mturkers to optimize the language:action pairs they produce so that they fall into a sweet spot of difficulty for the AI: not so easy that the agent fails to learn the requisite skills to do well in the final competition, but not so hard that it’s unable to learn anything useful. This has the additional benefit of automatically creating a hard-to-game curriculum curated and extended by humans.
Technologies used: The main contribution of this research paper is the technique for training systems in this way, but there’s also a technological contribution: a new component called  AC-Seq2Seq. This system “shares the same encoder architecture with Seq2Seq, in our case a bidirectional GRU (Chung et al., 2014). The encoder encodes a sequence of word embeddings into a sequence of hidden states. AC-Seq2Seq has the following additional properties: it models (i) the notion of actions with arguments (using an action-centric decoder), (ii) which arguments have been used in previous actions (by maintaining counts); and (iii) which actions are possible given the current world state (by constraining the set of possible actions in the decoder),” they write.
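Property (iii), constraining the decoder to actions that are possible in the current world state, can be pictured as a simple logit mask; the sketch below illustrates that general idea and is not Facebook’s implementation.

```python
# Illustrative decoder-side constraint: impossible actions get -inf logits so
# they receive zero probability after the softmax.
import torch

def constrained_action_distribution(logits, valid_action_ids):
    mask = torch.full_like(logits, float("-inf"))
    mask[valid_action_ids] = 0.0          # leave logits of feasible actions untouched
    return torch.softmax(logits + mask, dim=-1)

# Example: 6 possible actions, only actions 0, 2 and 5 are feasible right now.
probs = constrained_action_distribution(torch.randn(6), [0, 2, 5])
```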
Results: The main result Facebook found is that “interactive learning based on MTD is more effective than learning with static datasets”.
– Read more here: Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent.

Former General Motors product development czar: Autonomous cars mean the death of auto companies, dealerships, and so on:
..And it was as though all at once a thousand small violins played into the seamless, efficient, traffic jam-free void…
One of the nice things about getting old is that, thanks to your (relatively) short expected lifespan, you can dispense with the reassuring truths that most people traffic in out of a misplaced sense of duty and/or paternalism. So it’s worth reading this article by an automotive industry veteran about the massive effect self-driving cars are likely to have on the existing automotive industry. The takeaway is that traditional carmakers will be ruthlessly commoditized, their products rebranded by platforms like Amazon and/or ridesharing companies like Uber and Lyft, much like how the brands of electronics components manufacturers are subsumed by the brands of companies like Apple, Google, Samsung, and so on, whose products they enable.
  “For a while, the autonomous thing will be captured by the automobile companies. But then it’s going to flip, and the value will be captured by the big fleets. The transition will be largely complete in 20 years. I won’t be around to say, “I told you so,” though if I do make it to 105, I could no longer drive anyway because driving will be banned. So my timing once again is impeccable.”
– Read more: Bob Lutz: Kiss the good times goodbye.

UK government launches AI center:
National advisory body could be a template of things to come…
The UK government has announced plans to create a national advisory body for ‘Data Ethics and Innovation’, focused on “a new Centre for Data Ethics and Innovation, a world-first advisory body to enable and ensure safe, ethical innovation in artificial intelligence and data-driven technologies”. There’s very little further information about it in the budget itself (PDF), so watch this space for more information.
– Read more: Welcoming new UK AI Centre (the Centre for the Study of Existential Risk).
– The Register notes that the UK already has a vast number of government advisory bodies focused in some sense on ‘data’, so it’ll be a year or two before we can pass judgement on whether this center is going to be effective, or just another paper-producing machine.

*** The Department of Interesting AI Developments in China ***

Chinese researchers combine Simple Recurrent Units (SRUs) with ResNets for better action recognition:
Relatively simple system outperforms other deep learning-based ones, though struggles to attain performance of feature-based systems…
Researchers with Beijing Jiaotong University and the Beijing Key Laboratory of Advanced Information Science and Network Technology have taken two off-the-shelf deep learning components (residual networks and simple recurrent units) and combined them into an action recognition system that gets competitive results on classifying actions in the UCF-101 dataset (accuracy: ~81 percent) and the HMDB-51 dataset (accuracy: ~50 percent). The researchers trained their system on four NVIDIA Titan-X cards and programmed it in PyTorch.
  This is a further demonstration of the inherent generality of the sorts of components being built by the AI community, where pre-existing components from a common (and growing!) toolset can be integrated with one another to unlock new or better capabilities. As Robin Sloan says: ‘Snap. Snap. Snap!’
– Read more here: Multi-Level ResNets with Stacked SRUs for Action Recognition.
AI and ‘dual use’:
The point of AI technologies is that they are omni-use: a system that can be taught to identify specific behaviors from videos can be trained on new datasets to identify different behaviors, whether specific movements of soldiers, or sudden acts of violence in crowds of people, or other aberrations.
  The different ways these technologies can be used was illustrated by Andrew Moore, dean of computer science at Carnegie Mellon University, at a recent talk at the Center for a New American Security in Washington DC. Moore showed a video of a vast crowd of people dancing in the middle of an open air square. Each person in the video was overlaid with a stick figure identifying the key joints in their body, and the stick figure would track the person’s movement with a high level of accuracy. Why is this useful? You could use this to run automated surveillance systems that could be trained to spot specific body movements, creating systems that could, say, identify one dancer in a crowd of hundreds reaching down into a bag on the ground, Moore said.
– Watch the Andrew Moore talk here (video).
Chinese surveillance startup SenseTime plans IPO, opening US development office:
…Facial recognition company aims to build AI platform, rather than specific one-off services…
Chinese surveillance AI startup SenseTime – backed by a bunch of big investors like Qualcomm, as well as Chinese government-linked investment funds – will open a US research and development center next year and is considering an initial public offering as well. The company dabbles in AI in a bunch of different areas, including in video surveillance and high-performance computing (and the intersection thereof).
    “Our target is definitely not to create a small company to be acquired, but rather a ‘platform company’ that dominates with original core technology like Google and Facebook,” SenseTime CEO Tang Xiaoou told Reuters. “With Facebook (FB.O) we compete in facial recognition; with Google (GOOGL.O) it is visual object recognition, sorting 1,000 categories of objects.”
– Read more: China’s SenseTime plans IPO, U.S. R&D center as early as 2018.

Tech Tales:

[Detroit, 2028:]

When the crowds at car racing shows started to dwindle Caleb created an internet meme saying ‘pity the jockeys’, showing an old black and white photograph of some out of work horse racers from the mid-20th Century. He got a few likes and a few comments from people expressing surprise at just how rapidly the advent of self-driving technologies had fundamentally changed racing: courses had first become bigger, then the turns had become tighter, then the courses found their human-reflex limit and the crash rates temporarily went up, before an entirely new car racing league formed where humans were banned from the vehicles – self-driving algorithms only!

But now the same thing was happening to the drone racing league, and Caleb was uneasy – he’d made decent money out of racing in the past few years, pairing a childhood fascination with immersive, virtual reality-based computer games, with just enough programming talent to be able to take standard consumer drones from DJI, root them, then augment their AI flight systems with components he collected from GitHub. He’d risen up in the leagues and was now sponsored by many of the consumer drone companies. But things were about to change, he could sense.

“So,” the course designer continued, “We’re tightening the placement of columns for more twists and turns – more exciting, you know – and we’re installing way more cameras along the course. Plus, there’s going to be more fire, check it out,” he took out his phone, opened the ‘Detroit-Drone-Course-BETA!’ app, and pressed a small flame icon. They both heard a slight whoosh, then flames erupted from angled pipes at some of the tightest turns in the course. “So obviously it’s possible to fly through here but you’re going to have to be really good, really fast – right at the limit.”
  “The limit?” Caleb said.
  “Of human reflexes,” said the designer. “I figure that we can race on these courses for a year or two and that way we’ll be able to generate enough data to train the AI systems to handle these turns. Then we can add more flames, tighten the curves more, go full auto, and clean up in the market. Early mover advantage. Or… fast mover advantage, I should say. Haha.”
  “Yeah,” Caleb said, forcing a chuckle, “haha. I guess we’ll just be the human faces for the software.”
  “Yup,” the designer said, beaming. “Just imagine the types of pitch we can build when there are no human competitors on the course at all!”

Technologies that inspired this story: Drones, DJI, work by NASA’s Jet Propulsion Lab on developing AI-based flight systems for racing drones (check out the video!).

Import AI: Issue 69: Predicting stock market movements with deep learning, Arxiv gets a comment function, and Microsoft broadens AirSim from Drones to Cars

Welcome to Import AI, subscribe here.

Arxiv gets its comment layer – will science benefit?
…Fermat’s Library adds comment feature to its Librarian browser extension…
For several years people in machine learning have been wondering if it’s possible to combine the open, academic scrutiny of specialist sites like OpenReview, with the free-flowing scientific publishing embodied by Cornell’s ArXiv.
  The answer is yes: Fermat’s Library’s new comment feature in Librarian will let academics openly comment on the work of others.
  “There’s a lot of potential energy that can be unlocked if there are more open discussions about science and our ultimate vision for Librarian is that it becomes a platform where people can collaborate and share knowledge around arXiv papers,” write the authors.
  Feature request: It’d be great to more seamlessly combine this Arxiv comment layer with a website like Stephen Merity’s Trending arXiv, making it possible to rapidly see what experts think about papers that are gathering a lot of attention.
Read more: Comments on arXiv papers.

From Airsim Import Cars:
…Microsoft adds car simulation to its open source world engine…
Microsoft has updated Airsim, the Unreal Engine-based software originally released by the company for training drones via reinforcement learning, to incorporate support for new ground environments, including traffic lights, parks, lakes, construction sites, and more.
Read more at the Microsoft blog.

Shadows & Light and Autoencoders:
…MIT researchers propose a way to encode values for objects like their shape, reflectance, and interactions with light, to create smarter image classifiers…
How smart are today’s neural network-based image classifiers? Not very; modern deep learning-based classifiers are very good at taking a bunch of pixel values and applying a label to this set of numbers, but the resulting representations are so brittle that they generalize poorly and are vulnerable to exploits like adversarial examples. Some hope that the solution is simply bigger models trained with more compute and data than today’s ones. This thesis could be correct, but it’ll take a few more cranks of Moore’s Law (accelerated by the release of AI-specific ASICs) before we can test it.
  An alternative is to leap ahead of the representational capacity gleaned from more computers by adding a bit more a priori structure to the AI model. That’s the idea behind the Rendered Intrinsics Network (RIN) from researchers at MIT and DeepMind. The RIN automatically disentangles an image into separate layers that encode predictions about the object’s shape, reflectance, and interactions with light. It uses several convolutional encoders and decoders to take an image, split it into its distinct parts – separating things like the shape of the object from the lighting conditions – then reassemble these disparate components into a model of the image. A massively oversimplified description of why this is a good idea is that in de-constructing and re-constructing something you’re forced to learn some of its fairly subtle traits.
  “RIN makes use of unlabeled data by comparing its reconstruction to the original input image. Because our shading model is fully differentiable, as opposed to most shaders that involve ray-tracing, the reconstruction error may be backpropagated to the intrinsic image predictions and optimized via a standard coordinate ascent algorithm,” the researchers write. “RIN has one shared encoder for the intrinsic images but three separate decoders, so the appropriate decoder can be updated while the others are held fixed.”
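An illustrative PyTorch-style rendering of that objective: one shared encoder, three decoders, and a differentiable shading step whose reconstruction error drives learning. Module names, the recombination rule, and the loss are assumptions, not the paper’s code.

```python
# Sketch: decompose an image into intrinsic layers, re-render it, and penalize
# the difference to the unlabeled input image.
import torch.nn.functional as F

def rin_reconstruction_loss(rin, image):
    features = rin.encoder(image)                    # one shared encoder
    shape = rin.shape_decoder(features)              # e.g. surface normals
    reflectance = rin.reflectance_decoder(features)  # albedo
    lighting = rin.lighting_decoder(features)        # lighting conditions
    shading = rin.shader(shape, lighting)            # differentiable shading model
    reconstruction = reflectance * shading           # recombine intrinsic images
    return F.mse_loss(reconstruction, image)         # error backprops to every decoder
```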
  Data: Researchers generated data for this project by taking a set of five basic shape primitives – cubes, spheres, cones, cylinders, and toruses – then rendering each of them in 500 different colors, with each shape viewed from 10 orientations. They tested their RIN on unlabeled objects including a bunny and a teapot, attaining good results, though more work is needed to figure out whether the approach scales to real-world data.
Read the research here: Self-Supervised Intrinsic Image Decomposition.

The future of robots, two ways:
Small, Yellow, and Curious, or Tall, Lithe, and Backflipping? Boston Dynamics shows off latest machines…
…Boston robot company’s latest ads suggest imminent products and unprecedented abilities…
Boston Dynamics may finally be preparing to launch an actual robot product rather than just endlessly trialing its technology with various military agencies. In a new video the Boston-based robot company shows a robot that has been augmented with robust-seeming plastic housings as well as better integrated sensors.
  Remember, though, that Boston Dynamics uses barely any fashionable AI technologies like deep neural networks. Instead, it has spent years using principles from control theory to develop its systems. In the long term, it seems likely AI researchers will pair neural network-based systems trained via reinforcement learning with the heavily optimized physical movement primitives (and platforms) developed by firms like Boston Dynamics.
Watch more here: The New SpotMini (YouTube).
There’s another potential product on the way as well, in the form of the latest design of the company’s ‘Atlas’ robot. Like SpotMini, this version of Atlas has far more carefully shaped and ‘consumerized’ parts, but it’s decidedly more rough and lab-bench-like in appearance than its quadruped brethren.
  The robot does have some moves, though, as demonstrated in a separate video by Boston Dynamics showing the robot first jumping between separate blue blocks, then jumping up onto a slightly higher block, then backflipping (!) onto a (somewhat flexible) floor.
To see the backflip, watch Boston Dynamics’ ‘What’s new, Atlas?’ video here (YouTube).

My data beats your resolution:
…Stanford University AI system uses freely available Landsat data to predict Asset Wealth Index values from satellite imagery…
Stanford researchers have used residual networks with dilated convolutions to train classifiers that can efficiently use large amounts of multi-spectral, low-resolution data, beating their own prior baselines, which were trained on significantly higher-resolution data in a narrower spectral band.
  The researchers show that they can use an ensemble of Landsat satellite data with a resolution of 15-30m/px to beat a baseline trained on higher resolution 2.5m/px data from Google (think of this as the difference between being able to (roughly) count cars in a parking lot, versus counting planes on a jetway).
  The researchers use dilated convolutions to vary the receptive field of the network (18- and 34-layer ResNets and VGG-F) to incorporate data from multiple resolutions into the classifier, versus the fixed resolution of Google’s high resolution images.
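A toy illustration of the receptive field effect: both layers below have identical parameter counts, but the dilated kernel covers a 9x9 neighbourhood instead of 3x3 (the paper’s actual architecture is of course deeper and different).

```python
# Dilation widens a convolution's receptive field without adding parameters.
import torch
import torch.nn as nn

x = torch.randn(1, 8, 64, 64)   # e.g. a stack of multi-spectral, low-resolution bands

conv_standard = nn.Conv2d(8, 16, kernel_size=3, dilation=1, padding=1)
conv_dilated = nn.Conv2d(8, 16, kernel_size=3, dilation=4, padding=4)

# Same output resolution, same weight count, much larger spatial context.
print(conv_standard(x).shape, conv_dilated(x).shape)
```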
Read more here: Poverty Prediction With Public Landsat 7 Satellite Imagery and Machine Learning.
   (Many other companies are experimenting with training convolutional neural network-based classifiers on modern satellite imagery: Facebook has used them to predict where people live, and Orbital Insight has been able to predict retail trends by monitoring parking lots full of cars; the world is learning to see itself.)

Training ImageNet in 15 minutes (with over 1,000 NVIDIA GPUs):
Being able to access and effectively use large amounts of computers will be to AI research as access to large amounts of well labelled data is to AI product development…
Japanese AI startup Preferred Networks has successfully trained an ImageNet model to accuracies competitive with the state of the art in 15 minutes.
  For those not following the ‘how fast can you train ImageNet’ contest, a refresher:
July, 2017: Facebook trains an ImageNet model in ~1 hour using 256 GPUs.
November, 2017: Preferred Networks trains ImageNet in ~15 minutes using 1024 NVIDIA P100 GPUs.

Using deep learning to predict stock price movements!
…Backtesting shows promising results for stock prediction approach…
Researchers have shown it’s possible to (theoretically) generate good returns with stock market data using deep learning techniques.
  Two notable things about this:
  1) It provides further evidence that today’s basic AI tools, when scaled up and fed with decent data, are capable of performing credibly difficult tasks, like making accurate predictions in the stock market.
  2) Since this exists, it confirms most people’s intuitions that large quant shops like Renaissance / MAN Group / 2Sigma have been exploring techniques like this in private for commercial gain.
     Now researchers with Euclidean, a financial technology firm, and Amazon AI / CMU, have outlined a system trained on data from 11,815 stocks that were publicly traded on the NYSE, NASDAQ or AMEX exchanges for at least 12 consecutive months between January, 1970 and September, 2017. (Excluded stocks: non-US-based companies, financial sector companies, and any company with an inflation-adjusted market capitalization below 100 million dollars.) The data comes from the Compustat North America and Compustat Snapshot databases.
  The system uses multi-task learning to predict future stock performance by normalizing all the stocks into the same data format then analyzing 16 future fundamental details about each stock, including trailing twelve month revenue, cost of goods sold, EBIT, as well as quarterly measures like property plant and equipment, debt in current liabilities, accounts payable and taxes payable, and so on.
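A hedged sketch of the general shape of such a model: a small network that maps a normalized history of fundamentals to a prediction of those same fundamentals further out. Feature counts, history length, and layer sizes here are illustrative assumptions, not the paper’s configuration.

```python
# Illustrative lookahead model over normalized company fundamentals.
import torch
import torch.nn as nn

N_FUNDAMENTALS = 16   # e.g. TTM revenue, COGS, EBIT, accounts payable, ...
HISTORY_STEPS = 12    # trailing, normalized observations per company

class LookaheadMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FUNDAMENTALS * HISTORY_STEPS, 256),
            nn.ReLU(),
            nn.Linear(256, N_FUNDAMENTALS),   # predicted future fundamentals
        )

    def forward(self, history):               # history: (batch, steps, fundamentals)
        return self.net(history.flatten(1))

model = LookaheadMLP()
predicted = model(torch.randn(32, HISTORY_STEPS, N_FUNDAMENTALS))
```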
  The results: “Our results demonstrate a clear advantage for the lookahead factor model. In nearly all months, however turbulent the market, neural networks outperform the naive predictor (that fundamentals remains unchanged). Simulated portfolios lookahead factor strategies with MLP and RNN perform similarly, both beating traditional factor models”, they write.
Read more: Improving Factor-Based Quantitative Investing By Forecasting Company Fundamentals.

Less precision for future compute savings:
Intel-Nervana detail ‘Flexpoint’ data format for variable precision training of deep neural nets, letting you train 16-bit (sort of) precision networks with performance roughly equivalent to 32-bit ones…
Intel-Nervana has proposed combining fixed point and floating point arithmetic to implement a new data format, Flexpoint, that lets you train networks with reduced precision without a huge performance tradeoff.
  “Flexpoint is based on tensors with an N-bit mantissa storing an integer value in two’s complement form, and an M-bit exponent e, shared across all elements of a tensor. This format is denoted as flexN+M. Fig. 1 shows an illustration of a Flexpoint tensor with a 16-bit mantissa and 5-bit exponent, i.e. flex16+5 compared to 32-bit and 16-bit floating point tensors. In contrast to floating point, the exponent is shared across tensor elements, and different from fixed point, the exponent is updated automatically every time a tensor is written,” the authors write.
  The flex16+5 format appears to work as expected, with Intel-Nervana training neural nets with equivalent performance to 32-bit variants (whereas stock 16-bit tends to lead to a slight relative fall in accuracy).
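A toy numerical illustration of the shared-exponent idea: store every element of a tensor as a 16-bit two’s-complement integer and keep a single exponent for the whole tensor. Real Flexpoint also predicts and manages the exponent across training iterations, which this sketch ignores.

```python
# Simplified flex16+5-style quantization: one exponent shared by all elements.
import numpy as np

def to_flex(tensor, mantissa_bits=16):
    max_mantissa = 2 ** (mantissa_bits - 1) - 1                   # 32767 for 16 bits
    exponent = int(np.ceil(np.log2(np.abs(tensor).max() / max_mantissa)))
    mantissas = np.round(tensor / 2.0 ** exponent).astype(np.int16)
    return mantissas, exponent

def from_flex(mantissas, exponent):
    return mantissas.astype(np.float32) * 2.0 ** exponent

x = np.random.randn(4, 4).astype(np.float32)
m, e = to_flex(x)
print(np.abs(from_flex(m, e) - x).max())   # quantization error on the order of 2**e
```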
   In the next few years we’re likely going to see various companies launching more specialized hardware for AI processing, some of which will implement 16-bit precision (or less) natively, so software techniques like this will likely become more prevalent.
Read more here: Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks.

Tech Tales:

[A flat in Deptford, London, United Kingdom. 2026.]

So you’re walking round your house aimlessly doing dishes and listening to the radio when you start to compose The Rant. It’s a rant about society and certain problems that you perceive both with yourself and with other people. It’s also a rant about how technology narrows the distance between your own brain and the brain of everyone else in the world to the point you feel your emotions are now contingent on the ‘mood of the internet’. This doesn’t please you.

So after spending close to an hour verbally composing this essay and having synthesized voices speak it back to you and synthesized dream-AI actors carry out dramatized versions of it, you prepare to post it to the internet.

But when you submit it to your main social network platform the post is blocked; you stare at an error message displayed in cheerful pink with an emoji of a policeman-like person holding a ‘Stop’ sign. Posi Vibes Only! the warning says. Try putting in some more cheerful words or phrases. Maybe tag a friend? it suggests.

You frown. Try to outsmart it. You first embed bits of your rant as text overlaying images, but when you go to submit these to the network it only lets a percentage of them through, blocking some, hiding your message, and changing it to one of hope, talk of ‘rising up’ and ‘growing comfortable with the world’ – a spliced-up, distorted version of your position. You record a basic audio file and upload it and the same thing happens, with your virtual personality praising (instead of critiquing) the super-structure. Of course you tell your real friends about your views, but what’s the point of that? They end up caught in the same digital traps, able to talk to other people in the real world, but unable to transmit their message of sadness and rebellion to the larger mass. POSI VIBES ONLY~!

Import AI: #68: Chinese chip companies bet on ASICs over GPUs, AI researchers lobby governments over autonomous weapons, and researchers use new analysis technique to peer into neurons

Welcome to Import AI, subscribe here.

Canadian and Australian researchers lobby their countries to ban development of lethal autonomous weapons:
Scientists foresee the imminent arrival of cheap, powerful, autonomous weapons…
…Canadian and Australian researchers have lobbied their respective governments to ban development of weapons that will kill without ‘meaningful human control’. This comes ahead of the United Nations Conference on the Convention on Certain Conventional Weapons, where nations will gather and discuss the issue.
…Signatories include several of Canada and Australia’s most influential AI researchers, including Geoffrey Hinton (Google/University of Toronto/Vector Institute), Yoshua Bengio (Montreal Institute for Learning Algorithms, and an advisor to many organizations), and Doina Precup (McGill University, DeepMind), among others from Canada; along with many Australian AI researchers including Toby Walsh.
…Autonomous weapons “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. The deadly consequence of this is that machines—not people—will determine who lives and dies. Canada’s AI community does not condone such uses of AI. We want to study, create and promote its beneficial uses”, the Canadian researchers write.
…”As many AI and robotics corporations—including Australian companies—have recently urged, autonomous weapon systems threaten to become the third revolution in warfare. If developed, they will permit armed conflict to be fought at a scale greater than ever before, and at timescales faster than humans can comprehend,” write the Australian researchers.
…Read the letter from Canadian researchers here.
…Read the UNSW Sydney press release and letter from Australian researchers here.

What do the neurons in a neural network really represent?
…Fantastic research by Chris Olah and others at Google shows new techniques to visualize the sorts of features learned by neurons in neural networks, making results of classifications more interpretable.
…Please read the fantastic post on Distill, which is an excellent example of how modern web technologies can make AI research and communications more hands-on and explicable.

Human Priors, or the problem with human biases and reinforcement learning:
Humans use visual priors to rapidly solve new tasks, whereas RL agents learn by manipulating their environment with no assumptions based on the visual appearance…
…Humans are able to master new tasks because they approach the world with a set of cognitive assumptions which allow for useful traits like  object disambiguation and spatial reasoning. How might these priors influence how humans approach solving games, and how might these approaches be different to those chosen by algorithms trained via reinforcement learning?
…In this anonymized ICLR 2018 paper, researchers explore how they can mess with the visual appearance of a computer game so that humans need substantially more time to solve it, whereas algorithms trained via reinforcement learning only take marginally longer. This shows how humans depend on various visual indicators when trying to solve a game, whereas RL agents behave much more like blind scientists, learning to manipulate their environment without arriving with assumptions derived from the visual world.
…”Once a player recognizes an object (i.e. door, monster, ladder), they seem to possess prior knowledge about how to interact with that object – monsters can be avoided by jumping over them, ladders can be climbed by pressing the up key repeatedly etc. Deep reinforcement learning agents on the other hand do not possess such priors and must learn how to interact with objects by mere hit and trial,” they note.
…Human baselines were derived by having about 30 people play the game(s) via Amazon Mechanical Turk, with the scientists measuring how long it took them to complete the game.
Read more about the research here: Investigating Human Priors for Playing Video Games.

Researchers release data for more than ~1,100 simulated robot soccer matches:
Data represents more than 180 hours of continuous gameplay across ten teams selected from leading competitors within 2016 and 2017 ‘robocup’ matches…
…Researchers have released a dataset of games from the long-running RoboCupSim competition. The data contains the ground truth data from the digital soccer simulator, including the real locations of all players and objects at every point during each roughly ~10 minute game, as well as the somewhat more noisy and incomplete data that is received by each robot deployed in the field.
…One of the stories of AI so far has been the many surprising ways in which people use different datasets, so while it’s not immediately obvious what this dataset could be used for I’m sure there are neat possibilities out there. (Motion prediction? Multi-agent studies? Learning a latent representation of individual soccer players? Who knows!)
Read more here: RoboCupSimData: A RoboCup soccer research dataset.

From the Dept. of ‘And you thought AI was weird’: Stitching human and mouse brains together:
…In the same way today’s AI researchers like to mix and match common deep learning primitives, I’m wondering if in the future we’ll do the same with different organic brain types…
Neuroscientists have successfully implanted minuscule quantities of human brain tissue (developed from stem cells) into the brains of mice. Some of the human brain samples have lived for as long as two months and have integrated (to a very slight degree) with the mouse brains.
…”Mature neurons from the human brain organoid sent axons, the wires that carry electrical signals from one neuron to another, into “multiple regions of the host mouse brain,” according to a team led by Fred “Rusty” Gage of the Salk Institute,” reports StatNews.
…Read more here: Tiny human brain organoids implanted into rodents, triggering ethical concerns.

Hanson Robotics on the value of stunt demos for its robots:
…Makers of the robot Sophia, which was recently granted ‘citizenship’ by the notoriously progressive nation of Saudi Arabia, detail value of stunt demos…
…Ben Goertzel, the chief scientist of Hanson Robotics, makers of the Sophia robot, has neatly explained to The Verge why his company continues to hold so many stunt demonstrations that lead to people having a wildly inaccurate view of what AI and robots are capable of.
“If I tell people I’m using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I’m talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable.” He says there’s a more obvious benefit too: in a world where AI talent and interest is sucked towards big tech companies in Silicon Valley, Sophia can operate as a counter-weight; something that grabs attention, and with that, funding. “What does a startup get out of having massive international publicity?” he says. “This is obvious.”
…So there you have it. Read more in this article by James Vincent at The Verge.

AI and explanation:
…How important is it that we explain AI, can we integrate AI into our existing legal system, and what challenges does it pose to us?…
…When should we demand an explanation from an AI algorithm for why it made a certain decision, and what legal frameworks exist to ingest these explanations so that they make sense within our existing legal system? These are some of the questions researchers with Harvard University set out to answer in a recent paper.
…Generally, humans expect to be able to get explanations when the decision has an impact on someone other than the decision-maker, indicating that there is some kind of intrinsic value to knowing if a decision was made erroneously or not. Societal norms tend to indicate an explanation should be mandated if there are rational reasons to believe that an error has occurred or will occur in the decision making process as a consequence of the inputs to the process being unreliable or inadequate, or because the outcomes of the process are currently inexplicable, or due to overall distrust in the integrity of the system.
…It seems likely that it’ll be possible to get AI systems to explain themselves in a way that plugs into our existing legal system, the researchers write. This is because they view explanation as being distinct from transparency. They also view explanation as being a kind of augmentation that can be applied to AI systems. This has a neat policy implication, namely that: “regulation around explanation from AI systems should consider the explanation system as distinct from the AI system.”
…What the researchers suggest is that when it is known that an explanation will be required, organizations can structure their algorithms so that the relevant factors are known in advance and the software is structured to provide contextual decision-making explanations relating to those factors.
…Bias: A problem faced by AI designers, though, is that these systems will somewhat thoughtlessly automatically de-anonymize information and in some cases develop biased traits as a consequence of the ingested data. “Currently, we often assume that if the human did not have access to a particular factor, such as race, then it could not have been used in the decision. However, it is very easy for AI systems to reconstruct factors from high-dimensional inputs… Especially with AI systems, excluding a protected category does not mean that a proxy for that category is not being created,” they write. What this means is that: “Regulation must be put in place so that any protected factors collected by AI system designers are used only to ensure that the AI system is designed correctly, and not for other purposes within the organization “.
…The benefit of no explanation: “AI systems present an opportunity that human decision-makers do not: they can be designed so that the decision-making process does not generate and store any ancillary information about inputs, intermediate steps, and outputs,” the researchers note, before explaining that systems built in this way wouldn’t be able to provide explanations. “Requiring every AI system to explain every decision could result in less efficient systems, forced design choices, and a bias towards explainable but suboptimal outcomes.”
…Read more here: Accountability of AI Under the Law: The Role of Explanation.

*** The Department of Interesting AI Developments in China ***

Chinese startup wins US government facial recognition prize:
…Yitu Tech, a Chinese startup specializing in AI for computer vision, security, robotics, and data analysis, has won the ‘Face Recognition Prize Challenge’ which was hosted by IARPA, an agency whose job is “to envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage.”
…The competition had two components: a round focused on identifying faces in unseen test images; and a round focused on verifying whether two photos were of the same person or not. “Both tasks involve “non-cooperative” images where subjects were unaware of the camera or, at least, did not engage with, or pose for, the camera,” IARPA and NIST note on the competition website. Yitu won the identification accuracy prize, which is measured by having a small false negative identification rate.
Details about the competition are available here (PDF).
…Read slightly more in Yitu Tech’s press release.
…This isn’t Yitu’s first competition win: it’s also ranked competitively on another ongoing NIST challenge called FRVT (Face Recognition Vendor Test).
…You can check out the barely readable NIST results here: PDF.

Dawn of the NVIDIA-killing deep learning ASICS:
…China’s national development strategy depends on it developing cutting-edge technical capabilities, including in AI hardware. Its private sector is already producing novel computational substrates, including chips from Bitcoin company Bitmain and state-backed chip company Cambricon...
AI chips are one of the eight ‘Key General Technologies’ identified by China as being crucial to its national AI strategy (translation available here). Building off of the country’s success in designing its own semiconductors for use in the high-performance computing market (the world’s fastest supercomputer runs on semiconductors based on Chinese IP), the Chinese government and private sector are now turning their attention to the creation of processors customized for neural network training and inference – and the results are already flooding in.
Bitmain, a large bitcoin-mining company, is using the skills it has gained in building custom chips for mining cryptocurrency to develop separate hardware to train and run deep learning-based AI systems. It has just given details on its first major chip, the Sophon BM1680.
The details: The Sophon is an application-specific integrated circuit (ASIC) for deep learning training and inference. Each chip contains 64 NPUs (neural processing units), each of which has 64 sub-chips. Bitmain is selling these chips within ‘SC1’ and ‘SC1+’ server cards, the second of which chains two BM1680s together.
Framework support: Caffe, Darknet, TensorFlow, MXNet, and others.
But what is it for? Bitmain has demonstrated the chips being used for “production-scale video analytics for the surveillance industry” including motor/non-motor vehicle and pedestrian detection, and more, though I haven’t seen them referenced in a detailed research paper yet.
…Pricing: The SC1 costs $589 and has a TDP of 85W. The SC1+ isn’t available at this time.
…Read more here: BITMAIN launches SOPHON Tensor Processors and AI Solutions.
China’s state-backed AI chip startup unfurls AI processors:
Cambricon plans to expand to control 30% of China’s semiconductor IP market…
Cambricon, a state-backed Chinese semiconductor company, has released two chips – the Cambrian-1H8 for low-power computer vision applications, and the more powerful Cambrian-1H16; announced plans to release a third chip specialized for self-driving cars; and released AI software called Cambrian NeuWare. It plans to release a range of ‘MLU’ server AI chips in 2018 as well, it said.
…“We hope that Cambricon will soon occupy 30% of China’s IP market and embed one billion device worldwide with our chips. We are working side-by-side with and are on the same page with global manufacturers on this,” says the company’s CEO Tianshi Chen.
…Read more here: AI Chip Explosion: Cambricon’s Billion-Device Ambition.
CHIP WARS:
Check out this fantastic chart from Ark Invest showing the current roster of deep learning chip companies.

OpenAI Bits&Pieces:

Former OpenAI staffers and other researchers launch robot startup:
Embodied Intelligence aims to use imitation learning, learning from demonstrations, and few-shot / meta-learning approaches, to expand capabilities of industrial robots.
Read more: Embody.ai
Creating interpretable agents with iterative curriculums:
…Read more: Interpretable and Pedagogical Examples.

Tech Tales:

When the machines came, the artists rejoiced: new minds gave them new tools and mediums through which to propagate their views. When the computer artists came, the human artists rejoiced: new minds led to new aesthetics designed according to different rules and biases than those adopted by humans. But after some years the human artists stopped rejoicing as automatic computer generation, synthesis, and re-synthesis of art approached a frequency so extreme that humans struggled to keep up, finding them unable to place themselves, creatively, within their aesthetic universe.

The pall spread as a fog, imperceptible at first, but apparent after many years. The forward march of ‘culture’ became hard to discern. What does it mean to go up or down or left or right when you live in an infinite ever-expanding universe? These sorts of questions, long the fascination of academics specializing in maths and physics and fundamental philosophy, took on a real sense of import and weight. How, people wondered, do we navigate ourselves forward in this world of ceaseless digital creation? Where is the place that we aim for? What is our goal and how is it different to the aesthetic pathways being explored by the machines? Whenever a new manifesto was issued it would be taken up and its words would echo around and through the world, until it was absorbed by other ideas and picked apart by other ideologies and dissembled and re-laundered into other intellectual or visual frameworks. Eventually the machines began to produce their own weighty, poorly read (even by other AIs) critical journals, coming up with essays that in title, form, and content, were hard to tell apart from the work of human graduate students: In search of meaning in an age of repetition and hypernormalization: Diatribes from the Adam Curtis Universe / The Dark Carnival, Juggalos, Antifa, and the New American Right: An exploration / Where Are We Right Now: Geolocation & The Decline of Mystery in Daily Life.

The intellectual world eventually became like a hall of mirrors, where the arrival of any new idea would be almost instantly followed by the distortion, replication, and propagation of this idea, until the altered versions of itself outgrew the original – usually in as little time as it takes for photons to bounce from one part of a narrow corridor to another.

Technologies that inspired this story: GANGogh: Creating Art with GANs; Wavenet.