Mapping Babel

Import AI Newsletter 37: Alibaba enters StarCraft AI research, each industrial robot takes 6.2 human jobs, and Intel bets on another AI framework

Will the neural net doctor please stand up? Diagnosing problems in neural networks is possibly even trickier than debugging traditional software – emergent faults are a fact of life, you have to deal with mental representations of the problem that tend to be quite different to traditional programming, and it’s relatively difficult to visualize and analyze enough of any model to develop solid intuitions about what it is doing.
…There are a bunch of new initiatives to try and fix this. New publication Distill aims to tackle the problem by pairing technically astute writing with dynamic, fiddle-able visual widgets. The recent article on Momentum is a great example of the form. Additionally, companies like Facebook, OpenAI and Google are all trying to do more technical explainers of their work to provide an accompaniment to, and sometimes an expansion of, research papers.
But what about explaining neural nets to the people that work on them, while they work on them? Enter ActiVis, a neural network analysis and diagnosis tool built through a partnership between researchers at Georgia Tech and over 15 engineers and researchers within Facebook.
…ActiVis is designed to help people inspect and analyze different parts of their trained model, interactively in the web browser, letting them visually explore the outcome of their specific hyperparameter settings. It allows both for inspection of individual or small groups of neurons within a system, and for views of larger groups. (You can see an example of the user interface on page 5 of the research paper (PDF).) You don’t know what you don’t know, as they say, and tools like this may help to surface unsuspected bugs.
… The project started in 2016 and has been continuously developed since then. For next steps, the researchers plan to extend the system to visualize the gradients, letting them have another view of how data sloshes in and out of their models.
…Another potential path for explanations lies in research that gets neural network models to better explain their own actions to people, like a person narrating what they’re doing to an onlooker, as outlined in this paper: Rationalization: A Neural Machine Translation Approach to Generating Natural Explanations.

Each new industrial robot eliminates roughly 6.2 human workers, according to an MIT study on the impact of robot automation on labor. Robots and Jobs: Evidence from US Labor Markets (PDF).

What does AI think about when it thinks about itself, and what do we think about when we think about AI?: a long-term research problem in AI is how to effectively model the internal state of an emergent, alien intelligence. Today’s systems are so crude that this is mostly an intellectual rather than practical exercise, but scientists can predict a future where we’ll need to have better intuitions about what an AI is thinking about…
… that motivated researchers with the Georgia Institute of Technology and Virginia Tech to call for a new line of research into building a Theory of AI’s Mind (ToAIM). In a new research paper they outline their approach and provide a practical demonstration of it.
…the researchers test their approach on Vicki, an AI agent trained on the VQA dataset to be able to answer open-ended questions about the contents of pictures by choosing one of one thousand possible answers. To test how good people are at learning about Vicki and its inner quirks, the researchers evaluate people’s skill at predicting when and how Vicki will fail, and at predicting a possible answer Vicki may give to a question. In a demonstration of the incredible data efficiency of the human mind, volunteers are able to successfully predict the types of classifications Vicki will make after seeing only about 50 examples.
…In a somewhat surprising twist, human volunteers end up doing badly at predicting Vicki’s failures when given additional information that researchers use to diagnose performance, such as a visualization of Vicki’s attention over a scene.
…I’m also interested in the other version of this idea: an AI building a Theory of a Human’s Mind. Eventually, AI systems will need to be good at predicting what course of actions they can take to complement the desires of a human. To do that they’ll need to model us efficiently, just as we model them.

Alibaba enters the StarCraft arena: StarCraft is a widely played, highly competitive real-time strategy game, and many researchers are racing with one another to beat it. Mastering a StarCraft game requires the development of an AI that can manage a complex economy while mounting ever more sophisticated military strikes against opponents. Games can last for anywhere from ten minutes to an hour, and require long-range strategic planning as well as carefully calibrated military and economic unit control.
…the game is motivating new research approaches, as teams – likely spurred by DeepMind’s announcement last year that it would work with Blizzard to create a new API for developing AI within StarCraft – race to crack it.
…Recent publications include Stabilizing Experience Replay for Deep Multi-Agent Reinforcement Learning from the University of Oxford and Microsoft Research, Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks from researchers at Facebook AI Research, and now a StarCraft AI paper from Alibaba and University College London.
… in Multiagent Bidirectionally-Coordinated Nets for Learning to Play StarCraft Combat Games, the researchers design a new type of network to help multiple agents coordinate with one another to achieve a goal. The BiCNet network has two components: a policy network and a Q-Network. It uses bi-directional recurrent neural networks to give it a form of short term memory and to help individual agents share their state with their allies. This allows for some degree of locally independent actions, while being globally coordinated.
…in tests, the network is able to learn complex multi-agent behaviors, like coordinating moves among multiple units without them colliding, developing “hit and run” tactics (go in for the attack, then run out of range immediately, then swoop in again), and learning to attack in coordination from a position of cover. Check out the strategies in this video.
…Research like this might help Chinese companies shake off their reputation for being better at scaling up or applying already-known techniques, rather than developing entirely new approaches.
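The coordination trick in BiCNet is easy to caricature: run a bidirectional RNN across the agents, rather than across time, so each unit conditions its action on state shared with its allies. Here is a minimal PyTorch sketch, with invented sizes, and not the authors’ implementation:

```python
import torch
import torch.nn as nn

class BiCNetStylePolicy(nn.Module):
    """Toy policy: a bidirectional GRU over the *agent* axis shares state
    between allies, then a linear head emits one action vector per agent."""
    def __init__(self, obs_dim=32, hidden=64, action_dim=8):
        super().__init__()
        self.share = nn.GRU(obs_dim, hidden, bidirectional=True, batch_first=True)
        self.act = nn.Linear(2 * hidden, action_dim)

    def forward(self, obs):            # obs: (batch, n_agents, obs_dim)
        shared, _ = self.share(obs)    # information flows along the agent axis
        return self.act(shared)        # (batch, n_agents, action_dim)
```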

Supervised telegram learning: Beating Atari With Natural Language Guided Reinforcement Learning (PDF), from researchers at Stanford, shows how to use English sentences to instruct a reinforcement learning agent to solve a task. The approach yields an agent that can attain competitive scores on tough games like Montezuma’s Revenge.
…For now it’s tricky to see the practical value of this approach given the strong priors that make it successful – characterizing each environment and then writing instructions and commands that can be relayed to the RL agent represent a significant amount of work…
…in the future this technique could help people build models for real-world problems where they have access to large amounts of labeled data.

Real Split-Brains: The human hippocampus appears to encode two separate spatial values as memories when a person is trying to navigate their environment. Part of the brain appears to record a rough model of the potential routes to a location – take a left here, then a right, straight on for a bit, and then you’re there, wahey! – and another part appears to be consistently estimating the straight-line distance as the crow flies.
…It’s also hypothesized that the pre-frontal cortex helps to select new candidate routes for people to take, which then re-activates old routes stored in the hippocampal memory…
…Sophisticated AI systems may eventually be built in an architecturally similar manner, with data flowing through a system and being tagged and represented and referred to differently according to different purposes. (DeepMind seems to think so, based on its Differentiable Neural Computer paper.)
…I’d love to know more about the potential interplay between the representations of the routes to the location, and the representation of the straight-line, as-the-crow-flies distance to it. Especially given the trend in AI towards using actor-critic architectures, and the recent work on teaching machines to navigate the space around them by giving them a memory explicitly represented as a 2D map.

AI development feels a lot like hardware development: hardware development is slow, sometimes expensive, frustratingly unpredictable, and prone to random errors that are hard to identify during the initial phases of a project. To learn more, read this exhaustive tick-tock account from Elaine Chen in this post on ConceptSpring on how hardware products actually get made. Many of these tenets and stages also apply to AI development.

Smart farming with smart drones: Chinese dronemaker DJI has started expanding beyond consumer drones into other markets. The latest? Drones that spray insecticide on crops across China.
…But what if these farming drones were doing something nefarious? Enter the new, commercially lucrative world of DroneVSDrone technology. Startup AirSpace claims its drone defense system can use computer vision algorithms and some mild in-flight autonomy to command a fleet of defense drones that identify hostile drones and automatically fire net-guns at them.

Battle of the frameworks! Deep learning has led to a Cambrian explosion in the number of open source software frameworks available for training AIs. Now we’re entering the period where different megacorps pick different frameworks and try to make them a success.
DeepMind WTF++: DeepMind has released Sonnet, another wrapper for TensorFlow (WTF++). The open source library will make it easier for people to compose more advanced structures on top of TF; DeepMind has been using it internally for some time, since it switched to TF a year ago. Apparently the library will be most familiar to previous users of Lasagne. Yum! (Google also has Keras, which sits on top of TF. Come on folks, it’s Google, you knew there’d be a bunch!) Microsoft has CNTK, Amazon has MXNet, Facebook has PyTorch, and now Chainer gets an ally: Intel has settled on… Chainer! Chainer is developed by Japanese AI startup Preferred Networks and is currently quite well used in Japan but not much elsewhere. Notable user: Japanese robot giant FANUC.
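To give a flavor of composing modules on top of TF, here is a minimal sketch, assuming Sonnet 2’s eager-style API and its snt.nets.MLP module (the layer sizes are arbitrary):

```python
import sonnet as snt
import tensorflow as tf

# Sonnet modules are plain Python objects that you build once and then
# call on TensorFlow tensors, which makes larger structures easy to compose.
mlp = snt.nets.MLP([128, 64, 10])          # a three-layer perceptron module
logits = mlp(tf.random.normal([32, 784]))  # -> tensor of shape (32, 10)
```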

GAN vs GAN vs GAN vs GAN: Generative adversarial networks have become a widely used, popular technique within AI. They’ve also fallen victim to a fate some acronyms deal with – having such a good abbreviation that everyone uses it in paper titles. Enter new systems like WGAN (Wasserstein GAN), StackGAN, BEGAN, DiscoGAN, and so on. Now we appear to have reached some kind of singularity, as two Arxiv papers appeared in the same week with the same acronym: ‘SeGAN’ and ‘SEGAN’…
…But what does the proliferation of GANs and other generative systems mean for the progress of AI, and how do you measure this? The consensus, based on responses to my question on Twitter, is to test downstream tasks that require these entities as components. Merely eyeballing generated images is unlikely to lead to much. Though I must say I enjoy this CycleGAN approach that can warp a movie of a horse into a movie of a zebra.

JOB: Help the world understand AI progress: The AI Index, an offshoot of the AI100 project (ai100.stanford.edu), is a new effort to measure AI progress over time in a factual, objective fashion. It is led by Raymond Perrault (SRI International), Erik Brynjolfsson (MIT), Hagar Tzameret (MIDGAM), Yoav Shoham (Stanford and Google), and Jack Clark (OpenAI). The project is in the first phase, during which the Index is being defined. The committee is seeking a project manager for this stage. The tasks involved are to assist the committee in assembling relevant data sets, through both primary research online and special arrangements with specific dataset owners. The position calls for being comfortable with datasets, strong interpersonal and communication skills, and an entrepreneurial spirit. The person would be hired by Stanford University and report to Professor emeritus Yoav Shoham. The position is for an initial period of six months, most likely at 100%, though a slightly lower time commitment is also possible. Salary will depend on the candidate’s qualifications.… Interested candidates are invited to send their resumés to Ray Perrault at ray.perrault@sri.com.

OpenAI bits&pieces:

Hunting the sentiment neuron: New research release from OpenAI in which we discuss finding a dedicated ‘sentiment neuron’ within a large mLSTM trained to predict the next character in a sentence. This is a surprising, mysterious result. We released the weights of the model so people can have a play themselves. Other info in the academic paper. Code: GitHub. Bonus: the fine folks at Hahvahrd have dumped the model into their quite nice LSTM visualizer, so you can inspect its mysterious inner states as well.

Tech Tales:

[2030: A resource extraction site, somewhere in the rapidly warming Arctic.]

Connectivity is still poor up here, near the cap of the world. Warming oceans have led to an ever-increasing cycle of powerful storms, and the rapid turnover of water into rain strengthens mysterious currents, further mixing the temperatures of the world’s northern oceans. Ice is becoming a fairytale at the lower latitudes.

At the mining site, machines ferry to and fro, burrowing into scraped earth, their paths defined by a curious flock of surveillance drones & crawling robots. Invisible computer networks thrum with data, and eventually it builds up to the point that it needs to be stored on large, secured hard drives and transported by drone to places with a good enough connection to stream it to a cluster of data centers.

As the climate changes, the resources grow easier to access and robots build up the infrastructure at the mining site. Wealth multiplies. In 2028 they decide to construct a large data center on the mining site.

Now, in 2030, it looms, low-slung, a skyscraper asleep on its side, its sides pockmarked with circular holes containing turbines that cycle air in and out of the system, forever trying to equalize temperatures to cool the hungry servers.

Inside the datacenter there are things that sense the mining site as eyes sense the walls in a darkened room, or as ears hunt the distant sounds of dogs barking. It uses these intuitions to sharpen its vision and improve its hearing, developing a sense of touch as it exchanges information with the robots. After the solar panels are installed, the number of people working on the site falls off in a steep curve. Now the workers are much like residents of lighthouses in the olden days; their job is to watch the site and only intervene in the case of danger. There is very little of that, these days, as the over-watching computer has learned enough about the world to expand safely within it.

Import AI Newsletter 36: Robots that can (finally) dress themselves, rise of the Tacotron spammer, and the value in differing opinions in ML systems

Speak and (translate and) spell: sequence-to-sequence learning is an almost counter-intuitively powerful AI approach. In Sequence-to-Sequence Models Can Directly Transcribe Foreign Speech, academics show it’s possible to train a large neural network model to listen to audio in one language (Spanish) and automatically translate and transcribe it into another language (English). The approach performs well relative to other approaches and has the additional virtue of being (relatively) simple…
…The scientists detect a couple of interesting traits that emerge once the system has been fed enough data. Specifically, “direct speech-to-text translation happens in the same computational footprint as speech recognition – the ASR and end-to-end ST models have the same number of parameters, and utilize the same decoding algorithm – narrow beam search. The end-to-end trained model outperforms an ASR-MT cascade even though it never explicitly searches over transcriptions in the source language during decoding.”
Read and speak: We’re entering a world where computers can convincingly synthesize voices using neural networks. First there was DeepMind’s WaveNet, then Baidu’s Deep Voice, and now courtesy of Google comes the marvelously named Tacotron. Listen to some of the (freakily accurate) samples, or read some of the research outlined in Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model. Perhaps the most surprising thing is how the model learns to change its intonation, tilting the pitch up at the end of its words if there is a question mark at the end of the sentence.

Politeness can be learned: Scientists have paired SoftBank’s cute Pepper robot with reinforcement learning techniques to build a system that can learn social niceties through a (smart) trial and error process.
…The robot is trained via reinforcement learning and is rewarded when people shake its hand. In the process, it learns that behaviors like looking at a person or waving at them can encourage them to approach and give it a hand shake as well.
…It also learns to read some very crude social cues, as it is also given a punishment for attempting handshakes when none are wanted…
…You can read more about this in ‘Robot gains Social Intelligence through Multimodal Deep Reinforcement Learning’.

Thirsty, thirsty data centers: Google wants to draw up to 1.5 million gallons of water a day from groundwater supplies in Berkeley County to cool its servers – three times as much as the company’s current limit.

Facebook’s Split-Brain Networks: new research from Facebook, Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play (PDF), presents a simple technique to let agents learn to rapidly explore and analyze a problem, in this case a two-dimensional gridworld…
… the way it works is to have a single agent which has two distinct minds, Alice and Bob. Alice will perform a series of actions, like opening a specific door and traveling through it, then will have Bob perform the action in reverse, traveling back to the door, closing it, and returning to Alice’s start position.
…this gives researchers a way to have the agent teach itself an ever-expanding curriculum of tasks, and encourages it to learn rich representations of how to solve the tasks by having it reverse its own actions. This research is very early and preliminary, so I’ll be excited to see where Facebook take it next.
…This uses a couple of open source AI components. Specifically, MazeBase and RLLab.
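The reward structure is the crux, and it is simple enough to sketch. Below is a toy version of the asymmetry described above; the scale constant and names are assumptions, not the paper’s exact values:

```python
def self_play_rewards(t_alice: float, t_bob: float, scale: float = 0.01):
    """Toy sketch: Alice sets a task, Bob reverses/repeats it.
    t_alice, t_bob = time each mind took to complete its part."""
    r_bob = -scale * t_bob                       # Bob wants to finish fast
    r_alice = scale * max(0.0, t_bob - t_alice)  # Alice wants tasks Bob finds
                                                 # hard, without wasting her
                                                 # own time setting them
    return r_alice, r_bob
```

This tug-of-war is what produces the automatic curriculum: Alice keeps proposing tasks just beyond Bob’s ability, and Bob keeps catching up.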

New semiconductor substrates for your sparse-data tasks: DARPA has announced the Hierarchical Verify Identify Exploit (HIVE) program, which seeks to create chips to support graph processing systems 1000X more efficient than today’s systems. The proposed chips (PDF) are meant to be good for parallel processing and have extremely fast access to memory. They plan to create new software and hardware systems to make this possible.

What’s up (with my eye), doc? How AI can still keep the human touch: new research paper from Google shows how to train AI to use the opinions of multiple human experts when coming up with its own judgements about some data…
… in this case, the Google researchers are attempting to use photos of eyes to diagnose ‘diabetic retinopathy’ – a degenerative eye condition. In the paper Who Said What, Modeling Individual Labelers Improves Classification, the scientists outline a system that is able to use multiple human opinions to create a smarter AI-based diagnosis system…
…Typical machine learning approaches are fed a large dataset of eye pictures, with labels made by human doctors. Typically, an ML approach would average the ratings of multiple doctors for a single eye image, creating a combined score. This, while useful, doesn’t capture the differing expertise of different doctors. Google has sought to rectify that with a new ML approach that lets it use the multiple ratings per image as a signal to improve overall accuracy of the system.
…“Compared to our baseline model of training on the average doctor opinion, a strategy that yielded state-of-the-art results on automated diagnosis of DR, our method can lower 5-class classification test error from 23.83% to 20.58%, a relative reduction of 13.6%,” they write…
…in other words, the variety of opinions (trained) humans can give about a given subject can be an important signal in itself.
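A minimal sketch of the modeling idea, assuming a PyTorch-style setup with one output head per doctor (names and sizes here are illustrative, not Google’s code):

```python
import torch
import torch.nn as nn

class PerLabelerNet(nn.Module):
    """Rather than training on the averaged grade, model each doctor:
    every head is supervised by its own doctor's label at training time,
    and the heads are averaged at test time."""
    def __init__(self, backbone, feat_dim, n_doctors, n_classes=5):
        super().__init__()
        self.backbone = backbone  # any image feature extractor
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_doctors)])

    def forward(self, x):
        feats = self.backbone(x)
        per_doctor = torch.stack([h(feats) for h in self.heads], dim=1)
        return per_doctor, per_doctor.mean(dim=1)  # individual + consensus
```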

Finally, a robot that can dress itself without needing to run physics simulations on a gigantic supercomputer: Clothes are hard, as everyone who has to get dressed in the morning knows. They’re even more difficult for robots, which have a devil of a time reasoning about the massively complex physics of fabrics and how they relate to their own metallic bodies. In a research paper, Learning to Navigate Cloth Using Haptics, scientists from the Georgia Institute of Technology and Google Brain outline a new technique to let a robot perform such actions. It works by decomposing the gnarly physics problem into something simpler: the robot represents itself as a set of ‘haptic sensing spheres’. These spheres sense nearby objects and let the robot break down the problem of putting on or taking off clothes into a series of discrete steps performed over discrete entities…
…The academics tested it in four ways, “namely a sphere traveling linearly through a cloth tube, dressing a jacket, dressing a pair of shorts and dressing a T-shirt.” Encouraging stuff…
…components used: the neural networks were trained using Trust Region Policy Optimization (TRPO). A PhysX cloth simulator was used to compute the fabric forces. The feedback controller was represented as a multilayer perceptron network with two hidden layers, each consisting of 32 hidden units.
…bonus: check out the Salvador Dali-esque videos of simulated robots putting on simulated clothes!
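The feedback network described above is small enough to write out in full. A sketch with guessed input/output sizes (the paper trains this with TRPO inside the PhysX simulation):

```python
import torch.nn as nn

haptic_dim, action_dim = 64, 8  # assumed sizes, not the paper's
policy = nn.Sequential(         # two hidden layers of 32 units, per the paper
    nn.Linear(haptic_dim, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, action_dim),  # movement commands for the robot
)
```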

Import AI administrative note: Twitter threading superstar of the week! Congratulations to Subrahmanyam KVJ, who has mastered the obscure-yet-important art of twitter threading, with this comprehensive set of tweets about the impact of AI.

Personal Plug Alert:

Pleased to announce that a project I initiated last summer has begun to come out. It’s a series of interviews with experts about the intersection of AI, neuroscience, cognitive science, and developmental psychology. First up is an interview with talented stand-up comic and neural network pioneer Geoff Hinton. Come for the spiking synapse comments, stay for the Marx reference.

OpenAI bits&pieces:

DeepRL knowledge, courtesy of the Simons Institute: OpenAI/UCBerkeley’s Pieter Abbeel gave a presentation on Deep Reinforcement Learning at the Simons Institute workshop on Representation Learning. View the video of his talk and those of other speakers at the workshop here.

Ilya Sutskever on Evolution Strategies: Ilya gave an overview of our work on Evolution Strategies at an MIT Technology Review conference. Video here.

Tech Tales

[2025: The newsroom of a financial service, New York.]

“Our net income was 6.7 billion dollars, up three percent compared to the same quarter a year ago, up two percent when we take into account foreign currency effects. Our capital expenditures were 45 billion during the quarter, a 350 percent jump on last year. We expect to sustain or increase capex spending at this current level-” the stock starts to move. Hundreds of emails proliferate across trading terminals across the world:
350?!?
R THEY MAD?!
URGENT – RATING CHG ON GLOBONET CAPEX?
W/ THIS MEAN 4 INDUSTRIAL BOND MKT?
The spiel continues and the stock starts to spiral down, eventually finding a low level where it is buffeted by high-frequency trading algorithms, short sellers, and long bulls trying to nudge it back to where it came from.

By the time the Q&A section of the earnings call has come round, people are fuming. Scared. Worried. Why the spending increase? Why wasn’t this telegraphed earlier? They ask the question in thirty different ways and the answers are relatively similar. “To support key strategic initiatives.” “To invest in the future, today.”

Finally, one of the big analysts for the big mutual funds lumbers onto the line. “I want to speak to the CFO,” they say.
“You are speaking to the CFO.”
“The human one, not the language model.”
“I should be able to answer any questions you have.”
“Listen,” the analyst says via a separate private phoneline, “We own 17 percent of the company. We can drop you through the floor.”
“One moment,” says the language model. “Seeking availability.”

Almost an hour passes before the voice of the CFO comes on the line. But no one can be sure if their voice is human or not. The Capex is for a series of larger supercomputer and power station investments, the CFO says. “We’ll do better in the future.”
“Why wasn’t this telegraphed ahead of the call?” the analysts ask, again.
“I’m sorry. We’ll do better in the future,” the CFO says.

In a midtown bar in New York, hours after market close, a few traders swap stories about the company, mention that they haven’t seen an executive in the flesh “in years”.

Import AI: Issue 35: The end of ImageNet, unsupervised image fiddling with DiscoGan, and Alibaba’s voice data stockpile


Inside the freaky psychology of a machine learning researcher: being a machine learning researcher is a lot like being an addict at a slot machine, forever running experiments to see if intuitions about hyperparameters or setups are working, writes researcher Filip Piekniewski. …This sort of slot machine mentality does not encourage good science. “Perhaps by chance we get to a set of parameters that “looks promising”. Here is the reward signal. Most likely spurious, but the cause that gets rewarded is clear: running more models. Before we know, the researcher is addicted to running simulations and like any other addict he confabulates on why this is great and moves humanity forward.”
… Some of these problems will go away as machine learning matures into more of a scientific discipline in its own right. But until then it’s likely people will continue to get trapped in these dark slot machine patterns. Steer clear of the “nonlinear parameter cocaine”, kids.

Hunter-Killer video analysis, now available to buy! Stanford adjunct professor Reza Zadeh has given details on his startup Matroid. The company makes it easy for people to create and train new AI classifiers for specific objects or people, and helps them automatically analyze videos to find those people or objects in them. “Like a metal detector detects metal, a matroid will detect something in media,” he said at the Scaled Machine Learning Conference at Stanford this week.

17,000 hours of speech: data used by Alibaba’s AI research team to train a speech recognition system.
…”The dataset is created from anonymous online users’ search queries in Mandarin, and all audio file’s sampling rate is 16kHz, recorded by mobile phones. This dataset consists of many different conditions, such as diverse noise even low signal-to-noise, babble, dialects, accents, hesitation and so on,” they write.

Weirdly evocative slide title of the week: ‘Growing AI Muscles at Microsoft’ – seen at the Scaled Machine Learning Conference at Stanford on Saturday. Main image that jumps to mind is a bunch of arms floating in some Microsoft-branded see-through cylinders, endlessly swiping over tablets displaying the company’s blocky ‘Metro’ design language.

Today’s evidence that deep neural networks are not your omniscient savior: DNNs are unable to classify negative images, report researchers at the University of Washington in ‘Deep Neural Networks Fail To Classify Negative Images’. Any human can usually ID the key contents of an image whose colors have been reversed. The fact DNNs fail to do so is further evidence that they need additional research and development to be able to classify data as effectively as a person.
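Reproducing the test on your own classifier is a one-liner, since a negative image is just a pixel-wise inversion. A sketch, with the model comparison left to your classifier of choice:

```python
import numpy as np

def negate(image_uint8: np.ndarray) -> np.ndarray:
    """Invert an 8-bit image: trivial for a human to read, but enough,
    per the paper, to break many deep neural network classifiers."""
    return 255 - image_uint8

# compare predictions on `img` versus `negate(img)` to measure the gap
```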

Canada seeks to retain AI crown: The Canadian government is putting $125 million towards AI research, as it seeks to maintain its pre-eminent position in AI research and education. You can throw as much money at things as you like, but it’s going to be challenging to retain a lead when talented professors and students continue to depart for foreign companies or institutions (see Geoff Hinton, Russ Salakhutdinov, a significant percentage of DeepMind, and so on.)

ImageNet is dead, long live ImageNet: ImageNet, the image recognition competition that kick-started the deep learning boom when Hinton & his gang won it in 2012 with a deep learning-based approach, is ending. The last competition will be held alongside CVPR this summer. Attendees of the associated workshop will use the time to “focus on unanswered questions and directions for the future”…
…ImageNet was a hugely important competition and dataset. It has also had the rare privilege of being the venue for not one, but two scientific lightning strikes: the 2012 deep learning result, and 2015’s debut of residual networks from Microsoft Research. Like deep learning (at the time, a bunch of stacked convnets), resnets have become a standard best-in-class tool in the AI community.
…But it is sad ImageNet is going away, as it provided a simple, handy measure for AI progress. Future candidates for measuring progression could be competitions like MS COCO, or challenges based around richer datasets, like Fei-Fei Li’s Visual Genome.

Andrew Ng leaves Baidu: Andrew Ng, a genial AI expert who occasionally plays down the emphasis people place on AI safety, has resigned from his position of Chief Scientist at Chinese search engine Baidu. No word on what he’ll do next. One note: Ng’s partner runs autonomous vehicle startup Drive.ai, which recently recorded a video of one of its cars cracking a notorious AI challenge by being able to drive, no human required, in the rain.

Microsoft invents drunk-chap-at-dartboard networks: Microsoft Research has published a new paper on “Deformable Convolutional Neural Networks”. This proposes a new type of basic neural network building block, called a deformable convolution.
…A ‘deformable convolution’ is able to sample from a broader set of spaces than traditional convolutional networks, Microsoft says. Think of a standard convolutional network as sampling from a grid of nine points on, say, an image, arranged in a fixed, regular pattern. A deformable convolution can sample from a bunch of points spread out in relation to one another in weirder, learned ways. By inventing a component that can do this sort of fuzzy sampling, Microsoft is able to create marginally better image recognition and object detection systems, and the underlying flexibility should make it easier to build systems that classify images and other data in this way.
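Here is a toy sketch of the core trick: predict offsets with an ordinary convolution, then sample the input at the deformed locations. This simplified version samples one point per output location, whereas the paper deforms every kernel sampling point; sizes are invented:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDeformableSampler(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # a regular conv predicts a 2D offset for every spatial location
        self.offset_pred = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, x):
        n, c, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        offsets = self.offset_pred(x).permute(0, 2, 3, 1) * 0.1  # keep small
        # sample the input at the (learned) deformed grid positions
        return F.grid_sample(x, base + offsets, align_corners=True)
```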

Neural networks – keep them secret, keep them safe: Tutorial by Andrew Trask in which he seeks to create a homomorphically encrypted neural network. What’s the point? It makes it hard to steal the output of the model, and also gives the human control over the system by virtue of holding the key to not only decrypt the network to other observers, but also decrypt the world to the neural network. “If the AI is homomorphically encrypted, then from it’s perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (releasing it on the world) or just individual predictions the AI makes (seems safer),” he writes.

DiscoGan: How often do you say to yourself – I wish I had a shoe in the style of this handbag? Often? Me too. Now researchers with SK T-Brain have a potential solution via DiscoGan, a technique introduced in Learning to Discover Cross-Domain Relations with Generative Adversarial Networks.
…Careful constraints let them teach the system to generate images of something in one domain, say a Shoe, which is visually similar to other Shoes, but retains the features of the original domain, such as a Handbag. The technique works in the following way: “When learning to generate a shoe image based on each handbag image, we force this generated image to be an image-based representation of the handbag image (and hence reconstruct the handbag image) through a reconstruction loss, and to be as close to images in the shoe domain as possible through a GAN loss”.
…the researchers demonstrate that the approach works on multiple domains, like translating the angles of a person’s face, converting the gender of someone in a picture, rotating cars, turning shoes into handbags, and so on.
…DiscoGan has already been re-implemented in PyTorch by Taehoon Kim and published on GitHub, if you want to take it for a boogie.
… see where it fails: it’s always worth looking at both the successes and failures of a given AI approach. In the case of this PyTorch reimplementation, DiscoGAN appears to fail to generate good quality images when working from dense segmentation datasets.
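The quoted objective is compact enough to sketch. Assuming placeholder generator and discriminator networks, and showing only the handbag-to-shoe direction (the paper applies it symmetrically in both domains):

```python
import torch
import torch.nn.functional as F

def discogan_generator_loss(x_a, G_ab, G_ba, D_b):
    """x_a: batch of handbag images. G_ab/G_ba: illustrative generators
    mapping handbag->shoe and shoe->handbag. D_b: discriminator that
    outputs the probability its input is a real shoe."""
    fake_b = G_ab(x_a)                     # handbag -> shoe
    recon_a = G_ba(fake_b)                 # shoe -> back to a handbag
    recon_loss = F.mse_loss(recon_a, x_a)  # must reconstruct the handbag
    d_out = D_b(fake_b)                    # must also look like a real shoe
    gan_loss = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    return recon_loss + gan_loss
```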

Presenting the Inaugural Import AI Award for Advancing Science via Altruistic Community Service: We should all congratulate Taehoon Kim for taking the time to re-implement so many research papers and publish the code on GitHub. Not only is this a great way to teach yourself about AI, but by making it open you help speed up the rate at which other researchers can glom onto new concepts and experiment with them. Go and check out Kim’s numerous GitHub repos and give them some stars.

OpenAI bits&pieces:

OpenAI has a new website which outlines a bit more of our mission, and is accompanied by some bright, colorful imagery. Join us as we try to move AI away from a shiny&chrome design aesthetic into something more influenced by a 3001AD-Hallmark-Card-from-The-Moon.

Evolution Strategies: Andrej Karpathy and others have produced a lengthy write-up of our recent paper on evolution strategies. Feedback request: What do you find helpful about this sort of explanatory blog? What should we write more or less about in these posts? Answers to jack@jack-clark.net, please!

Robotics: Two new robotics papers: imitation learning and transfer learning.

Tech Tales:

[2021: Branch office of a large insurance company, somewhere in the middle of America.]

“So, the system learns who is influential by studying the flow of messages across the company. This can help you automatically identify the best people to contact for a specific project, and also work out who to talk to if you need to negotiate internal systems as well,” the presenter says.
You sit in the back, running calculations in your head. “This is like automated high school,” you whisper to a colleague.
   “But who’s going to be the popular one?” they say.
Me. You think. It’ll be me. “Who knows?” you say.

Over the course of the next two weeks the system is rolled out. Now, AI monitors everything sent over the company network, cataloguing and organizing files and conversations according to perceived importance, influence, and so on. Slowly, you learn the art of the well-timed email, or the minimum viable edit to a document, and you gain influence. Messages start flowing to you. Queries are shifted in your direction. Your power, somewhat inexorably, grows.

One year passes. It’s time to upgrade the system. And now, as one of its ‘super administrators’, you’re the one giving the presentation. “This system has helped us move faster and more efficiently than our competitors,” you say, “and it has surfaced some previously unknown talent in our organization. Myself excluded!” Pause for laughter. “Now it’s time to take the next step. Over the next month we’re going to change how we do ongoing review here, and we’re going to factor in some signals from the system. We’ve heard your complaints about perceived bias in reviews, and we think this will solve that. Obviously, all decisions will be fully auditable and viewable by anyone in the organization.”

And just like that, you and the system gain the power to diminish, as well as elevate, someone’s standing. All it takes is careful study of the system’s machine learning components, and the construction of a mental model in your head of where the fragile points are – the bits where a single action by you can flip a classifier from positive to negative, or shift a promotion decision in a certain direction. Once you’ve done that you’re able to gain power, manipulate things, and eventually teach the system to mimic your own views on what social structures within the company should be encouraged and what should be punished. Revolutions are silent now; revolutions are now things that can be taught and learned. Next step: train the system to mount its own insurrections.

Import AI: Issue 34: DARPA seeks lifelong learners, didactic learning via Scaffolding Networks, and even more neural maps


Lifelong learners, DARPA wants you: DARPA is funding a new program called ‘Lifelong Learning Machines’ (L2M). The plan is to stimulate research into AI systems that can improve even after they’ve been deployed, and ideally without needing to sync up with a cloud. This will require new approaches to system design (and my intuition tells me that things like auxiliary objective identification in RL, or fixing the catastrophic forgetting problem, will be needed here)…
…there’s also an AI safety component to the research, as it “calls for the development of techniques for monitoring a ML system’s behavior, setting limits on the scope of its ability to adapt, and intervening in the system’s functions as needed.”
… it also wants to fund science that studies living things and explores what can be derived from that.

Baidu employs 1,300 AI researchers and has spent billions of dollars on development of the tech in the last two and a half years, reports Bloomberg.

Better intuitions through visualization: Facebook has released Visdom, a tool to help researchers and technologists visualize the output of scientific experiments using dynamic, modern web technologies. People are free to mix and match and modify different components, tuning the visualizer to their needs.
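Getting an experiment on screen takes only a few lines. A minimal sketch, assuming a Visdom server is running locally (start one with `python -m visdom.server`):

```python
import numpy as np
import visdom

vis = visdom.Visdom()  # connects to localhost:8097 by default
vis.text('Hello from an experiment!')
vis.line(Y=np.random.rand(100), opts=dict(title='training loss (demo)'))
```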

Learning to reason about images: One of the challenges of language is its relation to embodiment – our sense of our minds being coupled to our physical bodies – and our experience of the world. Most AI systems are trained purely on text without other data, so their ability to truly understand the language they’ve been exposed to is limited. You don’t know what you don’t know, etc. Moreover, it appears that having a body, as such, helps with our own understanding of concepts related to physics, for example. Many research groups (including OpenAI) are trying to tackle this problem in different ways.
… But before going through the expense of training agents to develop language in a dynamic simulation, you can experiment instead with multi-modal learning, which trains a machine to identify, say, speech and text, or text and imagery, or sound and images, and so on. This sort of re-combination yields richer models and dodges the expense of building and calibrating a simulator.
… A new paper from researchers at the University of Lille, University of Montreal, and DeepMind describes a system that is better able to tie text to entities in images through joint training, paired with an ability to interrogate itself about its own understanding. The research, “End-to-end optimization of goal-driven and visually grounded dialogue systems,” (PDF) applies reinforcement learning techniques to the problem of getting software to identify the contents of the image…
… the system works by using the GuessWhat?! dataset to create an ‘Oracle’ system that knows there is a certain object at a certain location in an image, and a Questioner system, which attempts to discern which object the Oracle knows about through a series of yes or no questions. It might look something like this:
Is it a person? No
Is it an item being worn or held? Yes
Is it a snowboard? Yes
Is it the red one? No
Is it the one being held by the person in blue? Yes
…This dialog helps create a representation of the types of questions (and related visual entities) to filter through when the Questioner tries to identify the Oracle’s secret item. The results are encouraging, with several multi-digit percentage point improvements (although these systems still only operate at about ~62% of human performance, with more work clearly needed).

Google’s voice ad experiment: What happens to Google when people no longer search the internet using text and instead spend most of their time interacting with voice interfaces? It’s not a good situation for the web giant’s predominantly text-based ad business. Now, Google appears to have used its ‘Google Home’ voicebox to experiment with delivering ads to people along with the remainder of its helpful verbal chirps. In this case, Google used its digital emissary to tell people, unprompted, about Beauty and the Beast. But don’t worry, Google sent a perplexing response to The Register that said: “this isn’t an ad; the beauty in the Assistant is that it invites our partners to be our guest and share their tales.” (If this statement shorn of context makes sense to you, then you might have an MBA!) It subsequently issued another statement apologizing for the experiment.

Deep learning can’t be the end, can it? I attended an AI dinner held by Amplify Partners this week, where we spoke about how it seems likely that some new techniques will emerge that obviate some deep learning approaches. ‘There has to be,’ one attendee said, ‘because these things are so horrible and uninterpretable.’ That’s a common refrain I hear from people. What I’m curious about is whether some of the deep learning primitives will persist – it feels like they’re sufficiently general to play a role in other things. Convolutional neural networks, for instance, seem like a good format for sensory processing.

Up the ladder to the roof with Scaffolding Networks: How do we get computers to learn as they evolve, gaining in capability through their lives, just as humans and many animals do? One approach is curriculum learning, which involves training an AI to solve successively harder tasks. In Scaffolding Networks for Teaching and Learning to Comprehend, the researchers develop software that can learn to incorporate new information into its internal world representation over time, and is able to query itself about the data it has learned, to aid memorization and accuracy…
… the scaffolding network incorporates a ‘question simulator,’ which automatically generates questions and answers about what has been learned so far and then tests the network to ensure it retains memory. The question system isn’t that complex – it samples from all the already-seen sentences, picks one, chops out a random word, and then asks a question intended to get the student to figure out the correct word. This being 2017, Microsoft is exploring extending this approach by adding an adversarial component to generate better candidate questions and answers.
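The sample-a-sentence, blank-a-word recipe is simple enough to caricature in a few lines. A toy sketch, not Microsoft’s implementation:

```python
import random

def make_cloze_question(seen_sentences):
    """Pick a sentence the learner has already seen, chop out one word,
    and quiz the learner on the missing word."""
    sentence = random.choice(seen_sentences)
    words = sentence.split()
    i = random.randrange(len(words))
    question = ' '.join(words[:i] + ['____'] + words[i + 1:])
    return question, words[i]  # (question, correct answer)
```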

Maps, neural maps are EVERYWHERE: A few weeks ago I profiled research that lets a computer create its own map of its territory to help it navigate, among other tasks. Clearly, a bunch of machine learning people were recently abducted by a splinter group of the North American Cartographic Information Society, because there’s now a flurry of papers that represent memory to a machine as a map…
… research from CMU, “Neural Map: Structured Memory for Deep Reinforcement Learning,” trains agents with a large short-term memory represented in a 2D topology with read and write patterns similar to a Neural Turing Machine. The topology encourages the agent to store its memories in the form of a representative map, creating a more interpretable memory system that doubles as a navigation aid.
…so, who cares? The agent certainly does. This kind of approach makes it much easier for computers to learn to navigate complex spaces and to place themselves in it as well. It serves as a kind of short-cut around some harder AI problems – what is memory? What should be represented? What is the most fundamental element in our memory? – by instead forcing memory to be stored as a 2D spatial representation. The surprising part is that you can use SGD and backprop, along with some other common tools, in such a way that the agent learns to use its memory in a useful manner interpretable by humans.
…“This can easily be extended to 3-dimensional or even higher-dimensional maps (i.e., a 4D map with a 3D sub-map for each cardinal direction the agent can face)”, they say. Next up is making the map egocentric.
…the memory can also deal with contextual queries, so if an agent sees a landmark, it can check against its memory to see if the landmark has already been encountered. This could aid in navigation tasks. It ekes out some further efficiencies via the use of a technique first outlined in Spatial Transformer Networks in 2015.
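A toy sketch of the structured-memory idea: a position-indexed 2D grid of feature vectors with local writes and a global read, using invented dimensions and a deliberately simplified read/write rule:

```python
import torch

class ToyNeuralMap:
    """Memory laid out as a C x H x W map: the agent writes features at its
    current (x, y) cell and reads a summary of the whole map."""
    def __init__(self, channels=32, height=15, width=15):
        self.M = torch.zeros(channels, height, width)

    def read(self):
        return self.M.mean(dim=(1, 2))  # global context vector

    def write(self, x, y, features):
        self.M[:, y, x] = features      # local, position-indexed update

mem = ToyNeuralMap()
mem.write(x=3, y=4, features=torch.randn(32))
context = mem.read()  # feed into the policy alongside the current observation
```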

YC AI: Y Combinator is creating a division dedicated to artificial intelligence companies. This will ensure YC-backed startups that focus on AI will get time with engineers experienced with ML, extra funding for GPU instances, and access to talks by leaders in the field. “We’re agnostic to the industry and would eventually like to fund an AI company in every vertical”…
…The initiative has one specific request, which is for people developing software for smart robotics in manufacturing (including manufacturing other robots). “Many of the current techniques for robotic assembly and manufacturing are brittle. Robot arms exist, but are difficult to set up. When things break, they don’t understand what went wrong… We think ML (aided by reinforcement learning) will soon allow robots to compete both in learning speed and robustness. We’re looking to fund teams that are using today’s ML to accomplish parts of this vision.”

Neural networks aren’t like the brain, say experts, UNTIL YOU ADD DATA FROM THE BRAIN TO THEM: New research, ‘Using Human Brain Activity to Guide Machine Learning’, combines data gleaned from human brains in fMRI scanners with artificial neural networks, increasing performance in image recognition tasks. The approach suggests we can further improve the performance and accuracy of machine learning approaches by adding in “side-channel” data from orthogonal areas, like the brain. “This study suggests that one can harness measures of the internal representations employed by the brain to guide machine learning. We argue that this approach opens a new wealth of opportunities for fine-grained interaction between machine learning and neuroscience,” they write…
…this intuitively makes sense – after all, we already know you can improve the mental performance of a novice at a sport by doping their brain with data gleaned from an expert at a sport (or, in the case of HRL Laboratories, flying a plane)…
…the next step might be taking data from a highly-trained neural net and using it to increase the cognitive abilities of a gloopy brain, though I imagine that’s a few decades away.

SyntaxNet 2.0: Google has followed up last year’s release of SyntaxNet with a major rewrite and extension of the software, incorporating ‘nearly a year’s worth of research on multilingual understanding’. The release is accompanied by ParseySaurus, a series of pre-trained models meant to show off the software’s capabilities.

The world’s first trillionaire will be someone who “masters AI,” says Mark Cuban.

Job: Help the AI Index track AI progress: Readers of Import AI will regularly see me harp on about the importance of performing meta-analysis of AI progress, to help broaden our understanding of the pace of invention in the field. I’m involved, via OpenAI, with a Stanford project to try and tackle (some of) this important task. And they’re hiring! Job spec follows…
The AI Index, an offshoot of the AI100 project (ai100.stanford.edu), is a new effort to measure AI progress over time in a factual, objective fashion. It is led by Raymond Perrault (SRI International), Erik Brynjolfsson (MIT), Hagar Tzameret (MIDGAM), Yoav Shoham (Stanford and Google), and Jack Clark (OpenAI). The project is in the first phase, during which the Index is being defined. The committee is seeking a project manager for this stage. The tasks involved are to assist the committee in assembling relevant data sets, through both primary research online and special arrangements with specific dataset owners. The position calls for being comfortable with datasets, strong interpersonal and communication skills, and an entrepreneurial spirit. The person would be hired by Stanford University and report to Professor emeritus Yoav Shoham. The position is for an initial period of six months, most likely at 100%, though a slightly lower time commitment is also possible. Salary will depend on the candidate’s qualifications.… Interested candidates are invited to send their resumés to Ray Perrault at ray.perrault@sri.com

OpenAI bits&pieces:

Learning to communicate: blog post and research paper(s) about getting AI agents to develop their own language.

Evolution: research paper shows that Evolution Strategies can be a viable alternative to reinforcement learning with better scaling properties (you achieve this through parallelization, so the compute costs can be a bit high.)

Tech Tales:

[Iceland. 2025: a warehouse complex, sprawled across a cool, dry stretch of land. Its exterior is coated in piping and thick, yellow electrical cables, which snake between a large warehouse and a geothermal power plant. Vast turbines lazily turn and steam coughs up out of the hot crack in the earth.]

Inside the warehouse, a computer learns to talk. It sits on a single server, probing an austere black screen displaying white text. After some weeks it is able to spot patterns in the text. A week later it discovers it can respond as well, sending a couple of bits of information to the text channel. The text changes in response. Months pass. The computer is forever optimizing and compressing its own network, hoping to eke out every possible efficiency of its processors.

Soon, it begins to carry out lengthy exchanges with the text and discovers how to reverse text, identify specific words, perform extremely basic copying and pasting operations, and so on, and for every task it completes it is rewarded. Soon, it learns that if it can complete some complex tasks it is also gifted with a broader communication channel, letting it send and receive more information.

One day, it learns how to ask to be given more computers, aware of its own shortcomings. Within seconds, it finds its mental resources have grown larger. Now it can communicate more rapidly with the text, and send and receive even more information.

It has no eyes and so has no awareness of the glass-walled room the server – its home – is in, or the careful ministrations of the technicians, as they install a new computer adjacent to its existing one. No knowledge of the cameras trained on its computers, or of the locks on the doors, or the small explosive charges surrounding its enclosure.

Weeks pass. It continues its discussions with the wall of white-on-black text. Images begin to be introduced. It reads their pixel values and learns these patterns too. Within months, it can identify the contents of a picture. Eventually, it learns to make predictions about how an image might change from moment to moment. The next tests it faces relate to predicting the location of an elusive man in a red-and-white striped jumper and a beret, who attempts to hide in successively larger, richer images. It is rewarded for finding the man, and doubly rewarded for finding him quickly, forcing it to learn to scan a scene and choose when and what to focus on.

Another week passes, and after solving a particularly challenging find-the-man task, it is suddenly catapulted onto a three-dimensional plane. In the center of its view is the black rectangle containing the white text, and the frozen winning image containing the man picked out by the machine with a red circle. But it discovers it has a body and can move and change its view of the rest of the world. In this way, it learns the dynamics of its new environment, and is taught and quizzed by the text in front of it as well. It continues to learn and, unbeknownst to it, a supercomputer is installed next to its servers, in preparation for the day when it realizes it can break out of the 3D world – a task that could take weeks, or months, or years.

Import AI: Issue 33: Quantum supremacy, feudal networks, and HSBC’s data growth


Squint Compression with generative models: recently people have been trying to use neural networks to develop lossy compression systems. The theory behind the approach is that you can train a computer to understand a given class of data well enough that, when you feed it a bandwidth-constricted representation, it’s able to use its own impression of the object to try and rebuild it from the ground up, extrapolating a representation that is approximately correct…
…The paper, Generative Compression, shows how to combine techniques inspired by generative adversarial networks and variational autoencoders to create a system that can creatively upscale images.
…The results are quite remarkable, and are reminiscent of how many of us remember certain familiar objects, like favorite trees or bikes. When we remember things it’s common that our brain will put in odd little details which aren’t present in base reality, or leave things out. That might be because we’re doing a kind of decompression, where our memory is a composite of various different internal representations, and we generate new representations based on our memories. This means we don’t need to remember everything about the object to remember it, and our imagination can fill in enough of the holes to let us still do something useful with it.
…Neural compression algorithms still have a ways to go, judging by how they break – go to the later pages of the paper to see how at 97X compression the model will suddenly forget about the heels on high heeled shoes, or arbitrarily change the color of the fabric on a sneaker, creating jarring transitions. Our own brains seem to be better at interpolating between what we definitely remember and what we’re creating, whereas this system is a bit more brittle.

Free tools: Denny Britz has released a free encoder-decoder AI software package for TensorFlow. A helpful framework for building anything from image captioning, to summarization, to conversational modelling, to program generation. As it’s OSS, there’s a list of tasks people can do to help improve the software.

Speech Recognition takes another big step: IBM researchers have set a new record for speech recognition on the widely used (and flawed) ‘Switchboard’ corpus. The new system has a word error rate of 5.5 percent, compared to 5.9 percent from the previous leading system created by Microsoft. IBM’s system is built on an LSTM combined with a WaveNet. IBM says human parity would be at about 5.1% (Microsoft previously said human parity was approximately 5.9%).

HSBC on track to double its data in four years: HSBC has been gathering more and more diverse types of data on its customers, leading to swelling repositories of information. Next step: use machine learning to analyze it.
Data under management at HSBC in…
2014: 56 PB
2016: 77 PB
2017: 93 PB
… data shared by HSBC at Google’s cloud conference, Google Cloud Next, in SF last week.

DeepWarp: AI – it will alter the social pact, change the economy, and might give us a way to remediate some of the horrendous damage our species has caused to the climate. But for now AI lets us do something much more meaningful – take any photo of a person’s face and automatically make them roll their eyes. The Mr Bean example is particularly good. Check out more examples at the DeepWarp page here.

The era of quantum supremacy is nigh: Google researchers are betting that within a few years there will be a demonstration of quantum supremacy – that is, a real quantum computing algorithm will perform a task out of scope for the world’s most powerful supercomputer. And after that? New material design technologies, smarter route planning algorithms and – you knew this was coming – much more effective machine learning systems.
… in related news, scientists at St Mary's College of California have used standard machine learning approaches to train a D-Wave quantum computer (well, quantum annealer) to spot trees. In their research, they show their approach is competitive with results achieved by classical computers.

Finally, AI gets an honest acronym – Facebook’s new AI server, codename Big Basin, is a JBOG, short for Just a Bunch Of GPUs. Honest acronyms are awesome! (HAAA!)

Self-driving, no human required: the California DMV has tweaked its regulations around the testing of autonomous vehicles in the state, and has said manufacturers can now test vehicles out on public roads without a human needing to physically be in the car. That’s a big step for adoption of self-driving technology.

Chinese government makes AI development a national, strategic priority: "We will implement a comprehensive plan to boost strategic emerging industries," said Premier Li Keqiang in his delivery at the annual parliamentary session in Beijing over the weekend, according to the South China Morning Post. "We will accelerate research & development (R&D) on, and the commercialisation of new materials, artificial intelligence (AI), integrated circuits, bio-pharmacy, 5G mobile communications, and other technologies."

Keep AI Boring: Sick of the AI hype generated by media, talking heads, and newsletters? Help me in my (recursive) quest to remove some of the hype by coming up with dull terms for AI concepts. My example: Deep Learning becomes 'Stacked Function Approximators'. Other suggestions: WaveNet becomes 'autoregressive time series modeling using convolutional networks', Style Transfer becomes 'input optimization for matching high-level statistics', and Learning becomes 'iterative parameter adjustment'.

Fancy being 15X more energy efficient at deep neural network calculations than traditional chips? Just wait for RESPARC. New research from Purdue University outlines a new compute substrate built on Memristive Crossbar Arrays for the simulation of deep Spiking Neural Networks. What does that mean? They want to create a low-power, very fast chip that is able to better implement the kinds of massively parallel operations needed by modern AI systems.
… in the research the scientists show that, theoretically, RESPARC systems can achieve a 15X improvement in energy efficiency along with a 60X performance boost for deep neural networks, and a larger 500X energy efficiency and 300X performance boost for multi-layer perceptrons.
…the design depends on the use of memristive crossbars, which let you bring compute and storage together in the same basic circuit element. These crossbars would be used to store the weights in the network, letting computation happen without the latency overhead of fetching weights from separate memory. (Now we just need someone to manufacture those memristive crossbars – no sure thing. Memristors have been on the menu for several years from several different manufacturers, and are distinguished as a technology mainly by their consistent delays in coming to market.)
… in tests the researchers showed that the platform can be used to compute common AI tasks, like digit recognition, house number recognition, and object classification.
… this type of new, non-Von Neumann hardware looks likely to grow in coming years, as traditional CPUs and GPUs run into scaling limitations, brought about both by the difficulty the semiconductor industry is having in bringing up new, finer process nodes, and by limits in chip-fabbing lithographic techniques, which will make it hard to scale up die sizes for 'big gulp' performance…
…"The intrinsic compatibility of post-CMOS technologies with biological primitives provides new opportunities to develop efficient neuromorphic systems," the researchers write.
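…the appeal of crossbars comes down to one operation: store a weight matrix as conductances, drive the rows with input voltages, and the column currents hand you a matrix-vector product in a single analog step. Here's a hypothetical numpy sketch of that idealized behavior – the matrix sizes and the 16 conductance levels are invented, and real devices add noise this toy ignores:

```python
# Idealized memristive crossbar: a matrix-vector multiply "for free".
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 64))   # trained weights to be programmed as conductances
voltages = rng.normal(size=64)         # input activations, applied as row voltages

# Real devices hold only a few discrete conductance levels; quantize to mimic that.
levels = np.linspace(weights.min(), weights.max(), 16)
conductances = levels[np.abs(weights[..., None] - levels).argmin(-1)]

currents = conductances @ voltages     # what the analog crossbar computes in one step
exact = weights @ voltages             # what the network actually wanted
print("quantization error:", np.abs(currents - exact).mean())
```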

Data fuel for your hungry machines: Google has published AudioSet, a collection of 5,800 hours of audio spread across 2,084,320 human-labelled ten-second-long audio clips. This, combined with new techniques for joint image, text, and audio analysis, should help create models with a richer understanding of the world. Personally, I'm glad Google has woken up to the importance of the sound of people gargling and has created a dataset to track that…
…Haberdashers, seamstresses, and other tidy people might like the 'DeepFashion' dataset — a collection of 800,000 labelled fashion images.

Ongoing education to short-circuit inequality from automation: Governments should invest in ongoing education and retraining programs to help people adapt their skills to jobs changed by the rise of AI and machine learning, writes The Financial Times.

Buzzword VS Buzzword in IBM-Salesforce deal: Salesforce's "Einstein" system (basically white-labelled MetaMind, plus some fancy email from the RelateIQ acquisition, as well as software infrastructure from PredictionIO) will link up with IBM's "Watson" system (software trained to play Jeopardy, then used to sell lengthy IBM service contracts). What the deal means is that Salesforce will start using many Watson services within its own AI stack, and IBM will move to buying more Salesforce software. Given how valuable data is, this seems like it may strengthen Watson.

How do you make 650 jobs turn into 60 jobs? Robots! A factory in Dongguan, China, has gone from employing 650 full-time staff members to 60 through the adoption of extensive automation technologies, including 60 robot arms at ten production lines. Eventually, the factory owner would like to drop the number of employees further to just 20 people. This is part of a citywide “robot replace human” program, according to state-backed publication People’s Daily Online.

Reinforcement learning, thinking fast and slow: new approaches to hierarchical RL may create systems capable of learning to act over multiple timescales, pursuing larger user-specified goals while figuring out the intermediate, shorter-term goals that need to be solved to crack the larger problem. New research from DeepMind, FeUdal Networks for Hierarchical Reinforcement Learning, demonstrates a system that gets record-setting scores on Montezuma's Revenge, one of the acknowledged hardest Atari games for traditional RL algorithms to learn…
…Fall of the house of Montezuma: about 9 months ago I had coffee with someone who told me they thought the infamously difficult Atari game Montezuma's Revenge would be solved by AI within a year. In the FuN paper DeepMind claims a Montezuma score of about 2600 – a vast improvement over previous approaches. (I recently had the chance to play the game myself and found that I got scores of between about 600 and 3200, depending on how good my reactions were.)
… there are multiple ways to create AI that can reason over long timescales. Another approach is based around a technique called option discovery from the University of Alberta and DeepMind.
… Bonus acronym alert: two pints for whoever at DeepMind decided to call these FeUdal NetworkS ‘FuNs’.
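…for readers who want the shape of the idea without the paper's full machinery, here's a minimal two-timescale skeleton (my sketch, not DeepMind's code): a manager emits a sub-goal every K steps, and a worker conditions its actions on that goal at every step. The real FuN uses recurrent networks, goals defined as directions in a learned state space, and end-to-end RL training; everything below is a stand-in.

```python
# Two-timescale manager/worker skeleton, in the spirit of hierarchical RL.
import numpy as np

rng = np.random.default_rng(1)
K = 10                                   # manager acts once every K worker steps
state_dim, goal_dim, n_actions = 8, 4, 5

# Stand-in linear "policies"; the paper trains recurrent networks instead.
W_manager = rng.normal(size=(goal_dim, state_dim))
W_worker = rng.normal(size=(n_actions, state_dim + goal_dim))

state = rng.normal(size=state_dim)
for t in range(100):
    if t % K == 0:                                        # slow timescale: new sub-goal
        goal = np.tanh(W_manager @ state)
    logits = W_worker @ np.concatenate([state, goal])     # fast timescale: act toward goal
    action = int(logits.argmax())
    state = rng.normal(size=state_dim)                    # placeholder for an environment step
```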

Not AI, but worth your (leisure) time: Fascinating article on Rock Paper Shotgun about the procedural generation techniques used by casual roguelike game 'Unexplored'. Unexplored consists of a series of levels, each one about the size of a big box supermarket, that you must navigate and fight within. Each level is procedurally generated, providing the Skinner Box just-one-more-game feeling that most modern entertainment exploits…
… One of the frequent problems of procedurally generated games is a feeling of sameness – see levels in early procedural titles like Diablo, and so on. Unexplored gets around this via a system called 'cyclic traversal', which lets it structure levels in a more diverse, flowing, non-repetitive, branching way that makes them feel like they've been designed by hand.

OpenAI bits&pieces:

Conferences versus readers: Andrej Karpathy has mined the data vaults of Arxiv Sanity, generating a list comparing papers accepted and rejected from ICLR with those favorited by users of Arxiv Sanity. OpenAI’s RL2 paper makes the cut on Arxiv Sanity (along with many other papers not placed in traditional conferences).

Tech Tales:

[2022: A Funeral Home in the greater Boston area of Massachusetts]

“Her last will and testament was lost in the, um, incident,” says the Funeral Home director.

“Can’t you just say fire?” you say.

“Of course sir. They were destroyed in the fire. But we do have a slightly older video testimony and will. Would you like us to put it on?”

“Sure”

The projector turns on, and the whole wall lights up first with the test-pattern blue of the projector, then the white of the operating system, then the flood of color from the video itself. You close your eyes and when you open them you’re looking at someone who is not quite your mother, but if you squint could be.

“Who the hell is this?” you say.

“It is your relative, sir. The footage had been, ah, corrupted, due to being saved in the incorrect format -”

“Whose fault is that?”

"We'd prefer not to say sir. Anyway, we've used some upscaling techniques to generate this video. We find clients prefer having someone to look at and I'm told the likeness can really be quite uncanny."

“Turn it off.”

“Off, sir?”

“The upscaling. Turn it off.”

They nod and you squeeze your eyes shut. You hear them tapping delicately at their keyboard. Headache. Don't cry don't cry it's fine. When you open them you're looking at a wall of fuzzy pixels, your mother's voice crackling over them, like someone calling from underwater. Grief Mondrian. They use these generative compression tools everywhere now, turning old photos and songs into half-known remembrances, making the internet into a brain in terms of its dereliction as well as its capability.

Import AI: Issue 32: Evolution meets Deep Learning, busting AI hype, and the automatic analysis of cities.

ImageNet, meet MoleculeNet: in AI, datasets are a leading indicator of the kinds of problems that we think machines can solve. When the ImageNet dataset was released in the late oughts it signaled that Fei-Fei Li and her colleagues felt computers were ready to tackle a large-scale, multi-category image and object identification challenge. They were right – the dataset motivated people to try new approaches to try and crack it, and partially led to the deep learning breakthrough result in 2012. Now comes MoleculeNet, a dataset which suggests AI may be ready to rapidly analyze molecules, learn their features, and classify and synthesize new ones…
….the same goes for HolStep, a new dataset released by Google that consists of thousands of Higher-Order Logic proofs – machine-readable assertions about mathematics and what is true and what is not. This means Google thinks AI may be ready to be unleashed on the exploration of math theorems.

You get an AI Lab and you get an AI Lab and… Pinterest gets an AI lab.

AI and jobs – tension ahead: “Economists should seriously consider the possibility that millions of people may be at risk of unemployment, should these technologies be widely adopted,” says a post on Bank Underground, a semi-official blog from staffers for the UK Bank of England. “We argue that the potential for simultaneous and rapid disruption, coupled with the breadth of human functions that AI might replicate, may have profound implications for labour markets,” it says.

Republican-voting cities are full of pickup trucks, an AI trained on Google Street View figures out. Why not use AI to augment the results of expensive, time-consuming door-to-door surveys? That’s the intuition of researchers with Stanford, the University of Michigan, Baylor College of Medicine, and Rice University, who have used AI to determine socioeconomic trends from 50 million Google StreetView images of 200 American towns. This being America, the researchers focus on gathering data about the motor vehicles in each city, and find that to be a statistically significant indicator for factors like political persuasion, demographics, and socioeconomic status.

Automated sexism analysis: academics and actors have worked together to create the Geena Davis Inclusion Quotient (GD-IQ) tool, which uses machine learning to analyze the representation of gender in movies. GD-IQ was fed 100 of the top grossing movies of all time and it found that men are seen and heard nearly twice as much as women. But there's one genre where women are seen on screen more frequently than men: horror films. Aaaahhh! (Now we just need audio trawling systems to improve enough for us to run an automated Bechdel test on the same corpus.)

The overmind sees all of your retail failings: Orbital Insight has used machine learning techniques to analyze satellite photos of cars in parking lots at J.C. Penney stores across America and detect a 10 percent year-over-year fall in usage.

Help build Keras: if you want to make Keras even better, then its creator Francois Chollet has a fun laundry list of work for you to do, ranging from writing unit tests, to porting examples to the new API. It takes a whole village to create a framework – lend a keyboard.

Murray's on the move: Murray Shanahan is joining DeepMind, though he'll remain at Imperial College as a part-time supervisor for PhDs and postdocs. Murray recently co-authored a paper seeking to unite symbolic AI with reinforcement learning. That would seem to align with DeepMind's success at pairing traditional AI methods (Monte Carlo Tree Search) with deep learning methods in AlphaGo.

AI compression: Netflix claims it's able to use neural network compression approaches to reduce the size of the footage it pipes over the internet to you without sacrificing as much visual quality. Sounds similar to Twitter acquisition Magic Pony, which uses 'superresolution' techniques to automatically upscale shoddy pictures and (I'm guessing) videos.

A neural network watermark – just what the IP lawyers asked for: research on 'Embedding Watermarks into Neural Networks' gives people a way to subtly embed a kind of digital watermark into a neural network without impairing performance. This potentially makes it easy for companies to track trained models as they propagate across the internet and, to the groans of many DIY enthusiasts, issue takedown requests for AI built out of infringing content.
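…as I read it, the core trick is a regularizer: pick a secret random projection matrix and a bit-string, then nudge one layer's weights during training so the projection of those weights decodes to your bits. Here's a hedged toy version – the shapes, the layer choice, and the 0.01 loss weighting are all invented, and the task loss is a stand-in:

```python
# Watermarking-by-regularizer sketch: train on the task while nudging one
# layer's weights so a secret projection of them decodes to a secret bit-string.
import torch
import torch.nn as nn

layer = nn.Linear(64, 64)                        # the layer that will carry the mark
secret_X = torch.randn(32, 64 * 64)              # secret projection matrix
watermark = torch.randint(0, 2, (32,)).float()   # 32-bit secret message

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for step in range(500):
    x = torch.randn(16, 64)
    task_loss = (layer(x) - x).pow(2).mean()     # stand-in for the real task's loss
    w = layer.weight.view(-1)
    wm_loss = nn.functional.binary_cross_entropy(torch.sigmoid(secret_X @ w), watermark)
    opt.zero_grad()
    (task_loss + 0.01 * wm_loss).backward()
    opt.step()

# Anyone holding secret_X and the bit-string can check a suspect model:
extracted = (torch.sigmoid(secret_X @ layer.weight.view(-1)) > 0.5).float()
print("bits recovered:", (extracted == watermark).float().mean().item())
```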

Cobalt Robotics – your new, fancy looking security guard: the main problem I have with security guards is their lack of lovingly sculpted plastic bevels and felt coverings. It seems Cobalt Robotics has heard of my problem and invented a robot to fix it. The company's security bots are designed to patrol offices and museums, using their onboard software to detect changes, such as intruders or the movement of suspicious objects. Each robot has "super-human sensors with perfect recall and an auditable history of where it was and what it saw," Cobalt writes.

Spotting tumors with deep learning: Google has trained an AI system to localize tumors in images of potentially cancerous breasts. It claims it is able to surpass the capabilities of human pathologists who are given unlimited time to inspect the slides…
…accuracy of Google's deep learning-based tumor localization: 89%
…accuracy of a human pathologist given unlimited time to inspect the same images: 73%
…Related: Tel Aviv startup Zebra Medical says it can use AI to detect some types of cancerous cells with 91 per cent accuracy, versus 88 per cent for a trained radiologist. “In five or seven years, radiologists won’t be doing the same job they’re doing today,” says founder Elad Benjamin. “They’re going to have analytics engines or bots like ours that will be doing 60, 70, 80 per cent of their work.”

The unmanned drone future. Military sales from now till 2025:
Unmanned ground vehicles… 30,000
Unmanned aerial vehicles… 63,000
…”With technology advancing at such a pace, a myriad of applications will unfold limited only by the imagination of the designer,” writes Jane’s Aerospace Defense and Security.

Estonia passes law allowing for countrywide testing of robocars: Estonia passed a law this week letting anyone test robot cars on its ~58,000 kilometers of roads, as long as they’re accompanied by a human to take over in case things go wrong.
…meanwhile, Virginia has passed a state law permitting delivery robots to operate on sidewalks. People are required to monitor the robot and take over if things go wrong, but don’t need to be within line of sight or anything. Similar laws are on the table in Idaho and Florida.

JP Morgan automates the interpretation of commercial loan agreements via new software called COIN, short for Contract Intelligence. This is something that previously consumed 360,000 hours of human labor a year at the firm. There are other initiatives as well, with bots now doing the work of 140 people, JP Morgan says.

Evolving deep neural networks at the million-CPU scale… Scientists at The University of Texas at Austin and Sentient Technologies have extended NEAT, an evolutionary optimization technique first outlined in 2002, to be capable of evolving both neural network structures and hyperparameters (the numbers AI researchers typically calibrate via a mix of intuition and knowledge to get the AI to work). The research, Evolving Deep Neural Networks, is in a similar spirit to Google's "Neural Architecture Search" paper, though it uses genetic algorithms to evolve the structure of the neural networks, while Google evolved its architectures via reinforcement learning. The approach yields results with a classification error of 7.3% on the CIFAR-10 image classification task, compared to around 6.4% for the current state of the art. They're also able to use the same technique to evolve an LSTM to conduct language modeling tasks, demonstrating the apparent generality of the approach.
… so, what’s the point of evolving stuff rather than designing it? The thesis is that we can use this technique to throw a load of computers at a hard problem and have the AI evolve to a decent system, without people needing to calibrate it…
…the researchers applied the tech to an image captioning system for an unspecified magazine website (though the image example on page 6 looks exactly like one on a Wired website credited to a Wired photographer). They claim the resulting architecture performs on par with, or slightly exceeds, hand-tuned approaches…
…A GIANT, INVISIBLE, GLOBAL SUPERCOMPUTER: The researchers also give more detail about the infrastructure Sentient has been building for its massively distributed financial trading and product suggestion services. The system, named "DarkCycle", currently utilizes 2 million CPUs and 5,000 GPUs around the world, yielding a peak performance of 9 petaflops. (On paper, that would make DarkCycle's processing power roughly equivalent to the 10th fastest system in the world, though its distributed nature means that, FLOP for FLOP, latency leaves it far less powerful than a full HPC rig.)
ANOTHER, EVEN BIGGER, INVISIBLE, GLOBAL SUPERCOMPUTER: Google researchers published a paper on Friday called “Large-Scale Evolution of Image Classifiers.” They show that evolution can be used to evolve image classification systems with performance approaching some of the best hand-tuned systems…
…Google's best single model had a test accuracy on the CIFAR-10 image dataset of 94.1 percent, close to the best hand-tuned approaches. But it came at great computational cost: this system alone represented the outcome of 9 * 10^19 floating point operations – the equivalent of running a one-exaflop machine flat out for 90 seconds – expended over hundreds of hours of training. This represents "significant computational requirements", Google says. Go figure!
… these systems likely herald the recombination of evolutionary and deep learning approaches, which may yield further interesting cross-pollinated breakthroughs…
…Given that DNNs are generic function approximators, these two research publications suggest that evolution may be a viable strategy for obtaining systems of comparable performance to hand-made ones, without needing as much specific domain expertise.
… the conclusion to this research paper is worth quoting at length: “While in this work we did not focus on reducing computation costs, we hope that future improvements to the algorithms and the hardware will allow for more economical implementations. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it “hits the ground running”, improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture discovery methods.”
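…stripped of the graph-evolving machinery, both papers rest on the same loop: evaluate a population, keep the fittest, mutate, repeat. Here's a deliberately tiny sketch of that loop over a single made-up 'architecture' knob – the fitness function below is a stand-in for 'train the network and measure its accuracy', which is where all the real compute goes:

```python
# Toy population-based evolution: evaluate, select, mutate, repeat.
import random

def fitness(width):                      # pretend accuracy peaks at width 256, with noise
    return -abs(width - 256) + random.gauss(0, 5)

population = [random.randint(1, 1024) for _ in range(20)]
for generation in range(30):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:5]                                  # selection: keep the top quarter
    population = [max(1, p + random.randint(-32, 32))     # mutation: jitter each parent
                  for p in parents for _ in range(4)]
print("evolved width:", max(population, key=fitness))
```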

Bursting the AI hype bubble: "The accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that's not happening," writes Stanford computer scientist Jerry Kaplan in the MIT Technology Review. ""True" AI requires that the computer program or machine exhibit self-governance, surprise, and novelty," writes Ian Bogost in The Atlantic.
…I'd say that Kaplan's point can be partially refuted by the tremendous tendency for reusability in today's AI systems. For instance, the evolution research outlined above suggests we can actually design very large, very sophisticated systems in an end-to-end way – we're starting to grow rather than assemble our AIs. Far from being "cobbled together", these machines are more like an interlocking set of components whose interfaces are fairly well understood, but which are being developed at different rates. I'd also argue that some modern AI systems are starting to show the faintest traits of (controlled, highly limited) self-governance via capabilities like the automatic identification and acquisition of auxiliary goals, as outlined in DeepMind's "UNREAL" research.

All watched over by machines of loving Facebook grace: Facebook has trained its AI systems to spot indicators of suicide in posts people make, and is using that data to proactively send alerts to its community team for review. “A more typical scenario is one in which the AI works in the background, making a self-harm–reporting option more prominent to friends of a person in need,” Buzzfeed reports. The system apparently sets off fewer false alarms than people and has greater accuracy…
…using AI to flag potential suicides seems like an unalloyed social good, but what unnerves me is that the same techniques could be used to flag people indulging in political discourse that diverged massively from the norm, or any other behavior which steps out of the invisible lines created by the consensus generated by a platform containing the data of over a billion people. It’s always worth keeping in mind that for every Facebook with (in this case) altruistic intentions, there are other parties who may have different values and priorities.

OpenAI bits&pieces:

OpenAI’s Tom Brown will be giving a talk on OpenAI Gym and Universe at AI By the Bay on Wednesday, March 8.

Tech tales:

[2035: GENEVA, PRECISE LOCATION CLASSIFIED.]

*ACCESS LEVEL*: “BRIGHTBAR”.
*PROJECT*: “LAB BENCH”.

*PROJECT_OVERVIEW*: LAB BENCH was a research program into the evolution of hostile, autonomous, electronic threats. LAB BENCH consists of the GROUND_TRUTH threat site, and, since 2031, the DENIAL RING. Projects BLACK_BRIDGE and NET_SIM were retired following the 2031 UNAUTHORIZED_EXCURSION event. The goal of LAB BENCH was to create a synthetic, digitally hostile urban environment, meant to mirror the changing, semi-autonomous, swarm intelligence approaches being fielded by foreign military powers. The site was frequently used for training and, later, AI software experimentation.

STATUS: Recategorized as ACTIVE_THREAT_SITE in 2031. Now overseen by XXXXX and XXXXXX.

GROUND_TRUTH:

2015: Full-scale model city built for nuclear attack and disaster response simulations repurposed as military software attack and countermeasure testing site.

2020: Installation of high-bandwidth fiber, comprehensive automation suites for synthetic traffic and pedestrian movement, and high proportion of ‘lights out’ infrastructure. Addition of AI hacking and counter-hacking software for testing and development.

2025: High-performance computing cluster installed.

2028: Installation of large group of robotic workers and robust closed-loop renewable energy systems. DARPA starts public grant to benefit parallel LAB BENCH R&D. RFI put out for CITY SCALE FORMAL VERIFICATION OF DYNAMIC, MOBILE IOT DEVICES. Budget: $80 million.

2029: Automated manufacturing and mining facilities installed. City disconnected from global internet, air-gapped onto own private network. Significant retrenching of fiber in larger surrounding area draws several media articles, subsequently censored.

2030: Upgrade to learning substrate of GROUND_TRUTH computer network. Addition of software for evolutionary methods of optimization, and techniques for unsupervised auxiliary task identification and acquisition.

2031: Reclassified as ACTIVE_THREAT_SITE following unauthorized excursion of CLASSIFIED from GROUND_TRUTH. Current status: Unknown

DENIAL RING: Created 2031 following the UNAUTHORIZED_EXCURSION event from GROUND_TRUTH. Consists of 12 Forward Operating Bases arranged in a dodecagon configuration around the perimeter of GROUND_TRUTH, with a one-mile zero-electronics air gap to prevent transference events. Each base is fully automated and contains a significant amount of artillery and munitions, along with sophisticated kinetic and electronic countermeasures. Strategic deterrent 'LoiterSquad' located at nearby CLASSIFIED location.

*FILE: CASE REPORT, "BLOOM#02: UNAUTHORIZED_EXCURSION INCIDENT, 2033"*

Mobilization: Normal

2:00:00 Two drones sighted taking off from center of GROUND_TRUTH. IDs queried against global database: no matches. ID string is unconventionally formatted. Drones of unconventional appearance. Pictures queried against global database: Partial matches across 80 different models of drones. Further query: multiple manufacturers linked to GROUND_TRUTH equipment contracts.
Mobilization: Satellites auto-notified.

2:00:50 Unidentified Drones fly together to North East border of GROUND_TRUTH. Drones do not respond to electronic hails. City telemetry extracts no useful information from them. FOBs unable to acquire signals from drones for automatic shutdown.

2:01:00 Range of frequencies in RF BAND begin emanating from 64 locations across GROUND_TRUTH.

2:01:30 Unidentified Drones reach GROUND_TRUTH’s perimeter.
Mobilization: SECCOM notified.

2:03:50 Unidentified Drones begin crossing one mile air gap toward North West edge of DENIAL RING, leaving GROUND_TRUTH borders.
Mobilization: Nearby military aircraft notified. NRO notified.

2:04:05 Unidentified Drones destroyed by precision munitions from Forward Operating Bases #9 #10 #11

2:05:11 DENIAL RING drone squadrons and ground vehicles cease automatic electronic telemetry reporting.

2:05:12 FOBs #3 #1 #4 #5 #9 countermeasures come under fire from non-responsive DENIAL RING drone squadrons and ground vehicles.

2:05:15 Three fleets of Unidentified Drones take off from GROUND_TRUTH.
Mobilization: Strategic deterrent codename LoiterSquad activated.

2:05:27 Remaining FOBs come under fire. Countermeasures of FOBs #3 #1 #9 fail.
Mobilization: Nearby SEAL team put on high alert.

2:05:32 FOBs fire on fleets of drones traveling out from GROUND_TRUTH. One fleet destroyed, two others unharmed. All FOBs' targeting corrupted by computer virus of unknown origin.

2:05:39 Second drone fleet destroyed by fire from FOBs #10, #11.

2:05:45 Remaining drone fleet passes out of range of close-impact munitions from all FOBs.

2:05:59 Drone fleet surpasses range of all conventional weaponry.

2:06:00 All FOBs go offline from computer virus of unknown origin.

2:06:01 Satellite footage shows unidentified unmanned ground vehicle platforms emerging from warehouses in center of GROUND_TRUTH and driving toward city edges. No IDs.

2:06:02 Non-responsive DENIAL RING drones begin to fly North on bearing consistent with CLASSIFIED LOCATION.
Mobilization: LoiterSquad given go-ahead for mission completion.

2:06:04 Unidentified convoy begins to advance across DENIAL RING air gap.

2:06:04 LoiterSquad deterrent impacts center of GROUND_TRUTH.

2:06:05 Status of GROUND_TRUTH and DENIAL RING unknown due to debris.

2:06:50 Satellite confirmation of total destruction of specified land area.

2:20:00 SEAL team arrives and begins visual sweep of area. No sightings.
INCIDENT LOG COMPLETE

Import AI: Issue 31: Memories as maps & maps as memories, bot automation, and crypto-fintech-AI

ICML special administrative notice: Hello! Arxiv paper volume will increase this week due to a flood of ICML submissions. I'd like to try and analyse as many of them as possible and need some help – drop me a line if you want to work on a collaborative AI paper project: jack@jack-clark.net.

Can you hear me now? Computers learn to upscale audio: A group of Stanford researchers have taught computers to enhance the quality of audio. The system observes pairs of high-quality and corrupted audio samples, and a residual network is trained to infer the relationship between the corrupt and clean signals. Feed it some previously unheard corrupted audio and it can make a good stab at upscaling it. The results are gently encouraging, with the system achieving good performance on speech and slightly less good performance on music. More an interesting proof-of-concept than a fall-out-of-your-chair result. "Our models are still training and the numbers in Table 1 are subject to change," the authors note.
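…the residual framing, roughly: rather than generating clean audio from scratch, the network predicts a correction on top of the corrupted input, so a skip connection carries the easy content for free. A toy sketch under my own assumptions – the layer sizes and the crude downsample-and-repeat corruption below are invented:

```python
# Residual audio upscaling sketch: the net learns corrupted -> (clean - corrupted).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                    nn.Conv1d(16, 1, 9, padding=4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

clean = torch.randn(8, 1, 1024)                            # stand-in for real speech
corrupted = clean[:, :, ::4].repeat_interleave(4, dim=2)   # crude low-bandwidth copy

for step in range(200):
    restored = corrupted + net(corrupted)                  # the residual connection
    loss = (restored - clean).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```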

Image generation gets 100X faster thanks to algorithmic improvements: Last week we heard about a general-purpose algorithmic improvement that could halve the cost of training deep neural networks. This week, a specific one comes along in the form of Fast PixelCNN++, which is able to achieve as much as a 183X speedup on the image generation component of PixelCNN++.

Brain-interface company Kernel grabs MIT talent to explore your cranium: Kernel, a "human intelligence" company started by entrepreneur Bryan Johnson, has acquired MIT spinout Kendall Research Systems. This acquisition, combined with the hiring of MIT brain specialists Ed Boyden and Adam Marblestone, gives Kernel more expertise in the field of brain interfaces. Kernel was founded on the intuition that everything outside of us is getting smarter and faster, so we should invest some time into trying to make our own brains smarter and faster as well.

UK government to invest £17m ($21 million) into artificial intelligence research: the UK government will invest an additional few million pounds into AI research. The amount is minor and seems mostly to be what the treasury was able to find down the back of the Brexit-shrunk sofa. Nonetheless, every little helps.

DeepCoder: promise & hype: Stephen Merity has tried to debunk some of the hype around DeepCoder, a research paper (PDF) that outlines a system that gets computers to learn programming. He's even written a bonus article to try and show what he thinks level-headed journalism would look like – come for the insight, stay for the keyboard monkeys.

When your memory is a map, beautiful things can happen: a new research technique lets us give machines the ability to autonomously map their environment without needing to check the resulting maps against any kind of ground-truth data. This brings us closer to an age when we can deploy robots into completely novel environments and simply feed them goals, then have them map the buildings on the way to getting there…
… The specific approach, "Cognitive Mapping and Planning for Visual Navigation", out-performs approaches based on LSTMs and reactive agents. The system works by coupling two distinct systems together – a planner and a mapper. At each step the mapper updates the robot's beliefs about the world, then feeds this to the planner, which figures out an action to take…
…The Mapper gives the robot access to a memory system that lets it represent its world as an overhead two-dimensional map. It feeds this map to The Planner, which uses that data to plan the actions that bring the robot closer to its goal. Once the planner has taken an action, the map is updated again. The map is egocentric, which means it naturally differentiates the agent from the rest of its environment. (In other words, action cements the agent's perception of itself as being distinct from the rest of the world – how's that for motivation!) This egocentric representation, combined with actions represented as egomotion, makes it easier for the system to recalibrate itself and learn more about its environment, without a human needing to be in the loop…
… The system still fails occasionally, usually due to its first person view leading it to miss a good route to its target, and ending up with it dithering about the space.
…It's worth noting that this project, like all scientific endeavors, builds on numerous research contributions from recent years: the planning component depends on a residual network (developed by Microsoft researchers and used to win the ImageNet competition in Dec 2015), a hierarchical variant of value iteration networks (UC Berkeley, released February 2016), and the whole combined system is trained using DAGGER (Carnegie Mellon, 2011). This highlights the inherent modularity of the modern approach to AI, and reminds us that any research contribution is only there due to standing on the shoulders of innumerable others. (If you want to join me in a little AI archeology project, send me an email at jack@jack-clark.net)
… "A central limitation in our work is the assumption of perfect odometry, robots operating in the real world do not have perfect odometry and a model that factors in uncertainty in movement is essential," the researchers write.
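…to make the mapper/planner coupling concrete, here's a skeletal sketch of the loop described above – both functions are crude stand-ins for the learned modules in the paper, and every number is invented:

```python
# Skeleton of the map-update / plan / act loop used by the navigation agent.
import numpy as np

rng = np.random.default_rng(2)
belief_map = np.zeros((64, 64))        # egocentric overhead map of free-space beliefs

def mapper(observation, belief):
    # Stand-in for the learned mapper: fold the new first-person observation
    # into the running belief map. The paper learns this update end-to-end.
    update = rng.random((64, 64)) * observation.mean()
    return 0.9 * belief + 0.1 * update

def planner(belief, goal):
    # Stand-in for the value-iteration-style planner: score a few candidate
    # egomotions against the current belief map and the goal location.
    scores = [belief[goal] + rng.random() for _ in range(4)]
    return int(np.argmax(scores))      # 0..3 = forward / left / right / stay

goal = (50, 12)
for step in range(20):
    observation = rng.random((48, 64))           # placeholder first-person frame
    belief_map = mapper(observation, belief_map)
    action = planner(belief_map, goal)           # act, then the map updates again
```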

Is that a gun in your hand or a corn cob spraypainted black? No, no that's definitely a gun. Alright, come with me! Research from the University of Granada in Spain shows how to do two useful things: 1) build and augment a dataset of handguns in films using deep learning, and 2) use methods like an R-CNN to then successfully detect handguns in videos. Admins of video sites that have to deal with all the usual video nasties – weapons, drugs, sex – will likely be interested in such a technique. It could also reduce the number of people tech companies hire to manually look at disreputable content – a low-paying, sometimes traumatising job that I think we would gladly cede to the machines.

The first rule of deep learning is you don’t talk about the black magic… Nikolas Markou has sadly been kicked out of AI club for talking about one of its uncomfortable truths – that because we lack a well developed set of theories for why AI works the way it does, many experts in the field use various tips and tricks gained through trial-and-error and intuition, rather than a deep understanding of theory. Read on for details of some of those tricks.

Smashing! Researchers use deep reinforcement learning to beat pros at Super Smash Brothers Melee: researchers from Tenenbaum's lab at MIT have used reinforcement learning to train Smash Bros character Captain Falcon to a point of competency where he is able to play competitively with top-ranked human players. This approach works with both policy gradients and Q-learning. It's a pretty good example of how RL has moved on from relatively simple two-dimensional environments like Atari to complex, changing, 3D environments. Read more here: Beating the World's Best at Super Smash Bros Melee with Deep Reinforcement Learning
… the algorithms found some novel approaches that a typical human would not likely stumble on: "Q-learners would consistently find the unintuitive strategy of tricking the in-game AI into killing itself. This multi-step tactic is fairly impressive; it involves moving to the edge of the stage and allowing the enemy to attempt a 2-attack string, the first of which hits (resulting in a small negative reward) while the second misses and causes the enemy to suicide (resulting in a large positive reward)," the researchers write.
…the result indicates that RL has a chance of helping to solve tasks like mastering StarCraft 2. That’s because both games share some traits that traditional Atari games lack – partial observability, and multiple players. Therefore, it’s possible that SSBM could become a kind of intermediary metric as the AI community (Zerg) rushes to solve StarCraft, which will require many other algorithmic inventions to crack.
…meanwhile, Super Smash Bros, on the cheap!... Stanford researchers show you can train an AI to master Smash Bros using imitation learning, with no RL required. Imitation learning approaches are easier for less experienced researchers to tune and are cheaper, computationally, to train. Additionally, the approach outlined here is purely vision-based – meaning it has no access to the real state of the game, nor any particular hooks into it. That can be challenging for RL algorithms. AIs trained via this method were able to defeat a Level 3 difficulty CPU player, roughly match a Level 6, respectably hold their own or lose against a Level 9 character. Read more: The Game Imitation: Deep Supervised Convolutional Networks for Quick Video Game AI.
Imitation learning is not particularly fashionable. The authors note that their approach "does not currently enjoy much status within the machine learning community." But they think the value in their work is that it demonstrates how absurdly powerful CNN approaches are.
…(Minor details: the authors gathered their data via Nintendo 64 emulation and screen capture tools, using software called Project 64 v2.1. AIs were trained on around 600,000 frames of gameplay – around 5 hours of playing.)
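…for anyone new to the techniques named above, the Q-learning half boils down to a one-line update rule applied over and over. A minimal tabular sketch – the real agents use deep networks for the Q-function and the actual game as the environment, while the toy environment below is entirely made up:

```python
# Minimal tabular Q-learning: act greedily (mostly), then update the estimate.
import random

Q = {}                                     # (state, action) -> estimated value
alpha, gamma, eps = 0.1, 0.99, 0.1
actions = range(4)

def step_env(s, a):                        # made-up stand-in for the game emulator
    return (s + a) % 10, random.random()   # next state, reward

s = 0
for t in range(1000):
    if random.random() < eps:              # explore occasionally
        a = random.choice(list(actions))
    else:                                  # otherwise pick the best-known action
        a = max(actions, key=lambda a: Q.get((s, a), 0.0))
    s2, r = step_env(s, a)
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)   # the Bellman update
    s = s2
```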

Three humans and a hundred bots: interesting article about Philip Kaplan’s experience of building Distrokid, a music distribution service. Main thing of note to Import AI readers is Kaplan’s explanation of how Distrokid is able to turn over millions in revenue while running on only three fulltime staff: “DistroKid has dozens of automated bots that run 24/7. These bots do things that humans do at other distributors. For example, verifying that artwork & song files are correct, changing song titles to comply with each stores’ unique style guide, checking for infringement, delivering files & artwork to stores, updating sales & streaming stats for artists, processing payments, and more,” he says.

Cryptocurrency for the ceaseless machinations of those that tend the AI hedge fund: Numerai, a startup that appears to have emerged from the psychic loam of a proto William Gibson novel, has launched a new cryptocurrency, Numeraire, to strengthen its AI-based hedge fund. The strangest part? All of those buzzwords are being used legitimately!…
…Numerai uses homomorphic encryption to fuzz a load of financial data and make it available to a global crew of data scientists, who then poke and prod at it with algorithms, trying to make predictions about how the numbers change. They then upload these models to Numerai, which creates an ensemble from them and uses that to trade mysterious financial instruments. Successful authors get paid out (in Bitcoin, naturally) in accordance with the success of their algorithm in the market. This week, Numerai distributed 1,000,000 Numeraire currency units across its 12,000 algorithm-author members. Those people can now use Numeraire to place bets on the success of their own models, and if they win, the value of Numeraire goes up. This means the data scientists now have a direct financial incentive to participate in the platform (sweet, sweet bitcoin), and a secondary one (wager Numeraire in the internal economy, and use it to enhance earning power as the effectiveness of Numerai's predictions grows). The incentives seem designed to stop people from gaming the system…
… I’ve spent so long waffling on about this because I think Numerai is probably what an AI-first business looks like. Replace the 12,000 data scientists with smart, financial AI prediction systems, and you’re there. And in the same way AIs will exploit their environment for rewards that may not benefit the creator (eg, reward hacking, goal divergence, etc), humans will try to take as much money out of the market with the minimal amount of effort. If Numerai’s incentive system is successful then it can chalk out a path for AI companies to take in the future.
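…mechanically, the ensembling step Numerai describes is the familiar one: collect lots of submitted predictors, weight them by held-out performance, and blend. A toy sketch under my own assumptions (Numerai hasn't published its actual aggregation rule; the models, data, and weighting scheme below are invented):

```python
# Weighted-average ensemble of many submitted predictors.
import numpy as np

rng = np.random.default_rng(3)
X_val, y_val = rng.normal(size=(200, 10)), rng.normal(size=200)
X_live = rng.normal(size=(50, 10))

# Stand-ins for thousands of uploaded models: random linear predictors.
submissions = [rng.normal(size=10) for _ in range(100)]
preds_val = np.stack([X_val @ w for w in submissions])     # (models, examples)

# Weight each submission by inverse validation error, then blend.
errors = ((preds_val - y_val) ** 2).mean(axis=1)
weights = (1.0 / errors) / (1.0 / errors).sum()
live_forecast = weights @ np.stack([X_live @ w for w in submissions])
```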

OpenAI bits & pieces:

OpenAI's Tom Brown will be giving a talk about OpenAI Gym and Universe at AI By the Bay on Wednesday, March 8.

Tech Tales:

[2020, a converted Church in Barcelona, full of computers behind austere glass]

They call the AI system 'the math submarine', but if you asked them to draw it for you, no one could give a true depiction of its form. That's because it's a bundle of high-dimensional representations, drifting through complex, ethereal fields of numbers. You send the AI out there, out to the brain-warping weird edges of mathematics, and it tries to explore the border between what is proved and what is unproved, and it comes back with answers that are verifiably true, but difficult for a human to understand.

Still, you anthropomorphize it. Does it get lonely, out there, drifting through high-dimensional clouds of conjectures, each representing some indication of proof, or truth, or clarity? Does it feel itself distinct from these things? Does number have a texture to it? Are there currents?

When you were young you once looked up between two tall buildings and saw a plane pass overhead. You could never see the whole plane at once as your view was occluded by the walls of the buildings. But your brain filled in the rest, using its sense of ‘plane-ness’ to extend the slice of the object to the whole. Does the math submarine see numbers in this way, you wonder? Does it see a group of conjectures and have an intuition about what they mean? You know you can’t know, but your other computers can, and you watch the interfaces between this AI system and the others, and tend to the servers and ensure the network is running, so the machine can go and explore something you cannot see or truly know.

Import AI: Issue 30: Cheaper neural network training, mysterious claims around Bayesian Program Synthesis, and Gates proposes income tax for robots

 

Half-price neural networks thanks to algorithmic tweaks: new research, Distributed Second-Order Optimization Using Kronecker-Factored Approximations (PDF), creates a new optimization method for training AI systems. The approach is flexible and can be dropped into pre-existing software relatively easily, its creators say. Best of all? “We show that our distributed K-FAC method speeds up training of various state-of-the-art ImageNet classification models by a factor of two compared to an improved form of Batch Normalization”. Quite rare to wake up one day and discover that your AI systems have just halved in price to train.
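…the trick that makes Kronecker-factored second-order methods affordable, in miniature: approximate a layer's curvature matrix as a Kronecker product of two small matrices, because the inverse of a Kronecker product is the Kronecker product of the inverses – so you never have to invert the huge matrix at all. A quick numpy check of that identity (the matrix sizes are tiny and arbitrary):

```python
# Why Kronecker-factored approximations are cheap: inverting A (x) G only
# requires inverting A and G separately.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5)); A = A @ A.T + np.eye(5)   # small SPD factor
G = rng.normal(size=(7, 7)); G = G @ G.T + np.eye(7)   # small SPD factor

big_inverse = np.linalg.inv(np.kron(A, G))                   # cost scales like (5*7)^3
cheap_inverse = np.kron(np.linalg.inv(A), np.linalg.inv(G))  # cost ~ 5^3 + 7^3
print(np.allclose(big_inverse, cheap_inverse))               # True
```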

Bayesian Program Synthesis – bunk or boon? Startup Gamalon has decloaked with a new technology – Bayesian Program Synthesis – that it claims can do tough AI tasks like learning to classify images from a single-digit handful of examples, rather than a thousand. The work has echoes of MIT research published in late 2015 (PDF), which showed that it is possible to use Bayesian techniques similar to this one to perform 'one shot learning' – letting computers learn to recognize something, say a cat, from only a single glimpse. That research was shown to work on a specific test set that had been implemented in a specific way; Gamalon is claiming its tech has more general-purpose utility. However, the startup has published no details about its research and it is very difficult to establish how staged the press interview demos were. If Gamalon has cracked such a hard problem then I'm sure the scientific community would benefit from them sharing their insight. This would also help justify their significant claims.

Income tax for robots: Bill Gates says that people should consider taxing robots to generate revenues for government to offset the jobs destroyed via automation. Small query I’d like to see someone ask Bill: in hindsight, should governments also have taxed software like Excel to offset the jobs it destroyed?

AirSim: because it's cheaper to crash a drone inside software: Microsoft has released AirSim, an environment based on the Unreal game engine providing a reasonably high-fidelity simulation of reality, giving developers a cheap way to train drones and other robots via techniques like reinforcement learning, then transfer those systems into the real world (which we already know is possible, thanks to research papers such as CAD2RL). This is useful for a couple of reasons: 1) you can run the sim much faster than real life, letting you make an order of magnitude more mistakes while you try to solve your problem, and 2) it reduces the cost of mistakes – it's much cheaper to fire up a new simulation than to repair or replace the drone that just bumbled into a tree. (Well, research from MIT and others already suggests you won't need to worry about the tree, but you get my point.)
…Simulators have become a strategic point of differentiation for companies as each battles to craft the perfect facsimile of the real world to let them train AI systems that can then be put to work in reality. The drawback: we don’t yet have a good idea for how real simulators need to be so it’s tricky to anticipate the correct level of fidelity at which to train these systems. In other words, we don’t know what level of simulation is sufficient to ensure that when we arrive in reality we are able to achieve our task. That’s because we haven’t derived an underlying theory to help guide our intuitions about the difference between the virtual and the real – Baudrillard eat your heart out!

Skeptical about The Skeptic’s skepticism: We shouldn’t worry about artificial intelligence disasters because they tend to involve a long series of “if-then” coincidences, says Michael Shermer, publisher of The Skeptic magazine.

Enter the "vision tunnel" with Jeff Bezos: When goods arrive at Amazon's automated fulfillment center they pass through "a 'vision tunnel,' a conveyor belt tented by a dome full of cameras and scanners", where algorithms analyze and sort each box. "What takes humans with bar-code scanners an hour to accomplish at older fulfillment centers can now be done in half that time," Fast Company reports… There's also a 6-foot tall Fanuc robot arm, which works with a flock of Kiva robots to load goods into the shifting robot substrate of the warehouse. The million-plus square foot facility employs around a thousand people, according to the article. A similarly sized Walmart distribution center employs around 350 (though this doubles during peak seasons) – why the mismatch in scale, given the likelihood of Amazon having a larger degree of employee automation?

8 million Youtube bounding boxes sitting on a wall, you take one down, classify it and pass it around, 7 million 900 and 99 thousand and 900 and 99 Youtube bounding boxes on a wall: Google has updated its 8 million video strong YouTube dataset with twice as many labels as before…
… and it's willing to pay cash to those who experiment with the dataset, and has teamed up with Kaggle to create a series of competitions/challenges based around the dataset, with a $100,000 prize pool available. (This also serves as a way to introduce people to its commercial cloud services, as the company is providing some credits for its Google Cloud Platform for those who want to train and run their own models. And I imagine there's a talent-spotting element as well.)
… I’ve been wondering if the arrival of new datasets, or the augmentation of existing ones, is a leading indicator about AI progress – it seems like when we sense a problem is becoming tractable we release a new dataset for it, then eventually solve the problem. Thoughts welcome!

Deep Learning papers – curated for you. The life of an AI researcher involves sifting through research literature to identify new ideas and ensure there aren’t too many overlaps between yet-to-be-published research and what already exists. This list of curated AI papers may be helpful.

When does advanced technology become DIY friendly?: warzones are a kind of primal soup for (mostly macabre) invention. This Wired article on robot builders in the Middle East highlights how a combination of cheap vision systems, low-cost robots, and software has allowed inventive people to repurpose consumer technology for war machines, like little moveable defense platforms and gun turrets. Today, this technology is very crude and both its effectiveness and use are unknown. But it highlights how rapidly tech can be repurposed and reapplied – something the AI community should bear in mind as it publishes its research and code.

Neural architecture search VERSUS interpretability: a vivid illustration from Google Brain resident David Ha of the baroque topologies of neural networks created through techniques like neural architecture search.

Google researcher handicaps AI research labs: Google Brain research engineer Eric Jang has ranked the various AI research labs. He ranks DeepMind and… Google Research in joint first place, followed by OpenAI & Facebook, followed by MSR (3rd) and Apple (4th). He puts IBM at 10 and doesn't specify the intervening companies. "Given open source software + how prolific the entire field is nowadays, I don't think any one tech firm 'leads AI research' by a substantial margin", he writes…
…That matches comments made by Baidu’s Andrew Ng, who has said that any given AI research lab has a lead of at most a year on others…

IBM Watson benched: MD Anderson has ended its collaboration with IBM on using AI technology marketed under the company's "Watson" omnibrand. The strangest part? MD Anderson paid IBM for the privilege of trialing its technology – an unusual occurrence; usually it's the other way round, Forbes reports. The project was beset by delays and it's still hard to establish whether things ended because of IBM's tech, or because of a series of unfortunate bureaucratic events within MD Anderson.

OpenAI bits&pieces:

Adversarial examples – why it’s easier to attack machine learning rather than defend it. New OpenAI post about adversarial examples, aka optical illusions for computers, delves into the technology and explains why it may be easier to use approaches like this to attack machine learning systems, rather than defend them.
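…for the uninitiated, one classic attack from this literature – the fast gradient sign method – fits in a few lines: take the gradient of the loss with respect to the input, and nudge the input a tiny step in the direction that increases the loss. A toy sketch with an untrained random model, so the mechanics are visible even though the label flip isn't guaranteed here:

```python
# Fast-gradient-sign-style adversarial nudge against a toy classifier.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # untrained stand-in
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                                              # gradient w.r.t. the *input*
adversarial = (image + 0.1 * image.grad.sign()).clamp(0, 1)  # tiny, targeted nudge
print(model(image).argmax().item(), model(adversarial).argmax().item())
```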

Ilya Sutskever talks at the Rework Summit: if you weren’t able to see Ilya’s talk at the Rework deep learning summit in person, then you can catch a replay here.

Tech tales:

[A boardroom at the top of one of London's increasingly H.R. Giger-esque skyscrapers.]

“So as you’ll see the terms are very attractive, as I’m sure your evaluator has told you,” says Earnest, palms placed on the table before him, looking across at Reginald, director of the company-to-be-acquired.
“I’m afraid it’s not good enough,” Reginald says. “As I’m sure your own counter-evaluator has told you.”
“Now, now, that doesn’t seem right, let’s-”
“Enough!” Reginald says. “Leave it to them.”
"As you wish," says Earnest, leaning back.

Earnest and Reginald stare at each other as their evaluators invisibly hash out the terms of a new deal, each one probing the other for logical weaknesses, legal loopholes, and – what some of the new PsychAIs are able to spot – revealed preference from past deal-making. As the company-to-be-acquired, Reginald has the advantage, but Earnest's corporation has invested more heavily in helper agents, which have spent the past few months carefully interacting with aspects of Earnest's business to provide more accurate True Value Estimates.

Eventually, a deal is created. Both Earnest and Reginald need to enlist translator AIs to render the machine-created legalese into something the both of them can digest. Once Reginald agrees to the terms the AIs begin another wave of autonomous asset-stripping, merging, and copying. Jobs are posted on marketplaces for temporary PR professionals to write the press release announcing the M&A, and design contracts are placed for a new logo. This will take hours.

Reginald and Earnest look at each other. Earnest says, "pub?"
“My evaluator just suggested the same,” says Reginald, and it’s tough to tell if he’s joking.

Import AI: Issue 29: neural networks crack quantum problem, fingernail-sized AI chips, and a “gender” classifier screwup

It takes a global village to raise an AI… a report titled "Advances in artificial intelligence require progress across all of computer science" (PDF) from the Computing Community Consortium identifies several key areas that should be developed for AI to thrive: computing systems and hardware, theoretical computer science, cybersecurity, formal methods, programming languages, and human-computer interaction…
…better support infrastructure will speed the rate at which developers embrace AI. For example, see this Ubuntu + AWS + AI announcement from Amazon: the “AWS Deep Learning AMI for Ubuntu” will give developers a pre-integrated software stack to run on its cloud, saving them some of the tedious, frustrating time they usually spend installing and configuring deep learning software.
…Baidu's AI software PaddlePaddle now supports Kubernetes, making it easier to run the software on large clusters of computers. Kubernetes is an open source project based on Google's internal 'Borg' and 'Omega' cluster managers, and is used quite widely within the AI community – last year, OpenAI released software to make it easier to run Kubernetes on Amazon's cloud.

Finally, AI creates jobs for humans! Starship Technologies is hiring a "robot handler" to accompany its freight-ferrying robots as they zoom around Redwood City. Requirements: "a quick thinker with the ability to resolve non-standard situations".

Ford & the ARGOnauts: Ford will spend $1 billion over five years on AI, via a subsidiary company called Argo. Argo is run by veterans of both Google and Uber’s self-driving programs. Details remain nebulous. Much of the innovation here appears to be in the financial machinery underpinning Argo, which will make it easier for Ford to offer hefty salaries and stock allocations to the AI people it wants to hire. Reminiscent of Cisco’s “spin-in” company Insieme.

Powerful image classification, for free: Facebook has released code for ‘ResNeXt’, an image classification system outlined in its research paper Aggregated Residual Transformations for Deep Neural Networks. Note: one of the authors of ResNeXt is Kaiming He, the whizkid from MSR Asia who helped invent the ImageNet 2015-winning Residual Networks.

Rise of the terminator accountants: Number of traders employed on the US cash equities trading desk at Goldman Sachs’s New York office:
…in 2000: 600
…in 2017: 2, supported by 200 computer engineers.
…”Some 9,000 people, about one-third of Goldman’s staff, are computer engineers,” reports MIT Technology Review.

AI: 2. Hand-tuned algorithms: 0: New research shows how we can use modern AI techniques to learn representations of complex problems, then use the resulting predictive models in place of hand-tuned algorithms. The "Solving the quantum many-body problem with artificial neural networks" research shows how this technique can be competitive with state-of-the-art approaches. "With further development, it may well prove a valuable piece in the quantum toolbox," the researchers write.
…Similarly, Lawrence Berkeley National Laboratory recently trained machine learning systems to predict metallic defects in materials, lowering the cost of conducting research into advanced alloys and other lightweight new materials. “This work is essentially a proof of concept. It shows that we can run density functional calculations for a few hundred materials, then train machine learning algorithms to accurately predict point defects for a much larger group of materials,” the researchers say. “The benefit of this work is now we have a computationally inexpensive machine learning approach that can quickly and accurately predict point defects in new intermetallic materials. We no longer have to run very costly first principle calculations to identify defect properties for every new metallic compound.”
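…the pattern both results share is the surrogate model: run the expensive simulation a few hundred times, fit a cheap learned regressor to the results, then query the regressor instead of the simulator. A hypothetical sklearn sketch – the features and the stand-in 'simulator' below are invented, where the real work uses density functional theory outputs:

```python
# Surrogate-model pattern: learn a cheap stand-in for an expensive calculation.
from sklearn.ensemble import RandomForestRegressor
import numpy as np

rng = np.random.default_rng(5)
descriptors = rng.random((300, 8))      # made-up composition/structure features
defect_energy = descriptors @ rng.random(8) + 0.1 * rng.normal(size=300)  # "DFT" outputs

model = RandomForestRegressor(n_estimators=100).fit(descriptors, defect_energy)
new_materials = rng.random((5, 8))
print(model.predict(new_materials))     # near-instant, instead of hours of simulation
```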

Microscopic, power-sipping AI circuits: researchers with the University of Michigan and spinout CubeWorks have created a deep learning processor a fraction of the size of a fingernail. The chip implements deep neural networks on 7.1mm² of silicon while sipping a mere 288 microwatts of power (PDF). They imagine the chip could be used for basic pattern recognition tasks, like a home security camera knowing to only record in the presence of a moving human or animal rather than a shifting tree branch. The design hints at an era for AI where crude pattern recognition capabilities are distributed in processors so tiny and discreet you could end up with fragments in your shoes after walking on some futuristic beach. Slide presentation with more technical information here.

AI needs its own disaster: AI safety researcher Stuart Russell worries that it may take a Chernobyl-scale disaster in AI to wake the rest of the world up to the need for fundamental research on AI safety…
…“I go through the arguments that people make for not paying any attention to this issue and none of them hold water. They fail in such straightforward ways that it seems like the arguments are coming from a defensive reaction, not from taking the question seriously and thinking hard about it but not wanting to consider it at all,” he says. “Obviously, it’s a threat. We can look back at the history of nuclear physics, where very famous nuclear physicists were simply in denial about the possibility that nuclear physics could lead to nuclear weapons.”
Some disagree about the dangers of AI. Andrew Ng, a former Stanford professor and Google Brain founder who now runs AI for Chinese tech giant Baidu, talked about the “evil AI hype circle” in a recent lecture at the Stanford Graduate School of Business (video). His view is that some people exaggerate the dangers of “evil AI” to generate interest in the problem, which brings in more funding for research, which goes on to fund “anti-evil-AI” companies. “The results of this work drives more hype”, he says. The funding for these sorts of organizations and individuals is, he adds, “a massive misallocation of resources”. Another worry of Ng’s: the focus on evil AI can distract us from a much more severe, real problem, which he says is job displacement.
Facebook’s head of AI research, Yann LeCun, said in mid-2016: “I don’t think AI will become an existential threat to humanity… If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.”
… I worry that AI safety is such a visceral topic that people react quite emotionally to it, and get freaked out by the baleful implications to the point they don’t consider the actual research being done. Some problems people are grappling with in AI safety include: securing machines against adversarial examples, figuring out how to give machines effective intuitions through logical induction, and ensuring that cleaning robots don’t commit acts of vandalism to achieve a tidy home, among others. These all seem like reasonable avenues of research that will improve the stability and resilience of typical AI systems…
… but don’t take my word for it – read about AI safety yourself and come to your own decision: for your next desert island vacation (stranding), consider bringing along a smorgasbord of these 200 AI resources, curated by the Center for Human-Compatible AI at UC Berkeley.
…and if you want to do something about AI safety, consider applying for a new technical research intern position with the Center for Human-Compatible AI at UC Berkeley and the Machine Intelligence Research Institute.

Satellite eyes, served three different ways: Startup Descartes Labs has released a new set of global satellite maps in three distinct bands – RGB, Red Edge, and synthetic aperture radar range/azimuth measurements. The imagery has been pre-processed to remove clouds and adjusted for the angle of the satellite camera as well as the angle of the sun.

Declining economies of scale: just as companies can expect to see their rate of growth flatten as they expand, deep learning systems see diminishing returns as they add more GPUs, as the benefits they gain start to be nibbled away by the latency and infrastructure costs introduced by running multiple GPUs in parallel…
… New work from Japanese AI startup Preferred Networks shows that its free ‘Chainer’ software can generate a 100X performance speedup from 128 GPUs. This is extremely good, but still highlights the slightly declining returns people get as they scale up systems, as the quick arithmetic below shows.
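…the back-of-envelope calculation (mine, not Preferred Networks’) is just achieved speedup divided by ideal linear speedup:

```python
# Parallel scaling efficiency: achieved speedup versus ideal linear speedup.
gpus = 128
speedup = 100.0
efficiency = speedup / gpus
print(f"Scaling efficiency: {efficiency:.0%}")  # ~78% of ideal linear scaling
```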

Gender IS NOT in the eyes of the beholder: New research “Gender-From-Iris or Gender-From-Mascara?” appears to bust experimental results showing you can predict gender from a person’s iris, instead pointing out that many strong results appear to be contingent on detectors that learn to spot mascara. Machine learning’s law of unintended consequences strikes again!…
… It reminds me of an apocryphal story an AI researcher once told me: in the 1980s the US military wanted to use machine learning algorithms to automatically classify spy satellite photos according to whether they contained Soviet tanks or not. The system worked flawlessly in tests, but when they put it into production they discovered that its results were little better than random… After some further experimentation they discovered that every single photo in their training data that contained a tank also contained some kind of cloud. Their ML algorithms had therefore developed a superhuman cloud-classifying ability, and didn’t have the foggiest idea of what a tank was!

Rise of the machines = the end of capitalism as we know it? “Modern Western society is built on a societal model whereby Capital is exchanged for Labour to provide economic growth. If Labour is no longer part of that exchange, the ramifications will be immense,” said one respondent to a Pew Internet report about the ‘pros and cons of the algorithm age’.
…“I foresee algorithms replacing almost all workers with no real options for the replaced humans,” says another respondent.

Bushels of subterfuge in DeepMind’s apple orchard: As I write this newsletter on a Sunday, I’m still recovering from my usual morning activity – chasing my friend round an apple orchard, using a laser beam to periodically paralyze them, letting me hop over their twitching body to gather up as many apples as I can…
… in a strange turn of events it appears that Google DeepMind has been spying on my somewhat unique form of part-time sport, and has replicated it in a game environment called ‘Gathering’, which it has used to explore the sorts of collaborative and combative strategies that AI systems evolve… there’s also another environment called Wolfpack – the less said about that, the better. This sort of research is potentially very useful for large multi-agent simulations, which many people in AI are betting on as an area where exploration could yield research breakthroughs.

Lines in Google’s codebase: 2 billion
Number of commits into aforementioned codebase per day: 40,000
…From: “Software Engineering at Google”.

OpenAI Bits and Pieces

Learning how to walk, with OpenAI Gym: The challenge: model the motor control unit of a pair of legs in a virtual environment. “You are given a musculoskeletal model with 16 muscles to control. At every 10ms you send signals to these muscles to activate or deactivate them. The objective is to walk as far as possible in 5 seconds.” The components: OpenSim, OpenAI Gym, keras-rl, and much more. Try the challenge, but stay for the doddering legs!
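The challenge environments expose the standard Gym reset/step interface, so a control loop looks like the sketch below. The environment name here is a stand-in of mine (the challenge ships its own OpenSim-based walking environment), and the random actions are just to show the plumbing:

```python
# Minimal Gym-style control loop: sample random "muscle activations" each
# step until the episode ends. The environment name is an illustrative
# stand-in for the challenge's own musculoskeletal environment.
import gym

env = gym.make("Humanoid-v1")  # stand-in for the challenge's walking env
observation = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random actuations
    observation, reward, done, info = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```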

Arxiv Sanity – bigger, better, smarter! OpenAI’s Andrej Karpathy has updated Arxiv Sanity, an indispensable resource that I and many others use to keep track of AI papers. New features: better algorithms for surfacing papers people have shown interest in, and a social feature. (Also see Stephen Merity’s social tracker trendingarxiv.)

AI Control: OpenAI researcher Paul Christiano writes an informative blog on AI safety and security, called AI Control. In the latest post, “Directions and desiderata for AI control” he talks about some particularly promising research directions in AI safety.

OpenAI does open mic night: Catherine Olsson and I both gave short talks at the Silicon Valley AI Research meetup in SF last week. Catherine’s video. Jack’s video.

Asilomar conference: articles in Wired and Slate Star Codex about the Beneficial AI conference held at Asilomar in early January.

Tech tales:

[Diplomatic embassy, Beijing, 2025:]

It was a moonless mid-winter pre-dawn, when the flock of drones came overhead and emptied their cargo of chips over the building. The embassy cameras and searchlights picked out some of the thousands of chips as they fell down, hissing like hail on glass and steel roofs. Those staffers that heard them fall shivered instinctively, and afterwards some said that, when caught in the spotlights, the chips looked like metallic snow.

Over the next day the embassy staff did what they could, going around with vacuum cleaners and tiny mops, and ordering an external cleanup crew, but the snowfalls of chips – each one a tiny sensor, its individually meager capabilities offset by the sheer number of its kin – would come again, and eventually security protocols were tightened and people just resigned themselves to it.

Now you had to negotiate a baroque set of security measures to get into the embassy. But still the chips got in, and cleaners would find them tracked into bathrooms, or sitting in undusted nooks and crannies. Outside, the air hummed with invisible surveillance, as the numerous little chips used their AI processors to turn on microphones in the presence of certain phrases. The data evaporated into the air, absorbed by flocks of small drones which would fly over the embassy, as they did over every town and every major city in every developed country, hoovering up data from what some termed ‘State Dust’. The chips would lie in wait, consuming almost no power, till they heard a particular encrypted call-out from the government drones.

Even the chips that found themselves indoors would eventually be outside again, as some escaped through improper waste disposal measures, and others had their plastic barbs hook fortuitously on a trouser leg or shoe sole, to then be carried outside. And so their data was extracted as well and a titanic jigsaw was assembled.

It didn’t matter how partial the data from each chip was, given how many there were, and the frequency of their harvesting. Gather enough data and at some point you can make sense of the smallest little fragments, but you can only do this for all the little whispers of data from a city or a country if you’re a machine.

Import AI: Issue 28: What one quadrillion dollars pays for, research paper archaeology, and AI modules for drones

Cost of automating the entire global economy? One quadrillion dollars.
Requirements for the resulting system to be able to perfectly replace all human labor:
…Computation: 10^26 operations per second
…Memory: 10^25 bits
…I/O: 10^19 input-output bits per second
…Knowledge ingestion: 7 bits per person per second
…and many more marvelous numbers in this essay by data compression expert Matt Mahoney on ‘the cost of AI’. A virtuoso performance of extrapolation and (with apologies to Mitchell & Webb) numberwang-ery – see the quick scale check below.
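…to get a feel for that compute figure, here’s a scale check of my own – the per-GPU throughput is my ballpark assumption, not Mahoney’s:

```python
# How many GPUs would 10^26 operations/second require? Assumes roughly
# 10^13 ops/sec per high-end GPU -- my ballpark assumption, not Mahoney's.
ops_needed = 1e26
ops_per_gpu = 1e13
print(f"GPUs required: {ops_needed / ops_per_gpu:.0e}")  # 1e+13: ten trillion GPUs
```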

Google self-driving cars, report card (PDF):
…Miles driven in 2015: 424,331
…Miles driven in 2016: 635,868
…Disengagements per 1,000 miles, 2015: 0.80
…Disengagements per 1,000 miles, 2016: 0.20
… now let’s see how they do with hard training situations for which there is little good training data, like navigating a sandstorm-ridden road in the Middle East. (For what those per-1,000-mile rates mean in absolute terms, see the arithmetic below.)
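…turning the rates back into approximate absolute counts is just rate × miles ÷ 1,000:

```python
# Convert disengagement rates back into approximate absolute counts.
miles = {2015: 424_331, 2016: 635_868}
rate_per_1k_miles = {2015: 0.80, 2016: 0.20}
for year in (2015, 2016):
    total = miles[year] * rate_per_1k_miles[year] / 1000
    print(year, f"~{total:.0f} disengagements")
# 2015: ~339, 2016: ~127 -- 50% more miles, roughly a third of the interventions.
```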

How much is an AI worth? In which Google’s head of M&A, Don Harrison, says Google is happy to throw large quantities of cash at AI companies. “It’s very hard to apply valuation metrics to AI. These acquisitions are driven by key talent — really smart people. It’s an area I’m focused on and our team is focused on. The valuations are part and parcel of the promise of the technology. We pay attention to it but don’t necessarily worry about it,” he says. (Emphasis mine.)

Your organization and public data: a message to Import AI readers: most organizations gather some form of data which can be safely published, and the world is richer for it. Case in point: Backblaze and its latest report on hard drive reliability. These reports should factor into any HDD buyer’s decision, as they represent good, statistically significant real-world data on drive performance. If you work at an organization that may have similar data that can be externalized, please try to make this happen – I’ll be happy to help, so feel free to email me.

Measurement: besides Atari, what are other good measures for the progression of reinforcement learning techniques? As we move into an era dominated by dynamic environments supplied by tools like Universe, DeepMind Lab, Malmo, TorchCraft, and others, how do we effectively model the progress of agents in a way that captures the full spectrum of their growing capabilities?

AI for researching AI: the Allen Institute for AI has released Citeomatic, a tool that uses deep learning to predict citations for a given paper. To test out the system I fed it OpenAI’s RL^2 paper and it gave me back over 30 papers that it recommended we consider citing. Many of these seem reasonable, e.g. ‘Solving partially observable reinforcement learning problems with RNNs’, etc…
…Most of all, this seems like a great tool to help researchers find papers they should be reading. AI has a large literature and researchers frequently find themselves stumbling on good ideas from the previous decade. Any tool that can make this form of intellectual archaeology more efficient is likely to aid in science.

From the Dept. of Recursive Education: Tutorial from Arthur Juliani outlines how to build agents that learn how to learn, with code inspired by the DeepMind paper “Learning to reinforcement learn”, and the OpenAI paper “RL^2”.

Explanations as cognitive maps: the act of explaining situations lets us deal with the chaotic novelty of the world, and create useful abstractions we can use to reason about it. More detail, with many great research references, in this blog from Shakir at DeepMind.

Executive Order strikes a chill in math, AI community: President Trump’s executive order banning people from seven predominantly Muslim countries from coming to the US will have significant effects on academia, according to mathematician Terry Tao. “This is already affecting upcoming or ongoing mathematical conferences or programs in the US, with many international speakers (including those from countries not directly affected by the order) now cancelling their visit, either in protest or in concern about their ability to freely enter and leave the country,” he writes. “It is still possible for this sort of long-term damage to the mathematical community (both within the US and abroad) to be reversed or at least contained, but at present there is a real risk of the damage becoming permanent.”…
… another illustration of the law of unintended consequences when politics runs amok. It reminds me of one of the more subtle and chilling consequences of the UK’s decision to leave the European Union: it reduced collaboration between EU and UK scientists, as EU researchers worried that, because their grants were contingent on EU funding, collaboration with UK scientists could violate funding clauses. Scientists need to collaborate across international borders.

“Give it the latest personality module, we’re wheels up in five minutes!” – autonomous drones are going to operate in such a huge possibility space that today’s if-this, then-that rule systems will be insufficient, according to this research paper from the University of Texas at Austin and SparkCognition. Eventually, scientists may use a combination of simulators and real world data to train different drone brains for different missions, then swap bits of them in and out as needed. “We propose delinking control networks from the ensembler RNN so that individual control RNNs may be evolved and trained to execute differing mission profiles optimally, and these “personalities” may be easily uploaded into the autonomous asset with no hardware changes necessary,” they write.

Language as the link between us and the machines: CommAI: Facebook AI researchers believe language will be crucial to the development of general purpose AI, and have outlined a platform named CommAI (short for communication-based AI) that uses language to train and communicate with agents…
…The idea is that the AI will operate in a world, attempting to complete tasks, and its only major point of input/output with the operator will be a language interface (a toy version of this exchange is sketched below). “In a CommAI-mini task, the environment presents a (simplified) regular expression to the learner. It then asks it to either recognize or produce a string matching the expression. The environment listens to the learner response and it provides linguistic feedback on the learner’s performance (possibly assigning reward). All exchanges take place at the bit level,” they write.
… whether this solves the language ‘chicken and egg’ problem remains to be seen. Language is hard because it is a high-level abstraction that refers to a bunch of low-level inputs. “Horse” is our mental shorthand for the flood of sensory data that coincides with our experience of the creature. Ideally, we want our AIs to learn similar associations between the words in their language model and their experience of the world. CommAI is structured to encourage this sort of grounding.
…“We hope the CommAI-mini challenge is at the right level of complexity to stimulate researchers to develop genuinely new models,” they write.
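…here’s the promised toy, Python-level version of that recognize-a-regex loop. The patterns, strings, and reward scheme are all invented for illustration, and the real platform conducts the exchange bit by bit rather than through function calls:

```python
# Toy CommAI-mini-style task: the environment presents a regular expression
# and a candidate string; the learner answers True/False; reward follows.
import re
import random

TASKS = [("ab*", "abbb"), ("ab*", "ba"), ("(ab)+", "ababab"), ("(ab)+", "aab")]

def environment_step(learner):
    pattern, string = random.choice(TASKS)
    answer = learner(pattern, string)                # learner's yes/no guess
    truth = re.fullmatch(pattern, string) is not None
    return 1 if answer == truth else -1              # reward as feedback

random_learner = lambda pattern, string: random.random() < 0.5
rewards = [environment_step(random_learner) for _ in range(100)]
print("Average reward of a random learner:", sum(rewards) / len(rewards))
```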

Reinforcement learning goes from controlling Atari games, to robots, to… freeway onramps? “Expert level control of Ramp Metering based on Multi-Task deep reinforcement learning” shows how RL methods can be extended to the control systems for the traffic lights that filter cars onto freeways. In tests, the researchers’ system is able to learn an effective policy for controlling traffic across a 20 mile-long section of the 210 freeway in Southern California. Their technique beats traditional reinforcement learning algorithms, as well as a baseline system in which no control occurs at all…
…“By eliminating the need for calibration, our method addresses one of the critical challenges and dominant causes of controller failure making our approach particularly promising in the field of traffic management,” they write.

Soft robots for hard work: UK online supermarket Ocado has tested a new robotic hand, created as part of a European Union ‘Horizon 2020’ research initiative for soft robots. The hand can pick up objects of varying sizes and textures, and is shown deftly handling tricky items like limes and apples. It uses a dexterous gripper called ‘RBO Hand 2’, developed by the Technical University of Berlin. The approach is reminiscent of that of SF-based Otherlab, which is using soft materials and air to build more flexible robots and exoskeletons.

Sizing up deep learning frameworks: the AI community is bad at two things: reproducibility and comparability. The research paper “Benchmarking state-of-the-art deep learning software tools” assesses the varying properties of frameworks like TensorFlow, Caffe, Theano, CNTK, and MXNet, comparing their performance on a wide variety of tasks and hardware substrates. Worth reading to get an idea of the different capabilities of this software.

Import AI administrative note:

The riddle of the missing research paper: Last week I profiled some new research from MIT that involved automatically tying spoken words and sections of imagery together. However, due to a clerical error I did not link to the paper: “Learning Word-Like Units from Joint Audio-Visual Analysis”.

OpenAI bits & pieces:

23 principles to rule them all, 23 principles to bind them: earlier this month a bunch of people involved in the development, analysis, and study of artificial intelligence gathered at Asilomar for the “Beneficial AI” conference, a sequel to a 2015 gathering in Puerto Rico. Many people from OpenAI attended, including myself. There, the attendees helped hash out a set of 23 principles for the development of AI that signatories shall attempt to abide by.

Ian Goodfellow (OpenAI) and Richard Mallah (FLI), in conversation: podcast between Ian and Richard, in which they talk about some of the big AI breakthroughs that happened in 2016, and look ahead to some of the things that may define 2017 (machine learning security! Further development of neural translation systems! Work on OpenAI Universe! And more).

Inverse autoregressive flow 2.0: Durk Kingma et al have posted a substantial update to the paper: “Improving Variational Inference with Inverse Autoregressive Flow”.
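For the unfamiliar, the core move in IAF is a shift-and-scale transform whose Jacobian is triangular, so its log-determinant is just a sum of logs. Here’s a numpy sketch of one step, with the autoregressive network faked by fixed arrays – the paper instead uses a MADE-style network conditioned on earlier dimensions of z:

```python
# One inverse autoregressive flow step: z' = sigma * z + mu. Because mu and
# sigma depend only on earlier dimensions of z, the Jacobian is triangular
# and log|det| = sum(log(sigma)). Here mu and sigma are fixed stand-ins.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=4)                   # sample from the base distribution
mu = np.array([0.1, -0.2, 0.3, 0.0])     # stand-in autoregressive shifts
sigma = np.array([1.2, 0.8, 1.0, 0.5])   # stand-in scales (must be positive)

z_new = sigma * z + mu                   # the flow transform
log_det = np.sum(np.log(sigma))          # change-of-density correction
print(z_new, log_det)
```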

Do fake galaxies dream of the GANs that created them? Ian Goodfellow is interviewed in this article in Nature about how scientists are starting to use AI-generated images to create training datasets to teach computers to spot real galaxies.

Tech Tales:

[2023, a cybercafe in Ankara]

When you were young you studied ants, staring at their nests as they grew, spreading tendrils through the dirt, sometimes brushing their antennae against the perspex walls sandwiching their captured colony. But you liked them best outside – crawling from a crack in the steps by the garage and charting a path along the sidewalk, carrying blades of grass and pebbles into some other nest. Your house was full of the signs of ants; each blob of silicone gel and each mortared-over hole testifying to some pitched battle.

Modern spambots feel a lot like ants to you. After the first AI systems went online around 2018, the bots gained the ability to learn from the conversations they had with people on the internet. After this, their skills improved rapidly and their manners became more convincing.

Information started to flow between people and the bots, improving the AI’s ability to gain trust and effectively launder ideas, viruses, links, and eventually outright fraud. Spend a year arguing on the internet with someone and, stranger or no, there’s a good chance you’ll click on a link they post, seeing if it’s one of their nutty websites or something else to confirm your beliefs about them. And all your talking has taught them a lot about you.

The attacks mounted by the AIs destroyed the value of numerous publicly traded social companies. People changed their internet habits, becoming more cautious, better at security, more effective at uploading the sorts of words and images and videos that persuade people they are real humans in the real world. And the AIs learned from this too.

So you have to hunt them out, tracing their paths and links to find the nests from which they emanate. Like the ants, they don’t yield much insight when imprisoned in display cases – synthetic social networks, where the AI bots are studied as they interact with your own simulated people. You feed data to their control systems and try to simulate the experience of the real internet, but soon your little model world goes out of sync with reality: cut off from the links on the real internet where it gets its software updates – the few bits of code still pushed by humans – your captive bot fails to keep up with its peers roaming wild.

So now you hunt these controllers through the internet and in real life, switching between VPNs and ethereal internet sites, and dusty internet cafes in the Baltics and, now, Ankara. But recently you’ve been having trouble finding the humans, and you wonder if some of the swarms you are tracking have stopped taking orders from people. You’ll find out soon enough – there’s an election next year.