Import AI: Issue 45: StarCraft rumblings, resurrecting ancient cities with CycleGAN, and Microsoft’s imitation data release

by Jack Clark

Resurrecting ancient cities via CycleGAN: I ran some experiments this week where I used a CycleGAN implementation (from this awesome GitHub repo) to convert ancient hand-drawn city maps (Jerusalem, Babylon, London) into modern satellite views.
…What I found most surprising about this project was the relative ease of it – all it really took was a bit of data munging on my end, and having the patience to train a Google Maps>Google Maps Satellite View network for about 45 hours. The base model generalized well – I figure it’s because the overhead Google Maps views have a lot of semantic similarity to the pen- and brush-strokes in city illustrations.
…I’m going to do a few more experiments and will report back here if any of it is particularly interesting. Personally, I find that one of the best ways to learn about anything is to play with it, aimlessly fiddling for the sheer fun of it, discovering little gems in unfamiliar ground. It’s awesome that modern AI is so approachable that this kind of thing is possible.
…Components used: PyTorch, a CycleGAN implementation trained for 45 hours, several thousand map pictures, a GTX 1070, patience, Visdom.
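…If you want to try something similar, here’s a minimal PyTorch inference sketch – not the API of any particular CycleGAN repo – which assumes you have already trained a map-to-satellite generator and saved the whole module with torch.save; the checkpoint and image filenames below are placeholders:

import torch
from PIL import Image
from torchvision import transforms

# Standard CycleGAN-style preprocessing: resize and scale pixels to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

G = torch.load("g_map2sat.pth")   # hypothetical checkpoint: the saved map->satellite generator
G.eval()

img = preprocess(Image.open("babylon_map.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    fake_satellite = G(img)       # generator output, still in [-1, 1]

out = (fake_satellite.squeeze(0) * 0.5 + 0.5).clamp(0, 1)   # back to [0, 1]
transforms.ToPILImage()(out).save("babylon_satellite.png")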

Learning from demonstrations: An exciting area of current reinforcement learning research is developing AI systems that can learn to perform tasks from human demonstrations, rather than requiring a hand-tuned reward function. But gathering this kind of data at scale is difficult and expensive (just imagine if arcades were more popular and had subsidized prices in exchange for collecting your play data!). That’s why it’s great to see the release of The Atari Grand Challenge Dataset from researchers at Microsoft Research and Aachen University. The dataset consists of ~45 hours of playtime spread across five Atari games, including the notoriously hard-to-crack Montezuma’s Revenge.
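…The on-disk format is specific to the release, but the simplest way such demonstrations get used is behavioural cloning: treat the recorded (frame, action) pairs as a supervised dataset and train a policy to predict the human’s action. Here’s a minimal PyTorch sketch of that idea – the data loader is a random stand-in with the right shapes, not the real dataset:

import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.conv = nn.Sequential(                 # small Atari-style convnet over stacked frames
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(64 * 9 * 9, n_actions)

    def forward(self, frames):                     # frames: (batch, 4, 84, 84)
        return self.head(self.conv(frames))

policy = PolicyNet(n_actions=18)                   # full Atari action set
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the real demonstration data: random tensors shaped like stacked frames and actions.
demo_loader = [(torch.rand(32, 4, 84, 84), torch.randint(0, 18, (32,))) for _ in range(10)]

for frames, actions in demo_loader:
    loss = loss_fn(policy(frames), actions)        # match the human player's chosen action
    opt.zero_grad()
    loss.backward()
    opt.step()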

AI’s gender disparity, visualized: AINow co-founder Meredith Whittaker did a quick analysis of the names on papers accepted to ICML and found that men vastly outnumber women. Without knowing the underlying submission data it’s tricky to use this to argue for any kind of inherent sexism in the paper selection process, but it is indicative of the gender disparity in AI – one of the many things the research community needs to fix as AI matures.

Embedding the un-embeddable: In Learning to Compute Word Embeddings On the Fly researchers with MILA, DeepMind, and Jagiellonian University propose a system to easily learn word embeddings for extremely rare words. This is potentially useful, because while deep learning approaches excel in environments containing a large amount of data, they tend to fail when dealing with small amounts of data.
…The approach works by training a neural network to predict the embedding of a word given a small amount of auxiliary data. Multiple auxiliary sources can be combined for any given word. When dealing with a rare word, the researchers fire up this network, feed it a few bits of auxiliary data, and have it predict where that word’s embedding should sit in the full embedding space. This means you can develop your main set of embeddings by training in environments with large amounts of data, and whenever you encounter a rare word you instead use this system to predict an embedding for it, letting you get around the lack of data, though with some imprecision.
…The researchers evaluate their approach in three domains: question answering, entailment prediction, and language modelling, attaining competitive results in all three of these domains.
…”Learning end-to-end from auxiliary sources can be extremely data efficient when these sources represent compressed relevant information about the word, as dictionary definitions do. A related desirable aspect of our approach is that it may partially return the control over what a language processing system does into the hands of engineers or even users: when dissatisfied with the output, they may edit or add auxiliary information to the system to make it perform as desired,” they write.
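…To make the mechanism concrete, here’s a small PyTorch sketch of the core idea: predicting an embedding for a rare word from the embeddings of its dictionary-definition words. The mean-pooling plus linear layer here is a simplification for illustration, not the paper’s exact architecture:

import torch
import torch.nn as nn

class DefinitionEmbedder(nn.Module):
    def __init__(self, vocab_size, dim=300):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)   # embeddings of common words
        self.proj = nn.Linear(dim, dim)                 # learned "definition -> embedding" map

    def forward(self, definition_ids):                  # (batch, length) ids of definition words
        defn_vectors = self.word_emb(definition_ids)    # (batch, length, dim)
        pooled = defn_vectors.mean(dim=1)               # summarise the definition
        return self.proj(pooled)                        # predicted embedding for the rare word

embedder = DefinitionEmbedder(vocab_size=50000)
# e.g. a short dictionary definition, tokenised into ids of common words
definition_ids = torch.randint(0, 50000, (1, 4))
rare_word_vector = embedder(definition_ids)             # drop this in wherever an embedding is needed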

Battle of the frameworks: CNTK matures: Microsoft has released version 2.0 of CNTK (the Microsoft Cognitive Toolkit), its AI development framework. New features include support for Keras, more Java language bindings, and tools for compressing trained models.
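…If you want to try CNTK behind Keras, the switch is made by pointing Keras at the CNTK backend before the first import – a minimal sketch, assuming you have CNTK 2.0 installed alongside a Keras version that supports the CNTK backend:

import os
os.environ["KERAS_BACKEND"] = "cntk"   # select the CNTK backend before importing Keras

import keras                           # should report that it is using the CNTK backend
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(10, activation="softmax", input_shape=(784,))])
model.compile(optimizer="sgd", loss="categorical_crossentropy")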

Stick this in your calendar, Zerg scum! The Call for Papers just went out for the Video Games and Machine Learning workshop at ICML in Australia this year. Confirmed speakers include people from Microsoft, DeepMind, Facebook, and others. Notable: someone from Blizzard will be giving a talk about StarCraft, a game around which the company has partnered with DeepMind to develop AI tools.
Related: Facebook just released V1.3-0 of TorchCraft, an open source framework for training AI systems to play StarCraft. The system now supports Python and also has improved separate data streams for feature-training, such as maps for walkability, buildability, and ground-height.

Ultra-cheap GPU substrates for AI development: Chip company NVIDIA has seen its stock almost triple in value over the last year as investors realized that its graphical processing units are the proverbial pickaxe of the current AI revolution. But in the future NVIDIA will likely have more competition (a good thing!) from a range of semiconductor startups (Graphcore, Wave, and others), established rivals (Intel via its Nervana and Altera acquisitions, AMD via its extremely late dedication to getting its GPUs to run AI software), and possibly from large consumer tech companies such as Google with its Tensor Processing Units (TPU).
…So if you’re NVIDIA, what do you do? Aside from working to design new GPUs around specific AI needs (see: Volta), you can also try to increase the number of GPU-enabled servers sold around the world. To that end, the company has partnered with the so-called ODM (original design manufacturer) companies Foxconn, Quanta, Inventec and Wistron. These companies are all basically intermediaries between component suppliers and massive end-users like Facebook/Microsoft/Google/and so on, and are famed for designing powerful servers available at a low price (if bought in sufficiently high volumes).

The power of simplicity: What wins AI competitions – unique insight? A PhD? Vast amounts of experience? Those help, but probably the single most important thing is consistent experimentation, says Keras creator Francois Chollet, in a Quora answer discussing why Keras features in so many top Kaggle competitions.
…”You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”
…Even in AI, practice makes perfect.

Will the AI designers of the future be more like sculptors than programmers? AI seems to naturally lend itself to different forms of development than traditional programming. That’s because most of the neural network-based technologies that are currently the focus of much AI research are inherently spatial: deep learning systems are built from stacks of layered neural networks, and the arrangement of those layers shapes the functions the overall system approximates.
…Therefore, it’s interesting to look at the types of novel user interface design that augmented- and virtual-reality make possible and think of how it could be applied to AI. Check out this video by Behringer of their ‘DeepMind’ (no relation to the Go-playin’ Google sibling) system, then think about how it might be applied to AI.

CYBORG DRAGONFLY CYBORG DRAGONFLY CYBORG DRAGONFLY: I’m not kidding. A company named Draper has built a product called DragonflEye: a living dragonfly augmented with solar panels and with electronics that interface with its nervous system.
…The resulting system “uses optical electrodes to inject steering commands directly into the insect’s nervous system, which has been genetically tweaked to accept them. This means that the dragonfly can be controlled to fly where you want, without sacrificing the built-in flight skills that make insects the envy of all other robotic micro air vehicles,” according to IEEE Spectrum.

Are we there yet? Experts give thoughts on human-level AI and when it might arrive: How far away is truly powerful AI? When will AI be able to perform certain types of jobs? What are the implications of this sort of intelligence? Recently, a bunch of researchers decided to quiz the AI community on these sorts of questions. Results are outlined in When Will AI Exceed Human Performance? Evidence from AI Experts.
…The data contains responses from 352 researchers who had published at either NIPS or ICML in 2015, so keep the (relatively small) sample size in mind when evaluating the results.
…One interesting observation pulled from the abstract is that: “researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.”
…The experts also generate a bunch of predictions for AI milestones, including:
…2022: AI can beat top human players at StarCraft.
…2026: AI can write a decent high school level essay.
…2028: An AI system can beat a human at Go given the same amount of training.
…2030: AI can completely replace a retail salesperson.
…2100: AI can completely automate the work of an AI researcher. (How convenient!)

Monthly Sponsor: Amplify Partners is an early-stage venture firm that invests in technical entrepreneurs building the next generation of deep technology applications and infrastructure. Our core thesis is that the intersection of data, AI and modern infrastructure will fundamentally reshape global industry. We invest in founders from the idea stage up to, and including, early revenue.
…If you’d like to chat, send a note to david@amplifypartners.com.

Tech Tales:

[2024: An advertising agency in Shoreditch, East London. Three creatives stand around wearing architect-issue black turtlenecks and jeans. One of them fiddles with a tangle of electronic equipment, another inspects a VR headset, and the third holds up a pair of gloves with cables snaking between them, the headset, and the other bundle of electronics. The intercom crackles, announcing the arrival of the graffiti artist, who lopes into the room a few seconds later.]


James, so glad you could make it! Tea? Coffee?
Nah I’m okay, let’s just get started then shall we?
Okay. Ever used these before? says one of them, holding up the electronics-coated gloves.
No. Let me guess – virtual hands?
Exactly.
Alright.

Five minutes later and James is wearing a headset, holding his gloved hands as though spray-painting. In his virtual reality view he’s standing in front of a giant, flawless brick wall. There’s a hundred tubs of paint in front of him and in his hand he holds a simulated spraycan that feels real because of force feedback in the gloves.


Funny to do this without worrying about the coppers, James says to himself, as he starts to paint. Silly creatives, he thinks. But the money is good.

It takes a week, and by the end James is able to stare up at the virtual wall, gazing at a giant series of shimmering logos, graffiti cartoons, flashing tags, and other visual glyphs and phrases. Most of these have been daubed all across South London in one form or another over the last 20 years, snuck onto brick walls above train-station bridges, or slotted beneath window rims on large warehouses. Along with the paycheck they present him with a large, A0 laminated print-out of his work and even offer to frame it for him.


No need, he says, rolling up the poster.

He bends one of the tube ends as he slips an elastic band over it and one of the creatives winces.

I’ll frame it myself.

For the next month, the creatives work closely with a crew of AI engineers, researchers, roboticists, artists, and virtual reality experts to train a set of industrial arms to mimic James’s movements as he made his paintings. The force-feedback gloves he wore collected enough information for the robot arms to learn to approximate his movements with their own skeletal, hand-like grippers, and the footage from the cameras that filmed him as he painted helps the robots adjust the rest of their motion. Another month goes by and, on a film lot in Los Angeles, James’s London graffiti starts to appear on walls, sprayed on by robot arms. Weeks later it appears in China, different parts combined and tweaked by generative AI algorithms, coating a fake version of East London in graffiti for Chinese tourists who only travel domestically. A year after that, James sees his graffiti covering the wall of a street in South Boston in a movie set there, and uses his smartphone to take a photo of his simulated picture made real in a movie.


Caption: “Graffin up the movies now.”

Techniques that inspired this story: Industrial robots, time-contrastive networks, South East London (Lewisham / Ladywell / Brockley / New Cross), Tilt Brush.

OpenAI bits&pieces:

AlphaGo versus the real world: Andrej Karpathy has written a short post trying to outline what DeepMind’s AlphaGo system is capable of and what it may struggle with.

DeepRL bootcamp: Researchers from the University of California at Berkeley, OpenAI, and DeepMind are hosting a deep reinforcement learning workshop in late August in Berkeley. Apply here.