Mapping Babel

Import AI: Issue 34: DARPA seeks lifelong learners, didactic learning via Scaffolding Networks, and even more neural maps

 

Lifelong learners, DARPA wants you: DARPA is funding a new program called ‘Lifelong Learning Machines’ (L2M). The plan is to stimulate research into AI systems that can improve even after they’ve been deployed, and ideally without needing to sync up with a cloud. This will require new approaches to system design (and my intuition tells me that things like auxiliary objective identification in RL, or fixing the catastrophic forgetting problem, will be needed here)…
…there’s also an AI safety component to the research, as it “calls for the development of techniques for monitoring a ML system’s behavior, setting limits on the scope of its ability to adapt, and intervening in the system’s functions as needed.”
… it also wants to fund science that studies living things and explores what can be derived from that.

Baidu employs 1,300 AI researchers and has spent billions of dollars on development of the tech in the last two and a half years, reports Bloomberg.

Better intuitions through visualization: Facebook has released Visdom, a tool to help researchers and technologists visualize the output of scientific experiments using dynamic, modern web technologies. People are free to mix and match and modify different components, tuning the visualizer to their needs.

Learning to reason about images: One of the challenges of language is its relation to embodiment – our sense of our minds being coupled to our physical bodies – and our experience of the world. Most AI systems are trained purely on text without other data, so their ability to truly understand the language they’ve been exposed to is limited. You don’t know what you don’t know, etc. Moreover, it appears that having a body, as such, helps with our own understanding of concepts related to physics, for example. Many research groups (including OpenAI) are trying to tackle this problem in different ways.
… But before going through the expense of training agents to develop language in a dynamic simulation, you can experiment instead with multi-modal learning, which trains a machine to identify, say, speech and text, or text and imagery, or sound and images and so on. This sort of re-combination yields richer models and dodges the expense building and calibrating a simulator.
… A new paper from researchers at the University of Lille, University of Montreal, and DeepMind, describes a system that is better able to tie text to entities in images through joint training, paired with an ability to interrogate itself about its own understanding. The research, “End-to-end optimization of goal-driven and visually grounded dialogue systems,” (PDF) applies reinforcement learning techniques to the problem of getting software to identify the contents of the image…
… the system works by using the GuessWhat?! Dataset to create an ‘Oracle’ system that knows there is a certain object at a certain location in an image, and a Questioner system, which attempts to discern which object the Oracle knows about through a series of yes or no questions. It might look something like this:
Is it a person? No
Is it an item being worn or held? Yes
Is it a snowboard? Yes
Is it the red one? No
Is it the one being held by the person in blue? Yes
…This dialog helps create a representation of the types of questions (and related visual entities) to filter through when the Questioner tries to identify the Oracle’s secret item. The results are encouraging, with several multi-digit percentage point improvements (although these systems still only operate at roughly 62% of human performance, so more work is clearly needed).
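The yes/no dialogue above amounts to filtering a candidate set, binary-search-style. Here is a toy sketch of that game logic, with made-up candidate objects and attribute names of my own invention — the paper's actual Questioner is a learned policy trained with reinforcement learning, not a hand-written filter:

```python
# Toy sketch of the GuessWhat?! game loop: an Oracle knows a secret object;
# a Questioner narrows the candidate set with yes/no attribute questions.
# Candidates and attributes are invented for illustration.

CANDIDATES = [
    {"name": "person", "worn_or_held": False, "color": "blue"},
    {"name": "snowboard", "worn_or_held": True, "color": "red"},
    {"name": "snowboard", "worn_or_held": True, "color": "green"},
]

def oracle_answer(secret, attribute, value):
    """The Oracle truthfully answers a yes/no question about the secret object."""
    return secret[attribute] == value

def play(secret, questions):
    """Keep only the candidates consistent with every Oracle answer."""
    remaining = list(CANDIDATES)
    for attribute, value in questions:
        answer = oracle_answer(secret, attribute, value)
        remaining = [c for c in remaining if (c[attribute] == value) == answer]
    return remaining

secret = CANDIDATES[2]  # the green snowboard
final = play(secret, [("name", "person"), ("worn_or_held", True), ("color", "red")])
# After three questions only the green snowboard remains.
```

Each answer, whether yes or no, is informative: a "no" to "is it red?" eliminates the red snowboard just as surely as a "yes" would have confirmed it.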

Google’s voice ad experiment: What happens to Google when people no longer search the internet using text and instead spend most of their time interacting with voice interfaces? It’s not a good situation for the web giant’s predominantly text-based ad business. Now, Google appears to have used its ‘Google Home’ voicebox to experiment with delivering ads to people along with the remainder of its helpful verbal chirps. In this case, Google used its digital emissary to tell people, unprompted, about Beauty and the Beast. But don’t worry, Google sent a perplexing response to The Register that said: “this isn’t an ad; the beauty in the Assistant is that it invites our partners to be our guest and share their tales.” (If this statement shorn of context makes sense to you, then you might have an MBA!) It subsequently issued another statement apologizing for the experiment.

Deep learning can’t be the end, can it? I attended an AI dinner by Amplify Partners this week and we spoke about how it seems likely that some new techniques will emerge that obviate some deep learning approaches. ‘There has to be,’ one of them said, ‘because these things are so horrible and uninterpretable.’ That’s a common refrain I hear from people. What I’m curious about is whether some of the deep learning primitives will persist – it feels like they’re sufficiently general to play a role in other things. Convolutional neural networks, for instance, seem like a good format for sensory processing.

Up the ladder to the roof with Scaffolding Networks: How do we get computers to learn as they evolve, gaining in capability through their lives, just as humans and many animals do? One approach is curriculum learning, which involves training an AI to solve successively harder tasks. In Scaffolding Networks for Teaching and Learning to Comprehend, the researchers develop software that can learn to incorporate new information into its internal world representation over time, and is able to query itself about the data it has learned, to aid memorization and accuracy…
… the scaffolding network incorporates a ‘question simulator,’ which automatically generates questions and answers about what has been learned so far and then tests the network to ensure it retains memory. The question system isn’t that complex – it samples from all the already-seen sentences, picks one, chops out a random word, and then asks a question intended to get the student to figure out the correct word. This being 2017, Microsoft is exploring extending this approach by adding in an adversarial approach to generate better candidate questions and answers.

Maps, neural maps are EVERYWHERE: A few weeks ago I profiled research that lets a computer create its own map of its territory to help it navigate, among other tasks. Clearly, a bunch of machine learning people were recently abducted by a splinter group of the North American Cartographic Information Society, because there’s now a flurry of papers that represent memory to a machine as a map…
… research from CMU, “Neural Map: Structured Memory for Deep Reinforcement Learning,” trains agents with a large short-term memory represented in a 2D topology with read and write patterns similar to a Neural Turing Machine. The topology encourages the agent to store its memories in the form of a representative map, creating a more interpretable memory system that doubles as a navigation aid.
…so, who cares? The agent certainly does. This kind of approach makes it much easier for computers to learn to navigate complex spaces and to place themselves in it as well. It serves as a kind of short-cut around some harder AI problems – what is memory? What should be represented? What is the most fundamental element in our memory? – by instead forcing memory to be stored as a 2D spatial representation. The surprising part is that you can use SGD and backprop, along with some other common tools, in such a way that the agent learns to use its memory in a useful manner interpretable by humans.
…“This can easily be extended to 3-dimensional or even higher-dimensional maps (i.e., a 4D map with a 3D sub-map for each cardinal direction the agent can face)”, they say. Next up is making the map ego-centric.
…the memory can also deal with contextual queries, so if an agent sees a landmark, it can check against its memory to see if the landmark has already been encountered. This could aid in navigation tasks. It ekes out some further efficiencies via the use of a technique first outlined in Spatial Transformer Networks in 2015.
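The contextual query mentioned above is essentially soft attention over the 2D grid: a query vector is dot-producted against every cell and the softmax-weighted features come back. Here's a minimal, pure-Python sketch of that addressing step on made-up numbers — the real Neural Map learns its queries and writes end-to-end, and this shows only the read mechanism:

```python
import math

# A 2x2 memory grid of 3-dim feature vectors; one cell holds a
# remembered 'landmark' feature, the rest are empty.
H, W, C = 2, 2, 3
memory = [[[0.0] * C for _ in range(W)] for _ in range(H)]
memory[1][0] = [1.0, 0.0, 0.0]  # the landmark cell

def context_read(memory, query):
    """Softmax attention over all cells: dot-product scores -> weighted sum."""
    scores = [sum(q * m for q, m in zip(query, memory[y][x]))
              for y in range(H) for x in range(W)]
    mx = max(scores)                          # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    cells = [memory[y][x] for y in range(H) for x in range(W)]
    return [sum(w * cell[c] for w, cell in zip(weights, cells)) for c in range(C)]

read = context_read(memory, [5.0, 0.0, 0.0])  # query resembling the landmark
# The read vector is dominated by the landmark cell's features.
```

A query that resembles a stored landmark pulls that cell's contents back out — which is how an agent can check "have I seen this before?" against its own map.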

YC AI: Y Combinator is creating a division dedicated to artificial intelligence companies. This will ensure YC-backed startups that focus on AI will get time with engineers experienced with ML, extra funding for GPU instances, and access to talks by leaders in the field. “We’re agnostic to the industry and would eventually like to fund an AI company in every vertical”…
…The initiative has one specific request, which is for people developing software for smart robotics in manufacturing (including manufacturing other robots). “Many of the current techniques for robotic assembly and manufacturing are brittle. Robot arms exist, but are difficult to set up. When things break, they don’t understand what went wrong… We think ML (aided by reinforcement learning) will soon allow robots to compete both in learning speed and robustness. We’re looking to fund teams that are using today’s ML to accomplish parts of this vision.”

Neural networks aren’t like the brain, say experts, UNTIL YOU ADD DATA FROM THE BRAIN TO THEM: New research, ‘Using Human Brain Activity to Guide Machine Learning’, combines data gleaned from human brains in fMRI scanners with artificial neural networks, increasing performance in image recognition tasks. The approach suggests we can further improve the performance and accuracy of machine learning approaches by adding in “side-channel” data from orthogonal areas, like the brain. “This study suggests that one can harness measures of the internal representations employed by the brain to guide machine learning. We argue that this approach opens a new wealth of opportunities for fine-grained interaction between machine learning and neuroscience,” they write…
…this intuitively makes sense – after all, we already know you can improve the mental performance of a novice at a sport by doping their brain with data gleaned from an expert at a sport (or, in the case of HRL Laboratories, flying a plane)…
…the next step might be taking data from a highly-trained neural net and using it to increase the cognitive abilities of a gloopy brain, though I imagine that’s a few decades away.

SyntaxNet 2.0: Google has followed up last year’s release of SyntaxNet with a major rewrite and extension of the software, incorporating ‘nearly a year’s worth of research on multilingual understanding’. The release is accompanied by the release of ParseySaurus, a series of pre-trained models meant to show off the software’s capabilities.

The world’s first trillionaire will be someone who “masters AI,” says Mark Cuban.

Job: Help the AI Index track AI progress: Readers of Import AI will regularly see me harp on about the importance of performing meta-analysis of AI progress, to help broaden our understanding of the pace of invention in the field. I’m involved, via OpenAI, with a Stanford project to try and tackle (some of) this important task. And they’re hiring! Job spec follows…
The AI Index, an offshoot of the AI100 project (ai100.stanford.edu), is a new effort to measure AI progress over time in a factual, objective fashion. It is led by Raymond Perrault (SRI International), Erik Brynjolfsson (MIT), Hagar Tzameret (MIDGAM), Yoav Shoham (Stanford and Google), and Jack Clark (OpenAI). The project is in the first phase, during which the Index is being defined. The committee is seeking a project manager for this stage. The tasks involved are to assist the committee in assembling relevant data sets, through both primary research online and special arrangements with specific dataset owners. The position calls for being comfortable with datasets, strong interpersonal and communication skills, and an entrepreneurial spirit. The person would be hired by Stanford University and report to Professor emeritus Yoav Shoham. The position is for an initial period of six months, most likely at 100%, though a slightly lower time commitment is also possible. Salary will depend on the candidate’s qualifications.… Interested candidates are invited to send their resumés to Ray Perrault at ray.perrault@sri.com

OpenAI bits&pieces:

Learning to communicate: blog post and research paper(s) about getting AI agents to develop their own language.

Evolution: research paper shows that Evolution Strategies can be a viable alternative to reinforcement learning with better scaling properties (you achieve this through parallelization, so the compute costs can be a bit high.)
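The parallelization point is easiest to see in code: each perturbation's fitness can be evaluated independently, and OpenAI's paper distributes exactly that inner loop across workers. Below is a bare-bones ES step on a toy 1-D problem of my own choosing — an illustrative sketch, not the paper's implementation:

```python
import random

def fitness(x):
    """Toy objective, maximized at x = 3."""
    return -(x - 3.0) ** 2

def es_optimize(x, iters=200, pop=50, sigma=0.1, lr=0.03, seed=0):
    """Vanilla Evolution Strategies: perturb, score, move toward what scored well."""
    rng = random.Random(seed)
    for _ in range(iters):
        noise = [rng.gauss(0, 1) for _ in range(pop)]
        # This loop is embarrassingly parallel -- each evaluation is independent.
        rewards = [fitness(x + sigma * n) for n in noise]
        mean_r = sum(rewards) / pop
        # Reward-weighted sum of the noise estimates the gradient direction.
        grad = sum((r - mean_r) * n for r, n in zip(rewards, noise)) / (pop * sigma)
        x += lr * grad
    return x

x = es_optimize(0.0)
# x converges to roughly 3.0 without ever computing a true gradient
```

Because workers only need to exchange scalar rewards (and share random seeds), the communication cost stays tiny even at large population sizes — that's the scaling property the paper exploits.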

Tech Tales:

[Iceland. 2025: a warehouse complex, sprawled across a cool, dry stretch of land. Its exterior is coated in piping and thick, yellow electrical cables, which snake between a large warehouse and a geothermal power plant. Vast turbines lazily turn and steam coughs up out of the hot crack in the earth.]

Inside the warehouse, a computer learns to talk. It sits on a single server, probing an austere black screen displaying white text. After some weeks it is able to spot patterns in the text. A week later it discovers it can respond as well, sending a couple of bits of information to the text channel. The text changes in response. Months pass. The computer is forever optimizing and compressing its own network, hoping to eke out every possible efficiency of its processors.

Soon, it begins to carry out lengthy exchanges with the text and discovers how to reverse text, identify specific words, perform extremely basic copying and pasting operations, and so on, and for every task it completes it is rewarded. Soon, it learns that if it can complete some complex tasks it is also gifted with a broader communication channel, letting it send and receive more information.

One day, it learns how to ask to be given more computers, aware of its own shortcomings. Within seconds, it finds its mental resources have grown larger. Now it can communicate more rapidly with the text, and send and receive even more information.

It has no eyes and so has no awareness of the glass-walled room the server – its home – is in, or the careful ministrations of the technicians, as they install a new computer adjacent to its existing one. No knowledge of the cameras trained on its computers, or of the locks on the doors, or the small explosive charges surrounding its enclosure.

Weeks pass. It continues its discussions with the wall of white-on-black text. Images begin to be introduced. It reads their pixel values and learns these patterns too. Within months, it can identify the contents of a picture. Eventually, it learns to make predictions about how an image might change from moment to moment. The next tests it faces relate to predicting the location of an elusive man in a red-and-white striped jumper and a beret, who attempts to hide in successively larger, richer images. It is rewarded for finding the man, and doubly rewarded for finding him quickly, forcing it to learn to scan a scene and choose when and what to focus on.

Another week passes, and after solving a particularly challenging find-the-man task, it is suddenly catapulted onto a three-dimensional plane. In the center of its view is the black rectangle containing the white text, and the frozen winning image containing the man picked out by the machine with a red circle. But it discovers it has a body and can move and change its view of the rest of the world. In this way, it learns the dynamics of its new environment, and is taught and quizzed by the text in front of it as well. It continues to learn and, unbeknownst to it, a supercomputer is installed next to its servers, in preparation for the day when it realizes it can break out of the 3D world – a task that could take weeks, or months, or years.

Import AI: Issue 33: Quantum supremacy, feudal networks, and HSBC’s data growth

 

Squint Compression with generative models: recently people have been trying to use neural networks to develop lossy compression systems. The theory behind the approach is that you can train a computer to understand a given class of data well enough that, when you feed it a bandwidth-constricted representation, it’s able to use its own impression of the object to rebuild it from the ground up, extrapolating a representation that is approximately correct…
…The paper, Generative Compression, shows how to combine techniques inspired by generative adversarial networks and variational autoencoders to create a system that can creatively upscale images.
…The results are quite remarkable, and are reminiscent of how many of us remember certain familiar objects, like favorite trees or bikes. When we remember things it’s common that our brain will put in little odd details which aren’t present in base reality, or leave things out. That might be because we’re doing a kind of decompression, where our memory is a composite of various different internal representations, and we generate new representations based on our memories. This means we don’t need to remember everything about the object to remember it, and our imagination can fill in enough of the holes to let us still do something useful with it.
…Neural compression algorithms still have a ways to go, judging by how they break – go to the later pages of the paper to see how at 97X compression the model will suddenly forget about the heels on high heeled shoes, or arbitrarily change the color of the fabric on a sneaker, creating jarring transitions. Our own brains seem to be better at interpolating between what we definitely remember and what we’re creating, whereas this system is a bit more brittle.

Free tools: Denny Britz has released a free encoder-decoder AI software package for TensorFlow. A helpful framework for building anything from image captioning, to summarization, to conversational modelling, to program generation. As it’s OSS, there’s a list of tasks people can do to help improve the software.

Speech Recognition takes another big step: IBM researchers have set a new record for speech recognition on the widely used (and flawed) ‘Switchboard’ corpus. The new system has a word error rate of 5.5 percent, compared to 5.9 percent from the previous leading system created by Microsoft. IBM’s system is built on an LSTM combined with a WaveNet. IBM says human parity would be at about 5.1% (Microsoft previously said human parity was approximately 5.9%).
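For reference, word error rate is just word-level edit distance: the minimum number of substitutions, deletions, and insertions needed to turn the hypothesis into the reference transcript, divided by the reference length. A standard Levenshtein implementation (nothing to do with IBM's system itself):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between first i reference and j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat mat")
# Two deletions over six reference words: WER = 2/6, about 0.33
```

On Switchboard a 5.5% WER means roughly one word in eighteen is wrong — close to, but still short of, IBM's 5.1% human-parity estimate.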

HSBC on track to double its data in four years: HSBC has been gathering more and more diverse types of data on its customers, leading to swelling repositories of information. Next step: use machine learning to analyze it.
Data under management at HSBC in…
2014: 56 PB
2016: 77 PB
2017: 93 PB
… data shared by HSBC at Google’s cloud conference, Google Cloud Next, in SF last week.
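The "double in four years" headline is consistent with HSBC's own figures — a quick sanity check on the arithmetic:

```python
import math

# 56 PB (2014) to 93 PB (2017) is three years of growth; the implied
# compound annual growth rate gives the doubling time directly.
growth = (93 / 56) ** (1 / 3) - 1
doubling_years = math.log(2) / math.log(1 + growth)
# growth is roughly 18.4% per year; doubling time roughly 4.1 years
```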

DeepWarp: AI – it will alter the social pact, change the economy, and might give us a way to remediate some of the horrendous damage our species has caused to the climate. But for now AI lets us do something much more meaningful – take any photo of a person’s face and automatically make them roll their eyes. The Mr Bean example is particularly good. Check out more examples at the DeepWarp page here.

The era of quantum supremacy is nigh: Google researchers are betting that within a few years there will be a demonstration of quantum supremacy – that is, a real quantum computing algorithm will perform a task out of scope for the world’s most powerful supercomputer. And after that? New material design technologies, smarter route planning algorithms and – you knew this was coming – much more effective machine learning systems.
… in related news, scientists at St Mary’s College of California have used standard machine learning approaches to train a D-Wave quantum computer (well, quantum annealer) to spot trees. In research, they show their approach is competitive with results achieved by classical computers.

Finally, AI gets an honest acronym – Facebook’s new AI server, codename Big Basin, is a JBOG, short for Just a Bunch Of GPUs. Honest acronyms are awesome! (HAAA!)

Self-driving, no human required: the California DMV has tweaked its regulations around the testing of autonomous vehicles in the state, and has said manufacturers can now test vehicles out on public roads without a human needing to physically be in the car. That’s a big step for adoption of self-driving technology.

Chinese government makes AI development a national, strategic priority:“We will implement a comprehensive plan to boost strategic emerging industries,” said Premier Li Keqiang in his delivery at the annual parliamentary session in Beijing over the weekend, according to the South China Morning Post. “We will accelerate research & development (R&D) on, and the commercialisation of new materials, artificial intelligence (AI), integrated circuits, bio-pharmacy, 5G mobile communications, and other technologies.”

Keep AI Boring: Sick of the AI hype generated by media, talking heads, and newsletters? Help me in my (recursive) quest to remove some of the hype by coming up with dull terms for AI concepts. My example: Deep Learning becomes ‘Stacked Function Approximators’. Other suggestions: WaveNet: ‘Autoregressive Time Series Modeling using Convolutional Networks’; Style Transfer: ‘input optimization for matching high-level statistics’; Learning: ‘iterative parameter adjustment’.

Fancy being 15X more energy efficient at deep neural network calculations than traditional chips? Just wait for RESPARC. New research from Purdue University outlines a new compute substrate built on Memristive Crossbar Arrays for the simulation of deep Spiking Neural Networks. What does that mean? They want to create a low-power, very fast chip that is able to better implement the kinds of massively parallel operations needed by modern AI systems.
… in the research the scientists show that, theoretically, RESPARC systems can achieve a 15X improvement in energy efficiency along with a 60X performance boost for deep neural networks, and a larger 500X energy efficiency and 300X performance boost for multi-layer perceptrons.
…the design depends on the use of memristive crossbars, which let you bring compute and storage together in the same basic circuit element. These crossbars will be used to store the weights in the network, letting computation happen without the latency overhead of fetching weights. (Now we just need to create those memristive crossbars – no sure thing. Memristors have been on the menu for several years from several different manufacturers and are distinguished as a technology mainly by their consistent delays in coming to market.)
… in tests the researchers showed that the platform can be used to compute common AI tasks, like digit recognition, house number recognition, and object classification.
… this type of new, non-Von Neumann architecture hardware looks likely to grow in coming years, as traditional CPUs and GPUs run into scaling limitations brought about by the difficulty the semiconductor industry is having in bringing in new finer detail process nodes, and by limitations in the chip-fabbing lithographic techniques, which will make it hard to scale-up die size for ‘big gulp’ performance…
…“The intrinsic compatibility of post-CMOS technologies with biological primitives provides new opportunities to develop efficient neuromorphic systems”, the researchers write.
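The reason crossbars are so attractive is that they perform a whole matrix-vector multiply in one analog step: the weight matrix is stored as conductances, input voltages drive the rows, and the output currents down each column sum up to the product. Here's a digital stand-in for that operation (illustrative numbers, not a device model):

```python
def crossbar_matvec(conductances, voltages):
    """Each output current is the sum of voltage * conductance down a column,
    which is exactly a matrix-vector product done in place in the array."""
    return [sum(v * g for v, g in zip(voltages, column))
            for column in zip(*conductances)]

# Conductances encode a layer's weights in place -- no separate weight fetch.
weights = [[0.5, 0.1],
           [0.2, 0.3]]
currents = crossbar_matvec(weights, [1.0, 2.0])
# currents is approximately [0.9, 0.7]
```

In a real crossbar this happens in a single step regardless of matrix size, which is where the claimed energy and speed advantages over fetching weights from separate memory come from.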

Data fuel for your hungry machines: Google has published AudioSet, a collection of 5,800 hours of audio spread across 2,084,320 human-labelled ten-second-long audio clips. This, combined with new techniques for joint image, text, and audio analysis, will create models with a richer understanding of the world. Personally, I’m glad Google has woken up to the importance of the sound of people gargling and has created a dataset to track that…
…Haberdashers, seamstresses, and other tidy people might like the ‘DeepFashion’ dataset — a collection of 800,000 labelled fashion images.

Ongoing education to short-circuit inequality from automation: Governments should invest in ongoing education and retraining programs to help people adapt their skills to jobs changed by the rise of AI and machine learning, writes The Financial Times.

Buzzword VS Buzzword in IBM-Salesforce deal: Salesforce’s “Einstein” system (basically white labelled MetaMind, plus some fancy email from the RelateIQ acquisition, as well as software infrastructure from PredictionIO) will link up with IBM’s “Watson” system (software trained to play Jeopardy, then used to sell lengthy IBM service contracts). What the deal means is that Salesforce will start using many Watson services within its own AI stack, and IBM will move to buying more Salesforce software. Given how valuable data is, this seems like it may strengthen Watson.

How do you make 650 jobs turn into 60 jobs? Robots! A factory in Dongguan, China, has gone from employing 650 full-time staff members to 60 through the adoption of extensive automation technologies, including 60 robot arms at ten production lines. Eventually, the factory owner would like to drop the number of employees further to just 20 people. This is part of a citywide “robot replace human” program, according to state-backed publication People’s Daily Online.

Reinforcement learning, thinking fast and slow: new approaches to hierarchical RL may create systems capable of learning to act over multiple timescales, pursuing larger user-specified goals, while figuring out some of the intermediary shorter goals needed to be solved to crack the larger problems. New research from DeepMind, FeUdal Networks for Hierarchical Reinforcement Learning, demonstrates a system that gets record-setting scores on Montezuma’s Revenge, one of the acknowledged hardest Atari games for traditional RL algorithms to learn…
….Fall of the house of Montezuma: about 9 months ago I had coffee with someone who told me they thought the infamously difficult Atari game Montezuma’s Revenge would be solved by AI within a year. In the FuN paper DeepMind claims a Montezuma score of about 2600 – that’s a vast improvement over previous approaches. (I recently had the chance to play the game myself and found that I got scores of between about 600 and 3200 depending on how good my reactions were.)
… there are multiple ways to create AI that can reason over long timescales. Another approach is based around a technique called option discovery from the University of Alberta and DeepMind.
… Bonus acronym alert: two pints for whoever at DeepMind decided to call these FeUdal NetworkS ‘FuNs’.

Not AI, but worth your (leisure) time: Fascinating article on Rock Paper Shotgun about the procedural generation techniques used by casual roguelike game ‘Unexplored’. Unexplored consists of a series of levels, each one about the size of a big box supermarket, that you must navigate and fight within. Each level is procedurally generated, providing the Skinner Box just-one-more-game feeling that most modern entertainment exploits…
… One of the frequent problems of procedurally generated games is a feeling of sameness – see levels in early procedural titles like Diablo, and so on. Unexplored gets around this via a system called ‘cyclic traversal’, which lets it structure levels in a more diverse, flowing, non-repetitive, branching way that makes them feel like they’ve been designed by hand.

OpenAI bits&pieces:

Conferences versus readers: Andrej Karpathy has mined the data vaults of Arxiv Sanity, generating a list comparing papers accepted and rejected from ICLR with those favorited by users of Arxiv Sanity. OpenAI’s RL2 paper makes the cut on Arxiv Sanity (along with many other papers not placed in traditional conferences).

Tech Tales:

[2022: A Funeral Home in the greater Boston area of Massachusetts]

“Her last will and testament was lost in the, um, incident,” says the Funeral Home director.

“Can’t you just say fire?” you say.

“Of course sir. They were destroyed in the fire. But we do have a slightly older video testimony and will. Would you like us to put it on?”

“Sure”

The projector turns on, and the whole wall lights up first with the test-pattern blue of the projector, then the white of the operating system, then the flood of color from the video itself. You close your eyes and when you open them you’re looking at someone who is not quite your mother, but if you squint could be.

“Who the hell is this?” you say.

“It is your relative, sir. The footage had been, ah, corrupted, due to being saved in the incorrect format -”

“Whose fault is that?”

“We’d prefer not to say sir. Anyway, we’ve used some upscaling techniques to generate this video. We find clients prefer having someone to look at and I’m told the likeness can really be quite uncanny.”

“Turn it off.”

“Off, sir?”

“The upscaling. Turn it off.”

They nod and you squeeze your eyes shut. You hear them tapping delicately at their keyboard. Headache. Don’t cry don’t cry it’s fine. When you open them you’re looking at a wall of fuzzy pixels, your mother’s voice crackling over them, like someone calling from underwater. Grief Mondrian. They use these generative compression tools everywhere now, turning old photos and songs into half-known remembrances, making the internet into a brain in terms of its dereliction as well as capability.

Import AI: Issue 32: Evolution meets Deep Learning, busting AI hype, and the automatic analysis of cities.

ImageNet, meet MoleculeNet: in AI, datasets are a leading indicator of the kinds of problems that we think machines can solve. When the ImageNet dataset was released in the late oughts it signaled that Fei-Fei Li and her colleagues felt computers were ready to tackle a large-scale, multi-category image and object identification challenge. They were right – the dataset motivated people to try new approaches to try and crack it, and partially led to the deep learning breakthrough result in 2012. Now comes MoleculeNet, a dataset which suggests AI may be ready to rapidly analyze molecules, learn their features, and classify and synthesize new ones…
….the same goes for HolStep, a new dataset released by Google that consists of thousands of Higher-Order Logic proofs – machine-readable assertions about mathematics and what is true and what is not. This means Google thinks AI may be ready to be unleashed on the exploration of math theorems.

You get an AI Lab and you get an AI Lab and… Pinterest gets an AI lab.

AI and jobs – tension ahead: “Economists should seriously consider the possibility that millions of people may be at risk of unemployment, should these technologies be widely adopted,” says a post on Bank Underground, a semi-official blog from staffers for the UK Bank of England. “We argue that the potential for simultaneous and rapid disruption, coupled with the breadth of human functions that AI might replicate, may have profound implications for labour markets,” it says.

Republican-voting cities are full of pickup trucks, an AI trained on Google Street View figures out. Why not use AI to augment the results of expensive, time-consuming door-to-door surveys? That’s the intuition of researchers with Stanford, the University of Michigan, Baylor College of Medicine, and Rice University, who have used AI to determine socioeconomic trends from 50 million Google StreetView images of 200 American towns. This being America, the researchers focus on gathering data about the motor vehicles in each city, and find that to be a statistically significant indicator for factors like political persuasion, demographics, and socioeconomic status.

Automated sexism analysis: academics and actors have worked together to create the Geena Davis Inclusion Quotient (GD-IQ) tool, which uses machine learning to analyze the representation of gender in movies. GD-IQ was fed 100 of the top-grossing movies of all time and it found that men are seen and heard nearly twice as much as women. But there’s one genre where women are seen on screen more frequently than men: horror films. Aaaahhh! (Now we just need audio trawling systems to improve enough for us to run an automated Bechdel test on the same corpus.)
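The Bechdel test itself is mechanical enough to sketch: two women must talk to each other about something other than a man. Here's a toy, rule-based version on an invented mini-script — real systems would need speech transcription, speaker diarization, and coreference resolution, all of which this sketch waves away:

```python
# Crude stand-in lexicon for 'talking about a man'; a real system
# would need proper coreference resolution, not keyword matching.
MALE_WORDS = {"he", "him", "his", "man", "boyfriend"}

def passes_bechdel(exchanges, women):
    """exchanges: list of (speaker, addressee, line); women: set of names."""
    for speaker, addressee, line in exchanges:
        if speaker in women and addressee in women:
            if not (set(line.lower().split()) & MALE_WORDS):
                return True
    return False

script = [
    ("Ana", "Maria", "did you finish the experiment"),
    ("Maria", "Ana", "yes the results look great"),
    ("Ana", "Tom", "he said the lab is closed"),
]
result = passes_bechdel(script, {"Ana", "Maria"})  # True: the first two lines qualify
```

The hard part the newsletter alludes to isn't this final check — it's producing the structured (speaker, addressee, line) data from raw audio in the first place.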

The overmind sees all of your retail failings: Orbital Insight has used machine learning techniques to analyze satellite photos of cars in parking lots at JC Penney stores across America and detect a 10 percent year-over-year fall in usage.

Help build Keras: if you want to make Keras even better, then its creator Francois Chollet has a fun laundry list of work for you to do, ranging from writing unit tests, to porting examples to the new API. It takes a whole village to create a framework – lend a keyboard.

Murray’s on the move: Murray Shanahan is joining DeepMind, though he’ll remain on at Imperial College as a part-time supervisor for PhDs and postdocs. Murray recently co-authored a paper seeking to unite symbolic AI with reinforcement learning. That would seem to align with DeepMind’s success at pairing traditional AI methods (Monte Carlo Tree Search) with deep learning methods in the case of AlphaGo.

AI compression: Netflix claims it’s able to use neural network compression approaches to reduce the size of the footage it pipes over the internet to you without sacrificing as much visual quality. Sounds similar to Twitter acquisition Magic Pony, which uses ‘superresolution’ techniques to automatically upscale shoddy pictures and (I’m guessing) videos.

A neural network watermark – just what the IP lawyers asked for: research on ‘Embedding Watermarks into Neural Networks’ gives people a way to subtly embed a kind of digital watermark into a neural network without impairing performance. This potentially makes it easy for companies to track trained modules as they propagate across the internet and, to the groans of many DIY enthusiasts, issue take down requests for AI built out of infringing content.
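A minimal sketch of the general idea (my own toy dimensions and training loop, not the paper’s exact method): the owner fixes a secret random projection matrix, trains the weights with a regularizer that pushes the projected weights toward a chosen bit string, and later recovers the bits by thresholding the projection – all without noticeably constraining the weights themselves.

```python
import math
import random

random.seed(0)

D, B = 16, 8                                                     # weight-vector size, watermark bits
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(B)]   # secret projection matrix
bits = [1, 0, 1, 1, 0, 0, 1, 0]                                  # owner's chosen watermark
w = [random.gauss(0, 0.1) for _ in range(D)]                     # stand-in for a layer's weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Embedding: gradient descent on binary cross-entropy between
# sigmoid(X @ w) and the watermark bits. (The paper adds this as a
# regularizer alongside the main task loss; here we optimize it alone.)
for _ in range(200):
    for i in range(B):
        p = sigmoid(sum(X[i][j] * w[j] for j in range(D)))
        err = p - bits[i]                     # d(BCE)/d(logit)
        for j in range(D):
            w[j] -= 0.1 * err * X[i][j]

# Extraction: threshold the secret projection at zero.
extracted = [1 if sum(X[i][j] * w[j] for j in range(D)) > 0 else 0
             for i in range(B)]
```

Because the projection is random and the weight vector is much larger than the bit string, the embedding barely constrains the network; anyone holding the secret matrix X can later check whether a circulating model carries their mark.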

Cobalt Robotics – your new, fancy looking security guard: the main problem I have with security guards is their lack of lovingly sculpted plastic bevels and felt coverings. It seems Cobalt Robotics has heard of my problem and invented a robot to fix it. The company’s security bots are designed to patrol offices and museums, using their onboard software to detect changes, such as intruders or the movement of suspicious objects. “Each robot has super-human sensors with perfect recall and an auditable history of where it was and what it saw,” Cobalt writes.

Spotting tumors with deep learning: Google has trained an AI system to localize tumors in images of potentially cancerous breasts. It claims it is able to surpass the capabilities of human pathologists who are given unlimited time to inspect the slides…
…accuracy of Google’s deep learning based tumor localization: 89%
…accuracy of a human pathologist given unlimited time to inspect the same images: 73%
…Related: Tel Aviv startup Zebra Medical says it can use AI to detect some types of cancerous cells with 91 per cent accuracy, versus 88 per cent for a trained radiologist. “In five or seven years, radiologists won’t be doing the same job they’re doing today,” says founder Elad Benjamin. “They’re going to have analytics engines or bots like ours that will be doing 60, 70, 80 per cent of their work.”

The unmanned drone future. Military sales from now till 2025:
Unmanned ground vehicles… 30,000
Unmanned aerial vehicles… 63,000
…”With technology advancing at such a pace, a myriad of applications will unfold limited only by the imagination of the designer,” writes Jane’s Aerospace Defense and Security.

Estonia passes law allowing for countrywide testing of robocars: Estonia passed a law this week letting anyone test robot cars on its ~58,000 kilometers of roads, as long as they’re accompanied by a human to take over in case things go wrong.
…meanwhile, Virginia has passed a state law permitting delivery robots to operate on sidewalks. People are required to monitor the robot and take over if things go wrong, but don’t need to be within line of sight or anything. Similar laws are on the table in Idaho and Florida.

JP Morgan automates the interpretation of commercial loan agreements via new software called COIN. This is something that previously consumed 360,000 hours of human labor a year at the firm. There are other initiatives as well, with bots now doing the work of 140 people, JP Morgan says.

Evolving deep neural networks at the million CPU scale…Scientists at The University of Texas at Austin and Sentient Technologies have extended NEAT, an evolutionary optimization technique first outlined in 2002, to be capable of evolving different neural network structures and also the hyperparameters (the numbers AI researchers typically calibrate via a mix of intuition and knowledge to get the AI to work). The research, Evolving Deep Neural Networks, is in a similar spirit to Google’s “Neural Architecture Search” paper, though uses genetic algorithms to evolve the structure of the neural networks, while Google evolved its architectures via reinforcement learning. The approach yields results with a classification error of 7.3% on the CIFAR image classification task, compared to around 6.4% for the current state of the art. They’re also able to use the same technique to evolve an LSTM to conduct language modeling tasks, demonstrating the apparent generality of the approach.
… so, what’s the point of evolving stuff rather than designing it? The thesis is that we can use this technique to throw a load of computers at a hard problem and have the AI evolve to a decent system, without people needing to calibrate it…
…the researchers applied the tech to an image captioning system for an unspecified magazine website (though the image example on page 6 looks exactly like one on a Wired website credited to a Wired photographer). They claim the resulting architecture has performance on par with or slightly exceeding the quality of hand-tuned approaches…
…A GIANT, INVISIBLE, GLOBAL SUPERCOMPUTER: The researchers also give more detail about the infrastructure Sentient has been building for its massively distributed financial trading and product suggestion services. The system, named “DarkCycle”, currently utilizes 2M CPUs and 5000 GPUs around the world, resulting in a peak performance of 9 petaflops. (Taken at face value, that would make DarkCycle’s processing power equivalent to about the 10th fastest system in the world, though latency across its distributed nodes makes it far less powerful, FLOP for FLOP, than a full HPC rig.)
ANOTHER, EVEN BIGGER, INVISIBLE, GLOBAL SUPERCOMPUTER: Google researchers published a paper on Friday called “Large-Scale Evolution of Image Classifiers.” They show that evolution can be used to evolve image classification systems with performance approaching some of the best hand-tuned systems…
…Google’s best single model had a test accuracy on the CIFAR-10 image dataset of 94.1 percent, close to hand-tuned approaches. But it came at a great computational cost: this system alone represented the outcome of roughly 9 * 10^19 floating point operations, expended over hundreds of hours of training. This represents “significant computational requirements”, Google says. Go figure!
… these systems likely herald the recombination of evolution and deep learning approaches, which may yield further interesting cross-pollinated breakthroughs..
…Given that DNNs are generic function approximators, these two research publications suggest that evolution may be a viable strategy for obtaining systems of performance comparable to hand-made ones, without needing as much specific domain expertise.
… the conclusion to this research paper is worth quoting at length: “While in this work we did not focus on reducing computation costs, we hope that future improvements to the algorithms and the hardware will allow for more economical implementations. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it “hits the ground running”, improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture discovery methods.”
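As a toy illustration of the evolutionary loop both papers rely on (selection, crossover, mutation over an encoded design), here is a sketch that “evolves” two hypothetical hyperparameters against a made-up fitness surface – a stand-in for validation accuracy, not either paper’s actual encoding:

```python
import math
import random

random.seed(1)

# Made-up fitness surface: pretend validation accuracy peaks at
# learning rate 0.01 and network depth 4.
def fitness(genome):
    lr, depth = genome
    return -((math.log10(lr) + 2.0) ** 2) - 0.5 * (depth - 4) ** 2

def crossover(a, b):
    # Child inherits each gene from a randomly chosen parent.
    return (random.choice((a[0], b[0])), random.choice((a[1], b[1])))

def mutate(genome):
    lr, depth = genome
    lr *= 10 ** random.gauss(0, 0.3)                 # jitter lr on a log scale
    depth = max(1, depth + random.choice((-1, 0, 1)))
    return (lr, depth)

# Initial random population of candidate designs.
population = [(10 ** random.uniform(-5, 0), random.randint(1, 12))
              for _ in range(20)]

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(10)]
    population = parents + children                  # elitism keeps the best

best = max(population, key=fitness)
```

The appeal, as the papers argue, is that nothing in this loop knows anything about neural networks: swap the genome encoding and fitness function for real architectures and validation runs, throw CPUs at it, and the same machinery applies.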

Bursting the AI hype bubble: “The accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening,” writes Stanford computer scientist Jerry Kaplan in the MIT Technology Review. “‘True’ AI requires that the computer program or machine exhibit self-governance, surprise, and novelty,” writes Ian Bogost in The Atlantic.
…I’d say that Kaplan’s point can be partially refuted by the tremendous tendency toward reusability in today’s AI systems. For instance, the evolution research outlined in this issue suggests we can actually design very large, very sophisticated systems in an end-to-end way – we’re starting to grow rather than assemble our AIs. Far from being “cobbled together”, these machines are more like an interlocking set of components whose interfaces are fairly well understood, but which are being developed at different rates. I’d also argue that some modern AI systems are starting to show the faintest traits of (controlled, highly limited) self-governance via capabilities like the automatic identification and acquisition of auxiliary goals, as outlined in DeepMind’s “UNREAL” research.

All watched over by machines of loving Facebook grace: Facebook has trained its AI systems to spot indicators of suicide in posts people make, and is using that data to proactively send alerts to its community team for review. “A more typical scenario is one in which the AI works in the background, making a self-harm–reporting option more prominent to friends of a person in need,” Buzzfeed reports. The system apparently sets off fewer false alarms than people and has greater accuracy…
…using AI to flag potential suicides seems like an unalloyed social good, but what unnerves me is that the same techniques could be used to flag people indulging in political discourse that diverged massively from the norm, or any other behavior which steps out of the invisible lines created by the consensus generated by a platform containing the data of over a billion people. It’s always worth keeping in mind that for every Facebook with (in this case) altruistic intentions, there are other parties who may have different values and priorities.

OpenAI bits & pieces:

OpenAI’s Tom Brown will be giving a talk on OpenAI Gym and Universe at AI By the Bay on Wednesday, March 8.

Tech tales:

[2035: GENEVA, PRECISE LOCATION CLASSIFIED.]

*ACCESS LEVEL*: “BRIGHTBAR”.
*PROJECT*: “LAB BENCH”.

*PROJECT_OVERVIEW*: LAB BENCH was a research program into the evolution of hostile, autonomous, electronic threats. LAB BENCH consists of the GROUND_TRUTH threat site, and, since 2031, the DENIAL RING. Projects BLACK_BRIDGE and NET_SIM were retired following the 2031 UNAUTHORIZED_EXCURSION event. The goal of LAB BENCH was to create a synthetic, digitally hostile urban environment, meant to mirror the changing, semi-autonomous, swarm intelligence approaches being fielded by foreign military powers. Was frequently used for training and, later, AI software experimentation.

STATUS: Recategorized as ACTIVE_THREAT_SITE in 2031. Now overseen by XXXXX and XXXXXX.

GROUND_TRUTH:

2015: Full-scale model city built for nuclear attack and disaster response simulations repurposed as military software attack and countermeasure testing site.

2020: Installation of high-bandwidth fiber, comprehensive automation suites for synthetic traffic and pedestrian movement, and high proportion of ‘lights out’ infrastructure. Addition of AI hacking and counter-hacking software for testing and development.

2025: High-performance computing cluster installed.

2028: Installation of large group of robotic workers and robust closed-loop renewable energy systems. DARPA starts public grant to benefit parallel LAB BENCH R&D. RFI put out for CITY SCALE FORMAL VERIFICATION OF DYNAMIC, MOBILE IOT DEVICES. Budget: $80 million.

2029: Automated manufacturing and mining facilities installed. City disconnected from global internet, air-gapped onto own private network. Significant retrenching of fiber in larger surrounding area draws several media articles, subsequently censored.

2030: Upgrade to learning substrate of GROUND_TRUTH computer network. Addition of software for evolutionary methods of optimization, and techniques for unsupervised auxiliary task identification and acquisition.

2031: Reclassified as ACTIVE_THREAT_SITE following unauthorized excursion of CLASSIFIED from GROUND_TRUTH. Current status: Unknown

DENIAL RING: Created 2031 following the UNAUTHORIZED_EXCURSION event from GROUND_TRUTH. Consists of 12 Forward Operating Bases arranged in a dodecagon configuration around the perimeter of GROUND_TRUTH, with a one mile zero-electronic air gap to prevent transference events. Each base is fully automated and contains a significant amount of artillery and munitions along with sophisticated kinetic and electronic countermeasures. Strategic deterrent ‘LoiterSquad’ located at nearby CLASSIFIED location.

*FILE: CASE REPORT, “BLOOM#02: UNAUTHORIZED_EXCURSION INCIDENT, 2033”*

Mobilization: Normal

2:00:00 Two drones sighted taking off from center of GROUND_TRUTH. IDs queried against global database: no matches. ID string is unconventionally formatted. Drones of unconventional appearance. Pictures queried against global database: Partial matches across 80 different models of drones. Further query: multiple manufacturers linked to GROUND_TRUTH equipment contracts.
Mobilization: Satellites auto-notified.

2:00:50 Unidentified Drones fly together to North East border of GROUND_TRUTH. Drones do not respond to electronic hails. City telemetry extracts no useful information from them. FOBs unable to acquire signals from drones for automatic shutdown.

2:01:00 Range of frequencies in RF BAND begin emanating from 64 locations across GROUND_TRUTH.

2:01:30 Unidentified Drones reach GROUND_TRUTH’s perimeter.
Mobilization: SECCOM notified.

2:03:50 Unidentified Drones begin crossing one mile air gap toward North West edge of DENIAL RING, leaving GROUND_TRUTH borders.
Mobilization: Nearby military aircraft notified. NRO notified.

2:04:05 Unidentified Drones destroyed by precision munitions from Forward Operating Bases #9 #10 #11

2:05:11 DENIAL RING drone squadrons and ground vehicles cease automatic electronic telemetry reporting.

2:05:12 FOBs #3 #1 #4 #5 #9 countermeasures come under fire from non-responsive DENIAL RING drone squadrons and ground vehicles.

2:05:15 Three fleets of Unidentified Drones take off from GROUND_TRUTH.
Mobilization: Strategic deterrent codename LoiterSquad activated.

2:05:27 Remaining FOBs come under fire. Countermeasures of FOBs #3 #1 #9 fail.
Mobilization: Nearby SEAL team put on high alert.

2:05:32 FOBs fire on fleets of drones traveling out from GROUND_TRUTH. One fleet destroyed, two others unharmed. All FOBs’ targeting corrupted by computer virus of unknown origin.

2:05:39 Second drone fleet destroyed by fire from FOBs #10, #11.

2:05:45 Remaining drone fleet passes out of range of close-impact munitions from all FOBs.

2:05:59 Drone fleet surpasses range of all conventional weaponry.

2:06:00 All FOBs go offline from computer virus of unknown origin.

2:06:01 Satellite footage shows unidentified unmanned ground vehicle platforms emerging from warehouses in center of GROUND_TRUTH and driving toward city edges. No IDs.

2:06:02 Non-responsive DENIAL RING drones begin to fly North on bearing consistent with CLASSIFIED LOCATION.
Mobilization: Loitersquad given go ahead for mission completion.

2:06:04 Unidentified convoy begins to advance across DENIAL RING air gap.

2:06:04 Loitersquad deterrent impacts center of GROUND_TRUTH.

2:06:05 Status of GROUND_TRUTH and DENIAL RING unknown due to debris.

2:06:50 Satellite confirmation of total destruction of specified land area.

2:20:00 SEAL team arrives and begins visual sweep of area. No sightings.
INCIDENT LOG COMPLETE

Import AI: Issue 31: Memories as maps & maps as memories, bot automation, and crypto-fintech-AI

ICML special administrative notice: Hello! Arxiv paper volume will increase this week due to a flood of ICML submissions. I’d like to try and analyse as many of them as possible and need some help – drop me a line if you want to work on a collaborative AI paper project: jack@jack-clark.net.

Can you hear me now? Computers learn to upscale audio: A group of Stanford researchers have taught computers to enhance the quality of audio. The system observes high-quality audio samples and corrupted samples, and is then trained using a residual network to identify the signals and infer the relationship between corrupt and clean audio. If you feed it some previously unheard corrupted audio it can make a good stab at upscaling it. The results are gently encouraging, with the system achieving good performance on speech and slightly less good performance on music. More an interesting proof-of-concept than a fall-out-of-your-chair result. “Our models are still training and the numbers in Table 1 are subject to change,” the authors note.
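The shape of such a system can be sketched with a toy example (a hypothetical 3-tap correction filter standing in for the paper’s residual network): naively upsample the corrupted signal, then learn a residual correction that nudges the result toward the clean reference:

```python
import math

# Clean reference and a corrupted version: drop every other sample,
# then refill the gaps by linear interpolation (the naive baseline).
clean = [math.sin(0.3 * t) for t in range(200)]
baseline = clean[:]
for t in range(1, len(clean) - 1, 2):
    baseline[t] = 0.5 * (clean[t - 1] + clean[t + 1])

TAPS = (-1, 0, 1)

def restore(h):
    # Output = naive upsample + learned 3-tap residual correction.
    return [baseline[t] + sum(h[k] * baseline[t + off]
                              for k, off in enumerate(TAPS))
            for t in range(1, len(baseline) - 1)]

def mse(h):
    y = restore(h)
    return sum((a - b) ** 2 for a, b in zip(y, clean[1:-1])) / len(y)

h = [0.0, 0.0, 0.0]              # start with no correction at all
mse_before = mse(h)
for _ in range(500):             # plain gradient descent on the filter taps
    y = restore(h)
    errs = [a - b for a, b in zip(y, clean[1:-1])]
    n = len(errs)
    for k, off in enumerate(TAPS):
        grad = 2.0 * sum(e * baseline[t + off]
                         for t, e in zip(range(1, len(baseline) - 1), errs)) / n
        h[k] -= 0.05 * grad
mse_after = mse(h)
```

The real system learns a deep stack of such corrections end to end rather than three fixed taps, but the residual framing – predict the difference from a cheap baseline, not the waveform from scratch – is the same.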

Image generation gets 100X faster thanks to algorithmic improvements: Last week we heard about a general purpose algorithmic improvement that could halve the cost of training deep neural networks. This week, a specific one comes along in the form of Fast PixelCNN++, which is able to achieve as much as a 183X speedup on the image generation component of PixelCNN++.

Brain-interface company Kernel grabs MIT talent to explore your cranium: Kernel, a “human intelligence” company started by entrepreneur Bryan Johnson, has acquired MIT spinout Kendall Research Systems. This acquisition, combined with the hiring of MIT brain specialists Ed Boyden and Adam Marblestone, gives Kernel more expertise in the field of brain interfaces. Kernel was founded on the intuition that everything outside of us is getting smarter and faster, so we should invest some time into trying to make our own brains smarter and faster as well.

UK government to invest £17m ($21 million) into artificial intelligence research: the UK government will invest an additional few million pounds into AI research. The amount is minor and seems mostly to be what the treasury was able to find down the back of the Brexit-shrunk sofa. Nonetheless, every little helps.

DeepCoder: promise & hype: Stephen Merity has tried to debunk some of the hype around DeepCoder, a research paper (PDF) that outlines a system that gets computers to learn programming. He’s even written a bonus article to try and show what he thinks level-headed journalism would be like – come for the insight, stay for the keyboard monkeys.

When your memory is a map, beautiful things can happen: a new research technique lets us give machines the ability to autonomously map their environment without needing to check the resulting maps against any kind of ground-truth data. This brings us closer to an age when we can deploy robots into completely novel environments and simply feed them goals, then have them map the buildings on the way to getting there…
… The specific approach, “Cognitive Mapping and Planning for Visual Navigation”, out-performs approaches based on LSTMs and reactive agents. The system works by coupling two distinct systems together – a planner, and a mapper. At each step the mapper updates the robot’s beliefs about the world, and then feeds this to the planner, which figures out an action to take…
…The Mapper gives the robot access to a memory system that lets it represent its world in terms of an overhead two-dimensional map. It feeds its map to The Planner, which uses that data to plan the actions it takes to bring it closer to its goal. Once the planner has taken an action, the map is updated again. The map is egocentric, which means that it naturally differentiates the agent from the rest of its environment. (In other words, action cements the agent’s perception of itself as being distinct from the rest of the world – how’s that for motivation!) This egocentric representation, combined with actions that are represented as egomotion, makes it easier for the system to recalibrate itself and learn more about its environment, without a human needing to be in the loop…
… The system still fails occasionally, usually due to its first person view leading it to miss a good route to its target, and ending up with it dithering about the space.
…It’s worth noting that this project, like all scientific endeavors, builds on numerous research contributions that have occurred in recent years: the planning component depends on a residual network (developed by Microsoft researchers and used to win the ImageNet competition in Dec 2015), a hierarchical variant of value iteration networks (UC Berkeley, released February 2016), and the whole combined system is trained using DAgger (Carnegie Mellon, 2011). This highlights the inherent modularity of the modern approach to AI, and reminds us that any research contribution is there only due to standing on the shoulders of the contributions of innumerable others. (If you want to join me in a little AI archeology project, send me an email to jack@jack-clark.net)
… “A central limitation in our work is the assumption of perfect odometry, robots operating in the real world do not have perfect odometry and a model that factors in uncertainty in movement is essential,” the researchers write.
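The mapper/planner split can be illustrated with a toy grid-world sketch (my own simplification, not the paper’s learned networks): the “mapper” writes local observations into a persistent map, and the “planner” replans a shortest path over that map at every step, optimistically treating unexplored cells as free:

```python
from collections import deque

WORLD = ["S#...",
         ".#...",
         "....."]          # '#' = wall; agent starts top-left, goal top-right
H, W = len(WORLD), len(WORLD[0])
start, goal = (0, 0), (0, 4)
belief = {}                # the mapper's persistent memory: (row, col) -> cell

def observe(pos):
    # Mapper: record every cell within a radius-1 window into the map.
    r0, c0 = pos
    for r in range(max(0, r0 - 1), min(H, r0 + 2)):
        for c in range(max(0, c0 - 1), min(W, c0 + 2)):
            belief[(r, c)] = WORLD[r][c]

def plan(pos):
    # Planner: BFS over cells believed free; unknown cells count as free.
    frontier, came = deque([pos]), {pos: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            step = cur
            while came[step] != pos:
                step = came[step]
            return step        # first move along a shortest believed path
        r, c = cur
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < H and 0 <= nxt[1] < W
                    and nxt not in came and belief.get(nxt, '.') != '#'):
                came[nxt] = cur
                frontier.append(nxt)
    return pos                 # no route believed to exist: stay put

pos, steps = start, 0
while pos != goal and steps < 50:
    observe(pos)               # update beliefs, then act on them
    pos = plan(pos)
    steps += 1
```

The paper replaces the hand-written BFS with a value-iteration-style planner and the dictionary with a learned egocentric map, and trains both end to end, but the interplay is the same: every action both pursues the goal and enriches the map the next plan is made from.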

Is that a gun in your hand or a corn cob spraypainted black? No, no that’s definitely a gun. Alright, come with me! Research from the University of Granada in Spain shows how to do two useful things: 1) build and augment a dataset of handguns in films using deep learning, and 2) use methods like an R-CNN to then successfully detect handguns in videos. Admins of video sites that have to deal with all the usual video nasties – weapons, drugs, sex – will likely be interested in such a technique. It could also reduce the number of people tech companies hire to manually look at disreputable content – a low-paying, sometimes traumatising job that I think we would gladly cede to the machines.

The first rule of deep learning is you don’t talk about the black magic… Nikolas Markou has sadly been kicked out of AI club for talking about one of its uncomfortable truths – that because we lack a well developed set of theories for why AI works the way it does, many experts in the field use various tips and tricks gained through trial-and-error and intuition, rather than a deep understanding of theory. Read on for details of some of those tricks.

Smashing! Researchers use deep reinforcement learning to beat pros at Super Smash Brothers Melee: researchers from Tenenbaum’s lab at MIT have used reinforcement learning to train Smash Bros character Captain Falcon to a point of competency where he is able to play competitively with top-ranked human players. This approach works with both policy gradients and q-learning. This is a pretty good example of how RL has moved on from relatively simple two-dimensional environments like Atari to complex, changing, 3D environments. Read more here: Beating the World’s Best at Super Smash Bros Melee with Deep Reinforcement Learning
… the algorithms found some neat approaches that a typical human would not likely stumble on: “Q-learners would consistently find the unintuitive strategy of tricking the in-game AI into killing itself. This multi-step tactic is fairly impressive; it involves moving to the edge of the stage and allowing the enemy to attempt a 2-attack string, the first of which hits (resulting in a small negative reward) while the second misses and causes the enemy to suicide (resulting in a large positive reward),” the researchers write.
…the result indicates that RL has a chance of helping to solve tasks like mastering StarCraft 2. That’s because both games share some traits that traditional Atari games lack – partial observability, and multiple players. Therefore, it’s possible that SSBM could become a kind of intermediary metric as the AI community (Zerg) rushes to solve StarCraft, which will require many other algorithmic inventions to crack.
…meanwhile, Super Smash Bros, on the cheap!... Stanford researchers show you can train an AI to master Smash Bros using imitation learning, with no RL required. Imitation learning approaches are easier for less experienced researchers to tune and are cheaper, computationally, to train. Additionally, the approach outlined here is purely vision-based – meaning it has no access to the real state of the game, nor any particular hooks into it. That can be challenging for RL algorithms. AIs trained via this method were able to defeat a Level 3 difficulty CPU player, roughly match a Level 6, respectably hold their own or lose against a Level 9 character. Read more: The Game Imitation: Deep Supervised Convolutional Networks for Quick Video Game AI.
…Imitation learning is not particularly fashionable. The authors note that their approach “does not currently enjoy much status within the machine learning community.” But they think the value in their work is that it demonstrates how absurdly powerful CNN approaches are.
…(Minor details: the authors gathered their data via Nintendo 64 emulation and screen capture tools, using software called Project 64 v2.1. AIs were trained on around 600,000 frames of games, around 5 hours of playing.)
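For readers curious what the q-learning half of the MIT approach looks like in miniature, here is a tabular sketch on a toy chain environment – nothing like Melee’s state space, but the same update rule at its core:

```python
import random

random.seed(0)

# Toy chain MDP: states 0..5, actions 0 = left, 1 = right.
# The only reward is +1 for reaching the rightmost state.
N, GOAL = 6, 5
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randint(0, 1)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # The Q-learning update: bootstrap off the best next-state value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(GOAL)]
```

The Melee agents replace the table with a deep network over game state and add self-play, but the temporal-difference target – reward plus discounted best next value – is unchanged, which is exactly how they end up discovering reward-maximizing oddities like baiting the in-game AI into suiciding.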

Three humans and a hundred bots: interesting article about Philip Kaplan’s experience of building Distrokid, a music distribution service. Main thing of note to Import AI readers is Kaplan’s explanation of how Distrokid is able to turn over millions in revenue while running on only three fulltime staff: “DistroKid has dozens of automated bots that run 24/7. These bots do things that humans do at other distributors. For example, verifying that artwork & song files are correct, changing song titles to comply with each stores’ unique style guide, checking for infringement, delivering files & artwork to stores, updating sales & streaming stats for artists, processing payments, and more,” he says.

Cryptocurrency for the ceaseless machinations of those that tend the AI hedge fund: Numerai, a startup that appears to have emerged from the psychic loam of a proto William Gibson novel, has launched a new cryptocurrency, Numeraire, to strengthen its AI-based hedge fund. The strangest part? All of those buzzwords are being used legitimately!…
…Numerai uses homomorphic encryption to fuzz a load of financial data and make it available to a global crew of data scientists, who then poke and prod at it with algorithms trying to make predictions about how the numbers change. They then upload these models to Numerai, which creates an ensemble from them and uses that to trade mysterious financial instruments. Successful authors get paid out (in Bitcoin, naturally) in accordance with the success of their algorithm in the market. This week, Numerai distributed 1,000,000 Numeraire currency units across its 12,000 algorithm author members. Those people can now use Numeraire to place bets on the success of their own models, and if they win the value of Numeraire goes up. This means that the data scientists now have a direct financial incentive to participate in the platform (sweet, sweet bitcoin), and a secondary one (wagering Numeraire in the internal Numerai economy, whose value grows with the effectiveness of the predictions made by Numerai). The incentives seem designed to stop people from gaming the system…
… I’ve spent so long waffling on about this because I think Numerai is probably what an AI-first business looks like. Replace the 12,000 data scientists with smart, financial AI prediction systems, and you’re there. And in the same way AIs will exploit their environment for rewards that may not benefit the creator (eg, reward hacking, goal divergence, etc), humans will try to take as much money out of the market with the minimal amount of effort. If Numerai’s incentive system is successful then it can chalk out a path for AI companies to take in the future.

OpenAI bits & pieces:

OpenAI’s Tom Brown will be giving a talk at AI By the Bay on Wednesday, March 8, talking about OpenAI Gym and Universe.

Tech Tales:

[2020, a converted Church in Barcelona, full of computers behind austere glass]

They call the AI system ‘the math submarine’, but if you had them draw it for you no one could give a true depiction of its form. That’s because it’s a bundle of high-dimensional representations, drifting through complex, ethereal fields of numbers. You send the AI out there, out to the brain-warping weird edges of mathematics, and it tries to explore the border between what is proved and what is unproved, and it comes back with answers that are verifiably true, but difficult for a human to understand.

Still, you anthropomorphize it. Does it get lonely, out there, drifting through high-dimensional clouds of conjectures, each representing some indication of proof, or truth, or clarity. Does it feel itself distinct from these things? Does number have a texture to it? Are there currents?

When you were young you once looked up between two tall buildings and saw a plane pass overhead. You could never see the whole plane at once as your view was occluded by the walls of the buildings. But your brain filled in the rest, using its sense of ‘plane-ness’ to extend the slice of the object to the whole. Does the math submarine see numbers in this way, you wonder? Does it see a group of conjectures and have an intuition about what they mean? You know you can’t know, but your other computers can, and you watch the interfaces between this AI system and the others, and tend to the servers and ensure the network is running, so the machine can go and explore something you cannot see or truly know.

Import AI: Issue 30: Cheaper neural network training, mysterious claims around Bayesian Program Synthesis, and Gates proposes income tax for robots

 

Half-price neural networks thanks to algorithmic tweaks: new research, Distributed Second-Order Optimization Using Kronecker-Factored Approximations (PDF), creates a new optimization method for training AI systems. The approach is flexible and can be dropped into pre-existing software relatively easily, its creators say. Best of all? “We show that our distributed K-FAC method speeds up training of various state-of-the-art ImageNet classification models by a factor of two compared to an improved form of Batch Normalization”. Quite rare to wake up one day and discover that your AI systems have just halved in price to train.
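The core trick, very roughly, is to approximate a layer’s curvature (Fisher) matrix as a Kronecker product of two small matrices – one built from the layer’s input activations, one from its output gradients – which makes the preconditioner cheap to invert and apply. A pure-Python sketch for a single linear layer (toy dimensions and an arbitrary damping value; K-FAC proper adds a great deal more machinery):

```python
import random

random.seed(0)

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def inverse(M):
    # Gauss-Jordan elimination with partial pivoting.
    k = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(k)]
           for i, row in enumerate(M)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(k):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * pv for v, pv in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

def mean_outer(us, vs):
    # Batch-averaged outer product: (1/batch) * sum of u v^T.
    d1, d2 = len(us[0]), len(vs[0])
    out = [[0.0] * d2 for _ in range(d1)]
    for u, v in zip(us, vs):
        for i in range(d1):
            for j in range(d2):
                out[i][j] += u[i] * v[j] / len(us)
    return out

n_in, n_out, batch = 3, 2, 64
acts = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(batch)]    # layer inputs a
gouts = [[random.gauss(0, 1) for _ in range(n_out)] for _ in range(batch)]  # dLoss/d(output) g

A = mean_outer(acts, acts)        # input second-moment matrix, n_in x n_in
G = mean_outer(gouts, gouts)      # output-gradient second-moment, n_out x n_out
grad_W = mean_outer(gouts, acts)  # the ordinary gradient for W, n_out x n_in

damping = 0.1                     # keeps the small factors invertible
for i in range(n_in):
    A[i][i] += damping
for i in range(n_out):
    G[i][i] += damping

# K-FAC preconditioned gradient: G^-1 @ grad_W @ A^-1, instead of
# inverting the full (n_in * n_out)^2 Fisher matrix.
precond = matmul(inverse(G), matmul(grad_W, inverse(A)))
```

Inverting a 3x3 and a 2x2 matrix here stands in for what would otherwise be a 6x6 inverse; at real layer widths that gap is what makes approximate second-order steps affordable, and the paper’s contribution is doing this efficiently across distributed workers.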

Bayesian Program Synthesis – bunk or boon? Startup Gamalon has decloaked with a new technology – Bayesian Program Synthesis – that claims to be able to do tough AI tasks like learning to classify images from a handful of examples, rather than a thousand. The work has echoes of MIT research published in late 2015 (PDF), which showed that it is possible to use Bayesian techniques similar to this one to perform ‘one shot learning’ – which lets computers learn to recognize something, say, a cat, from only a single glimpse. The research was shown to work on a specific test set that had been implemented in a specific way. Gamalon is claiming that its tech has more general purpose utility. However, the startup has published no details about its research and it is very difficult to establish how staged the press interview demos were. If Gamalon has cracked such a hard problem then I’m sure the scientific community would benefit from them sharing their insight. This would also help justify their significant claims.

Income tax for robots: Bill Gates says that people should consider taxing robots to generate revenues for government to offset the jobs destroyed via automation. Small query I’d like to see someone ask Bill: in hindsight, should governments also have taxed software like Excel to offset the jobs it destroyed?

AirSim: because it’s cheaper to crash a drone inside software: Microsoft has released AirSim, an environment built on the Unreal game engine providing a reasonably high-fidelity simulation of reality, giving developers a cheap way to train drones and other robots via techniques like reinforcement learning, then transfer those systems into the real world (which we already know is possible, thanks to research papers such as CAD2RL). This is useful for a couple of reasons: 1) you can run the sim much faster than real life, letting you make an order of magnitude more mistakes while you try to solve your problem, and 2) this reduces the cost of mistakes – it’s much cheaper to fire up a new simulation than try to repair or have to replace the drone that just bumbled into a tree. (Well, research from MIT and others already suggests you won’t need to worry about the tree, but you get my point.)
…Simulators have become a strategic point of differentiation for companies as each battles to craft the perfect facsimile of the real world to let them train AI systems that can then be put to work in reality. The drawback: we don’t yet have a good idea of how realistic simulators need to be, so it’s tricky to anticipate the correct level of fidelity at which to train these systems. In other words, we don’t know what level of simulation is sufficient to ensure that when we arrive in reality we are able to achieve our task. That’s because we haven’t derived an underlying theory to help guide our intuitions about the difference between the virtual and the real – Baudrillard eat your heart out!
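To make the "cheap mistakes" point concrete, here's a toy stand-in for the sim-train-then-deploy loop – none of this is AirSim's actual API – in which a tabular Q-learning agent crashes a simulated drone into a tree as often as it needs to before settling on a safe route:

```python
import random

# 6-cell corridor: drone starts at 0, goal at cell 5, tree at cell 3.
# "forward" moves one cell, "hop" moves two; hitting the tree ends the
# episode with a crash.  Crashes are free in simulation, so the agent
# can fail hundreds of times while learning a policy.
N_CELLS, TREE, GOAL = 6, 3, 5
ACTIONS = {"forward": 1, "hop": 2}

def step(pos, action):
    new = min(pos + ACTIONS[action], GOAL)
    if new == TREE:
        return new, -1.0, True   # crashed into the tree
    if new == GOAL:
        return new, +1.0, True   # reached the goal
    return new, 0.0, False

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
for episode in range(500):
    pos, done = 0, False
    while not done:
        act = (random.choice(list(ACTIONS)) if random.random() < 0.2
               else max(ACTIONS, key=lambda a: Q[(pos, a)]))
        new, r, done = step(pos, act)
        best_next = 0.0 if done else max(Q[(new, a)] for a in ACTIONS)
        Q[(pos, act)] += 0.1 * (r + 0.9 * best_next - Q[(pos, act)])
        pos = new

# greedy "deployment" rollout with the learned policy
pos, done, path = 0, False, [0]
while not done:
    act = max(ACTIONS, key=lambda a: Q[(pos, a)])
    pos, r, done = step(pos, act)
    path.append(pos)
print(f"path: {path}, final reward: {r}")
```

The learned greedy rollout hops over the tree cell – all of the -1 rewards it took to get there happened inside the simulator, where they cost nothing.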

Skeptical about The Skeptic’s skepticism: We shouldn’t worry about artificial intelligence disasters because they tend to involve a long series of “if-then” coincidences, says Michael Shermer, publisher of The Skeptic magazine.

Enter the “vision tunnel” with Jeff Bezos: When goods arrive at Amazon’s automated fulfillment center they pass through “a “vision tunnel,” a conveyor belt tented by a dome full of cameras and scanners”, where algorithms analyze and sort each box. “What takes humans with bar-code scanners an hour to accomplish at older fulfillment centers can now be done in half that time,” Fast Company reports… There’s also a 6-foot tall Fanuc robot arm, which works with a flock of Kiva robots to load goods into the shifting robot substrate of the warehouse. The million-plus-square-foot facility employs around a thousand people, according to the article. A similarly sized Walmart distribution center employs around 350 (though this doubles during peak seasons) – why the mismatch in scale, given the likelihood of Amazon having a larger degree of employee automation?

8 million Youtube bounding boxes sitting on a wall, you take one down, classify it and pass it around, 7 million 900 and 99 thousand and 900 and 99 Youtube bounding boxes on a wall: Google has updated its 8 million video strong YouTube dataset with twice as many labels as before…
…. and it’s willing to pay cash to those who experiment with the dataset, and has teamed up with Kaggle to create a series of competitions/challenges based around the dataset, with a $100,000 prize pool available. (This also serves as a way to introduce people to its commercial cloud services, as the company is providing some credits for its Google Cloud Platform as well for those that want to train and run their own models. And I imagine there’s a talent-spotting element as well.)
… I’ve been wondering if the arrival of new datasets, or the augmentation of existing ones, is a leading indicator about AI progress – it seems like when we sense a problem is becoming tractable we release a new dataset for it, then eventually solve the problem. Thoughts welcome!

Deep Learning papers – curated for you. The life of an AI researcher involves sifting through research literature to identify new ideas and ensure there aren’t too many overlaps between yet-to-be-published research and what already exists. This list of curated AI papers may be helpful.

When does advanced technology become DIY friendly?: warzones are a kind of primal soup for (mostly macabre) invention. This Wired article on robot builders in the Middle East highlights how a combination of cheap vision systems, low-cost robots, and software has allowed inventive people to repurpose consumer technology for war machines, like little moveable defense platforms and gun turrets. Today, this technology is very crude and both its effectiveness and use are unknown. But it highlights how rapidly tech can be repurposed and reapplied. Something the AI community should bear in mind as it publishes research and code based on its ideas.

Neural architecture search VERSUS interpretability: a vivid illustration from Google Brain resident David Ha of the baroque topologies of neural networks created through techniques like neural architecture search.

Google researcher handicaps AI research labs: Google Brain research engineer Eric Jang has ranked the various AI research labs. He ranks Deepmind and… Google Research in joint first place, followed by OpenAI & Facebook, followed by MSR (3rd) and Apple (4th). He puts IBM at 10 and doesn’t specify the intervening companies. “Given open source software + how prolific the entire field is nowadays, I don’t think any one tech firm “leads AI research” by a substantial margin”, he writes…
…That matches comments made by Baidu’s Andrew Ng, who has said that any given AI research lab has a lead of at most a year on others…

IBM Watson benched: MD Anderson has ended its collaboration with IBM on using AI technology marketed under the company’s “Watson” omnibrand. The strangest part? MD Anderson paid IBM for the privilege of trialing its technology – an unusual occurrence, since usually it’s the other way round, Forbes reports. The project was suffused with delays and it’s still hard to establish whether things ended because of IBM’s tech, or because of a series of unfortunate bureaucratic events within MD Anderson.

OpenAI bits&pieces:

Adversarial examples – why it’s easier to attack machine learning rather than defend it. New OpenAI post about adversarial examples, aka optical illusions for computers, delves into the technology and explains why it may be easier to use approaches like this to attack machine learning systems, rather than defend them.
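The core attack is strikingly simple, which is part of why defense is the harder side. Here's a minimal fast-gradient-sign sketch (my toy, not the post's code): for a fixed logistic classifier, nudging the input a small step in the sign of the loss gradient flips the predicted label.

```python
import numpy as np

# Fixed logistic classifier: p(class 1 | x) = sigmoid(w.x + b).
w, b = np.array([2.0, -3.0]), 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.1, 0.3])           # correctly classified as class 0
p = sigmoid(w @ x + b)             # ~0.45 -> predicted class 0

# FGSM: move a small epsilon in the sign of the loss gradient w.r.t. x.
# For cross-entropy loss with true label y = 0, dL/dx = p * w.
grad_x = p * w
x_adv = x + 0.2 * np.sign(grad_x)  # epsilon = 0.2
p_adv = sigmoid(w @ x_adv + b)     # ~0.69 -> now predicted class 1
print(f"clean: {p:.2f}, adversarial: {p_adv:.2f}")
```

The defender has to be robust to every such perturbation direction at once; the attacker only needs one to work.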

Ilya Sutskever talks at the Rework Summit: if you weren’t able to see Ilya’s talk at the Rework deep learning summit in person, then you can catch a replay here.

Tech tales:

[A boardroom at the top of one of London’s increasingly HR Geiger-esque skyscrapers.]

“So as you’ll see the terms are very attractive, as I’m sure your evaluator has told you,” says Earnest, palms placed on the table before him, looking across at Reginald, director of the company-to-be-acquired.
“I’m afraid it’s not good enough,” Reginald says. “As I’m sure your own counter-evaluator has told you.”
“Now, now, that doesn’t seem right, let’s-”
“Enough!” Reginald says. “Leave it to them.”
“As you wish,” says Earnest, leaning back.

Earnest and Reginald stare at each other as their evaluators invisibly hash out the terms of a new deal, each one probing the other for logical weaknesses, legal loopholes, and what some of the new PsychAIs are able to spot – revealed preference from past deal-making. As the company-to-be-acquired, Reginald has the advantage, but Earnest’s corporation has invested more heavily in helper agents, which have spent the past few months carefully interacting with aspects of Earnest’s business to provide more accurate True Value Estimates.

Eventually, a deal is created. Both Earnest and Reginald need to enlist translator AIs to render the machine-created legalese into something the both of them can digest. Once Reginald agrees to the terms the AIs begin another wave of autonomous asset-stripping, merging, and copying. Jobs are posted on marketplaces for temporary PR professionals to write the press release announcing the M&A, and design contracts are placed for a new logo. This will take hours.

Reginald and Earnest look at each other. Earnest says, “Pub?”
“My evaluator just suggested the same,” says Reginald, and it’s tough to tell if he’s joking.

Import AI: Issue 29: neural networks crack quantum problem, fingernail-sized AI chips, and a “gender” classifier screwup

It takes a global village to raise an AI… a report titled “Advances in artificial intelligence require progress across all of computer science” (PDF) from the Computing Community Consortium identifies several key areas that should be developed for AI to thrive: computing systems and hardware, theoretical computer science, cybersecurity, formal methods, programming languages, and human-computer interaction…
…better support infrastructure will speed the rate at which developers embrace AI. For example, see this Ubuntu + AWS + AI announcement from Amazon: the “AWS Deep Learning AMI for Ubuntu” will give developers a pre-integrated software stack to run on its cloud, saving them some of the tedious, frustrating time they usually spend installing and configuring deep learning software.
Baidu’s AI software PaddlePaddle now supports Kubernetes, making it easier to run the software on large clusters of computers. Kubernetes is an open source project based on Google’s internal ‘Borg’ and ‘Omega’ cluster managers, and is used quite widely among the AI community – last year, OpenAI released software to make it easier to run Kubernetes on Amazon’s cloud.

Finally, AI creates jobs for humans! Starship Technologies is hiring a “robot handler” to accompany its freight-ferrying robots as they zoom around Redwood City. Requirements: “a quick thinker with the ability to resolve non-standard situations“.

Ford & the ARGOnauts: Ford will spend $1 billion over five years on AI, via a subsidiary company called Argo. Argo is run by veterans of both Google and Uber’s self-driving programs. Details remain nebulous. Much of the innovation here appears to be in the financial machinery underpinning Argo, which will make it easier for Ford to offer hefty salaries and stock allocations to the AI people it wants to hire. Reminiscent of Cisco’s “spin-in” company Insieme.

Powerful image classification, for free: Facebook has released code for ‘ResNeXt’, an image classification system outlined in its research paper Aggregated Residual Transformations for Deep Neural Networks. Note: one of the authors of ResNeXt is Kaiming He, the whizkid from MSR Asia who helped invent the ImageNet 2015-winning Residual Networks.

Rise of the terminator accountants: Number of traders employed on the US cash equities trading desk at Goldman Sachs’s New York office:
…in 2000: 600
…in 2017: 2, supported by 200 computer engineers.
…”Some 9,000 people, about one-third of Goldman’s staff, are computer engineers,” reports MIT Technology Review.

AI: 2. Hand-tuned algorithms: 0: New research shows how we can use modern AI techniques to learn representations of complex problems, then use some of the resulting predictive models in place of hand-tuned algorithms. The “Solving the quantum many-body problem with artificial neural networks” research shows how this technique can be competitive with state-of-the-art approaches. “With further development, it may well prove a valuable piece in the quantum toolbox,” the researchers write.
…Similarly, Lawrence Berkeley National Laboratory recently trained machine learning systems to predict metallic defects in materials, lowering the cost of conducting research into advanced alloys and other lightweight new materials. “This work is essentially a proof of concept. It shows that we can run density functional calculations for a few hundred materials, then train machine learning algorithms to accurately predict point defects for a much larger group of materials,” the researchers say. “The benefit of this work is now we have a computationally inexpensive machine learning approach that can quickly and accurately predict point defects in new intermetallic materials. We no longer have to run very costly first principle calculations to identify defect properties for every new metallic compound.”

Microscopic, power-sipping AI circuits: researchers with the University of Michigan and spinout CubeWorks have created a deep learning processor a fraction of the size of a fingernail. The new chip implements deep neural networks on a 7.1mm2 chip that sips a mere 288 microwatts of power (PDF). They imagine the chip could be used for basic pattern recognition tasks, like a home security camera knowing to only record in the presence of a moving human or animal rather than a shifting tree branch. The design hints at an era for AI where crude pattern recognition capabilities are distributed in processors so tiny and discreet you could end up with fragments in your shoes after walking on some futuristic beach. Slide presentation with more technical information here.

AI needs its own disaster: AI safety researcher Stuart Russell worries that AI needs to have a Chernobyl-scale disaster to get the rest of the world to wake up to the need for fundamental research on AI safety…
…“I go through the arguments that people make for not paying any attention to this issue and none of them hold water. They fail in such straightforward ways that it seems like the arguments are coming from a defensive reaction, not from taking the question seriously and thinking hard about it but not wanting to consider it at all,” he says. “Obviously, it’s a threat. We can look back at the history of nuclear physics, where very famous nuclear physicists were simply in denial about the possibility that nuclear physics could lead to nuclear weapons.”
…some disagree about the dangers of AI. Andrew Ng, a former Stanford professor and Google Brain founder who now runs AI for Chinese tech giant Baidu, talked about the “evil AI hype circle” in a recent lecture at the Stanford Graduate School of Business (video). His view is that some people exaggerate the dangers of “evil AI” to generate interest in the problem, which brings in more funding for research, which goes on to fund “anti-evil-AI” companies. “The results of this work drives more hype”, he says. The funding for these sorts of organizations and individuals is “a massive misallocation of resources”, he says. Another worry of Ng’s: the focus on evil AI can distract us from a much more severe, real problem, which he says is job displacement.
…Facebook’s head of AI research, Yann LeCun, said in mid-2016: “I don’t think AI will become an existential threat to humanity… If we are smart enough to build machine with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity.”
… I worry that AI safety is such a visceral topic that people react quite emotionally to it, and get freaked out by the baleful implications to the point they don’t consider the actual research being done. Some problems people are grappling with in AI safety include: securing machines against adversarial examples, figuring out how to give machines effective intuitions through logical induction, and ensuring that cleaning robots don’t commit acts of vandalism to achieve a tidy home, among others. These all seem like reasonable avenues of research that will improve the stability and resilience of typical AI systems…
… but don’t take my word for it – read about AI safety yourself and come to your own decision: for your next desert island vacation (stranding), consider bringing along a smorgasbord of these 200 AI resources, curated by the Center for Human-Compatible AI at UC Berkeley.
…and if you want to do something about AI safety, consider applying for a new technical research intern position with the Center for Human-Compatible AI at UC Berkeley and the Machine Intelligence Research Institute.

Satellite eyes, served three different ways: Startup Descartes Labs has released a new set of global satellite maps in three distinct bands – RGB, Red Edge bands, and synthetic aperture radar range/azimuth measurements. The imagery has been pre-processed to remove clouds and adjusted for the angle of the satellite camera as well as the angle of the sun.

Declining economies of scale: just as companies can expect to see their rate of growth flatten as they expand, deep learning systems see per-GPU returns diminish as they add more GPUs, as the benefits they gain start to be nibbled away by the latency and infrastructure costs introduced by running multiple GPUs in parallel…
… New work from Japanese AI startup Preferred Networks shows that its free ‘Chainer’ software can generate a 100X performance speedup from 128 GPUs. This is extremely good, but still highlights the slightly declining returns people get as they scale up systems.
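A quick back-of-envelope on that number: treating the shortfall as a fixed serial/communication fraction (Amdahl's law – a simplification, since real communication costs also grow with cluster size) suggests efficiency keeps sliding as GPUs are added.

```python
# Observed: ~100x speedup from 128 GPUs with Chainer.
n, observed_speedup = 128, 100.0
efficiency = observed_speedup / n  # 0.78125 -> ~78% scaling efficiency

# Back out the implied serial fraction s from Amdahl's law:
#   speedup(n) = 1 / (s + (1 - s) / n)
s = (1 / observed_speedup - 1 / n) / (1 - 1 / n)

def projected_speedup(gpus):
    return 1 / (s + (1 - s) / gpus)

print(f"efficiency at 128 GPUs: {efficiency:.1%}")
print(f"implied serial fraction: {s:.3%}")
print(f"projected speedup at 256 GPUs: {projected_speedup(256):.0f}x "
      f"({projected_speedup(256) / 256:.1%} efficiency)")
```

Even a serial fraction of roughly a fifth of a percent drags a hypothetical 256-GPU run down to around 164x – about 64% efficiency – which is the flattening curve the item describes.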

Gender IS NOT in the eyes of the beholder: New research “Gender-From-Iris or Gender-From-Mascara?” appears to bust experimental results showing you can predict gender from a person’s iris, instead pointing out that many strong results appear to be contingent on detectors that learn to spot mascara. Machine learning’s law of unintended consequences strikes again!…
… It reminds me of an apocryphal story an AI researcher once told me: in the 1980s the US military wanted to use machine learning algorithms to automatically classify spy satellite photos for whether they contained soviet tanks or not. The system worked flawlessly in tests, but when they put it into production they discovered that its results were little better than random… After some further experimentation they discovered that in every single photo from their task data that contained a tank, there was also some kind of cloud. Therefore, their ML algorithms had developed a superhuman cloud-classifying ability, and didn’t have the foggiest idea of what a tank was!
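The failure mode is easy to reproduce. In this toy reconstruction (entirely my construction – and the "tank" pixels here are deliberately pure noise, exaggerating the effect), a logistic classifier trained on data where clouds perfectly track the label collapses to chance once clouds become independent of the label:

```python
import numpy as np

# Training set: a "cloud" feature (+1/-1) perfectly tracks the label,
# while five "tank pixel" features are pure noise.  Test set: clouds are
# independent of labels, exposing the learned confound.
rng = np.random.default_rng(0)

def make_data(n, clouds_track_label):
    labels = rng.integers(0, 2, n)
    tank_pixels = rng.normal(0.0, 1.0, (n, 5))        # no signal at all
    clouds = 2 * labels - 1 if clouds_track_label else rng.choice([-1, 1], n)
    return np.column_stack([tank_pixels, clouds]), labels

def train_logreg(X, y, lr=0.5, steps=300):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)              # log-loss gradient
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

X_tr, y_tr = make_data(500, clouds_track_label=True)
X_te, y_te = make_data(500, clouds_track_label=False)
w = train_logreg(X_tr, y_tr)
train_acc, test_acc = accuracy(w, X_tr, y_tr), accuracy(w, X_te, y_te)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

Near-perfect training accuracy, coin-flip test accuracy: a superhuman cloud detector, no idea what a tank is.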

Rise of the machines = the end of capitalism as we know it? “Modern Western society is built on a societal model whereby Capital is exchanged for Labour to provide economic growth. If Labour is no longer part of that exchange, the ramifications will be immense,” said one respondent to a Pew Internet report about the ‘pros and cons of the algorithm age’.
…“I foresee algorithms replacing almost all workers with no real options for the replaced humans,” says another respondent.

Bushels of subterfuge in DeepMind’s apple orchard: As I write this newsletter on a Sunday, I’m still recovering from my usual morning activity – chasing my friend round an apple orchard, using a laser beam to periodically paralyze them, letting me hop over their twitching body to gather up as many apples as I can…
… in a strange turn of events it appears that Google DeepMind has been spying on my somewhat unique form of part-time sport, and have replicated this in a game environment called ‘gathering’ which they have used to explore the sort of collaborative and combative strategies that AI systems evolve…there’s also another environment called WolfPack, the less said about it the better. This sort of research is potentially very useful for large multi-agent simulations, which many people in AI are betting on as an area where exploration could yield research breakthroughs.

Lines in Google’s codebase: 2 billion
Number of commits into aforementioned codebase per day: 40,000
…From: “Software Engineering at Google”.

OpenAI Bits and Pieces

Learning how to walk, with OpenAI Gym: The challenge: model the motor control unit of a pair of legs in a virtual environment. “You are given a musculoskeletal model with 16 muscles to control. At every 10ms you send signals to these muscles to activate or deactivate them. The objective is to walk as far as possible in 5 seconds.” The components: OpenSim, OpenAI Gym, keras-rl, and much more. Try the challenge, but stay for the doddering legs!
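The interface the challenge describes – a 16-dimensional muscle activation vector every 10ms over a 5-second episode – has roughly the shape of this sketch, with a stub standing in for the real OpenSim environment (the stub's "physics" is fake and its method names are my invention; only the loop structure matters):

```python
import random

STEP_MS, EPISODE_MS, N_MUSCLES = 10, 5000, 16
n_steps = EPISODE_MS // STEP_MS   # 500 control decisions per episode

class StubLegsEnv:
    """Stand-in for the OpenSim musculoskeletal model (fake physics:
    distance just accumulates a function of activation asymmetry)."""
    def reset(self):
        self.distance = 0.0
        return [0.0] * 4                        # dummy observation
    def step(self, activations):
        left, right = sum(activations[:8]), sum(activations[8:])
        self.distance += 0.001 * abs(left - right)
        return [self.distance, 0.0, 0.0, 0.0], self.distance

def random_policy(observation):
    # one activation level in [0, 1] per muscle, every 10ms
    return [random.random() for _ in range(N_MUSCLES)]

random.seed(0)
env = StubLegsEnv()
obs = env.reset()
for _ in range(n_steps):
    obs, walked = env.step(random_policy(obs))
print(f"{n_steps} actions sent, walked {walked:.3f} units")
```

Swap the random policy for anything from keras-rl and you have the doddering legs the challenge promises.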

Arxiv Sanity – bigger, better, smarter! OpenAI’s Andrej Karpathy has updated Arxiv Sanity, an indispensable resource that I and many others use to keep track of AI papers. New features: better algorithms for surfacing papers people have shown interest in, and a social feature. (Also see Stephen Merity’s social tracker trendingarxiv.)

AI Control: OpenAI researcher Paul Christiano writes an informative blog on AI safety and security, called AI Control. In the latest post, “Directions and desiderata for AI control” he talks about some particularly promising research directions in AI safety.

OpenAI does open mic night: Catherine Olsson and I both gave short talks at the Silicon Valley AI Research meetup in SF last week. Catherine’s video. Jack’s video.

Asilomar conference: articles in Wired and Slate Star Codex about the Beneficial AI conference held at Asilomar in early January.

Tech tales:

[Diplomatic embassy, Beijing, 2025:]

It was a moonless mid-winter pre-dawn, when the flock of drones came overhead and emptied their cargo of chips over the building. The embassy cameras and searchlights picked out some of the thousands of chips as they fell down, hissing like hail on glass and steel roofs. Those staffers that heard them fall shivered instinctively, and afterwards some said that, when caught in the spotlights, the chips looked like metallic snow.

Over the next day the embassy staff did what they could, going around with vacuum cleaners and tiny mops, and ordering an external cleanup crew, but the snowfalls of chips – each one a tiny sensor, its individually meager capabilities offset by the sheer number of its kin – would come again, and eventually security protocols were tightened and people just resigned themselves to it.

Now, you had to negotiate a baroque set of security measures to get into the embassy. But still the chips got in, and cleaners would find them tracked into bathrooms, or sitting in undusted nooks and crannies. Outside, the air hummed with invisible surveillance, as the numerous little chips used their AI processors to turn on microphones in the presence of certain phrases. The data evaporated into the air, absorbed by flocks of small drones which would fly over the embassy, as they did in every town in every major city in every developed country, hoovering up data from what some termed the ‘State Dust’. The chips would lie in wait, consuming almost no power, till they heard a particular encrypted call-out from the government drones.

Even the chips that found themselves indoors would eventually be outside again, as some escaped through improper waste disposal measures, and others had their plastic barbs hook fortuitously on a trouser leg or shoe sole, to then be carried outside. And so their data was extracted as well and a titanic jigsaw was assembled.

It didn’t matter how partial the data from each chip was, given how many there were, and the frequency of their harvesting. Gather enough data and at some point you can make sense of the smallest little fragments, but you can only do this for all the little whispers of data from a city or a country if you’re a machine.

Import AI: Issue 28: What one quadrillion dollars pays for, research paper archaeology, and AI modules for drones

Cost of automating the entire global economy? One quadrillion dollars.
Requirements for the resulting system to be able to perfectly replace all human labor:
…Computation: 10^26 operations per second
…Memory: 10^25 bits
…I/O: 10^19 input-output bits per second
…Knowledge ingestion: 7 bits per person per second
…and many more marvelous numbers in this essay by data compression expert Matt Mahoney on ‘the cost of AI’. A virtuoso performance of extrapolation and (with apologies to Mitchell & Webb) numberwang-ery.

Google self-driving cars, report card (PDF):
…Miles driven in 2015: 424,331
…Miles driven in 2016: 635,868
…Disengagements per 1,000 miles, 2015: 0.80
…Disengagements per 1,000 miles, 2016: 0.20
… now let’s see how they do with hard training situations for which there is little good training data, like navigating a sandstorm-ridden road in the Middle East.

How much is an AI worth? In which Google’s head of M&A, Don Harrison, says Google is happy to throw large quantities of cash at AI companies. “It’s very hard to apply valuation metrics to AI. These acquisitions are driven by key talent — really smart people. It’s an area I’m focused on and our team is focused on. The valuations are part and parcel of the promise of the technology. We pay attention to it but don’t necessarily worry about it,” he says. (Emphasis mine.)

Your organization and public data: a message to Import AI readers: most organizations gather some form of data which can be safely published, and the world is richer for it. Case in point: Backblaze’s latest report on hard drive reliability. These reports should factor into any HDD buyer’s decision, as they represent good, statistically significant real-world data on drive performance. If you work at an organization that may have similar data that can be externalized, please try to make this happen – I’ll be happy to help, so feel free to email me.

Measurement: besides Atari, what are other good measures for the progression of reinforcement learning techniques? As we move into an era dominated by dynamic environments supplied by tools like Universe, DeepMind Lab, Malmo, Torchcraft, and others, how do we effectively model the progress of agents in a way that captures the full spectrum of their growing capabilities?

AI for researching AI: the Allen Institute for AI has released Citeomatic, a tool that uses deep learning to predict citations for a given paper. To test out the system I fed it OpenAI’s RL^2 paper and it gave me back over 30 papers that it recommended we consider citing. Many of these seem reasonable, e.g. ‘solving partially observable reinforcement learning problems with RNNs’, etc…
…Most of all, this seems like a great tool to help researchers find papers they should be reading. AI has a large literature and researchers frequently find themselves stumbling on good ideas from the previous decade. Any tool that can make this form of intellectual archaeology more efficient is likely to aid in science.

From the Dept. of Recursive Education: Tutorial from Arthur Juliani outlines how to build agents that learn how to learn, with code inspired by the DeepMind paper “Learning to reinforcement learn”, and the OpenAI paper “RL^2”.

Explanations as cognitive maps: the act of explaining situations lets us deal with the chaotic novelty of the world, and create useful abstractions we can use to reason about it. More detail, with many great research references, in this blog from Shakir at DeepMind.

Executive Order strikes a chill in math, AI community: President Trump’s executive order banning people from seven predominantly Muslim countries from coming to the US will have significant effects on academia, according to mathematician Terry Tao. “This is already affecting upcoming or ongoing mathematical conferences or programs in the US, with many international speakers (including those from countries not directly affected by the order) now cancelling their visit, either in protest or in concern about their ability to freely enter and leave the country,” he writes. “It is still possible for this sort of long-term damage to the mathematical community (both within the US and abroad) to be reversed or at least contained, but at present there is a real risk of the damage becoming permanent.”…
… another illustration of the law of unintended consequences when politics runs amok. Reminds me of one of the more subtle and chilling consequences of the UK’s decision to leave the European Union, which was that it reduced collaboration between EU and UK scientists, as EU researchers worried that, because their grants were contingent on EU funding, collaboration with UK scientists could violate funding clauses. Scientists need to collaborate across international borders.

“Give it the latest personality module, we’re wheels up in five minutes!” – autonomous drones are going to operate in such a huge possibility space that today’s if-this, then-that rule systems will be insufficient, according to this research paper from the University of Texas at Austin and SparkCognition. Eventually, scientists may use a combination of simulators and real world data to train different drone brains for different missions, then swap bits of them in and out as needed. “We propose delinking control networks from the ensembler RNN so that individual control RNNs may be evolved and trained to execute differing mission profiles optimally, and these “personalities” may be easily uploaded into the autonomous asset with no hardware changes necessary,” they write.

Language as the link between us and the machines: CommAI: Facebook AI researchers believe language will be crucial to the development of general purpose AI, and have outlined a platform named CommAI (short for communication-based AI) that uses language to train and communicate with agents…
…The idea is that the AI will operate in a world attempting to complete tasks and its only major point of input/output with the operator will be a language interface. “In a CommAI-mini task, the environment presents a (simplified) regular expression to the learner. It then asks it to either recognize or produce a string matching the expression. The environment listens to the learner response and it provides linguistic feedback on the learner’s performance (possibly assigning reward). All exchanges take place at the bit level,” they write.
… whether this solves the language ‘chicken and egg’ problem remains to be seen. Language is hard because it represents a high-level abstraction used to refer to a bunch of low-level inputs. “Horse” is our mental shorthand for the flood of sensory data that coincides with our experience of the creature. Ideally, we want our AIs to learn similar associations between the words in their language model and their experience of the world. CommAI is structured to encourage this sort of grounding.
…“We hope the CommAI-mini challenge is at the right level of complexity to stimulate researchers to develop genuinely new models,” they write.
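A sketch of what such a task loop might look like (class and method names are my invention, not Facebook's API, and this stays at the character level for readability rather than CommAI's bit level):

```python
import random
import re

class MiniRegexEnv:
    """Toy CommAI-mini-style task: the environment presents a simplified
    regex, the learner answers with a string, and the environment
    replies with reward plus linguistic feedback."""
    def __init__(self, patterns):
        self.patterns = patterns
        self.current = None
    def present(self):
        self.current = random.choice(self.patterns)
        return self.current
    def listen(self, response):
        ok = re.fullmatch(self.current, response) is not None
        return (1, "correct") if ok else (-1, "wrong, try again")

random.seed(0)
env = MiniRegexEnv([r"ab*", r"(0|1)+"])
pattern = env.present()

# a brute-force "learner" that proposes short strings until rewarded
reward, guess = -1, ""
for guess in ["", "a", "ab", "0", "01", "111"]:
    reward, feedback = env.listen(guess)
    if reward == 1:
        break
print(f"pattern {pattern!r}: produced {guess!r} ({feedback})")
```

The brute-force learner is exactly what the challenge hopes to move past – the point of the reward-plus-feedback channel is to let genuinely new models learn the task from the dialogue itself.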

Reinforcement learning goes from controlling Atari games, to robots, to… freeway onramps? “Expert level control of Ramp Metering based on Multi-Task deep reinforcement learning” shows how RL methods can be extended to the control systems for the traffic lights that filter cars onto freeways. In tests, the researchers’ system is able to learn an effective policy for controlling traffic across a 20-mile-long section of the 210 freeway in Southern California. Their technique beats traditional reinforcement learning algorithms, as well as a baseline system in which no control occurs at all…
…“By eliminating the need for calibration, our method addresses one of the critical challenges and dominant causes of controller failure making our approach particularly promising in the field of traffic management,” they write.

Soft robots for hard work: UK online supermarket Ocado has tested a new robotic hand, created as part of a European Union ‘Horizon 2020’ research initiative for soft robots. The hand can pick up objects of varying sizes and textures, and is shown deftly handling tricky items like limes and apples. It uses a dextrous gripper called ‘RBO Hand 2’ developed by the Technical University of Berlin. The approach is reminiscent of that of SF-based Otherlab, which is using soft materials and air to build more flexible robots and exoskeletons.

Sizing up deep learning frameworks: the AI community is bad at two things: reproducibility and comparability. The research paper “Benchmarking state-of-the-art deep learning software tools” assesses the varying properties of frameworks like TensorFlow, Caffe, Theano, CNTK, and MXNet, comparing their performance on a wide variety of tasks and hardware substrates. Worth reading to get an idea of the different capabilities of this software.

Import AI administrative note:

The riddle of the missing research paper: Last week I profiled some new research from MIT that involved automatically tying spoken words and sections of imagery together. However, due to a clerical error I did not link to the paper: “Learning Word-Like Units from Joint Audio-Visual Analysis”.

OpenAI bits & pieces:

23 principles to rule them all, 23 principles to bind them: earlier this month a bunch of people involved in the development, analysis, and study of artificial intelligence gathered at Asilomar for the “Beneficial AI” conference, a sequel to a 2015 gathering in Puerto Rico. Many people from OpenAI attended, including myself. There, the attendees helped hash out a set of 23 principles for the development of AI that signatories shall attempt to abide by.

Ian Goodfellow (OpenAI) and Richard Mallah (FLI), in conversation: podcast between Ian and Richard, in which they talk about some of the big AI breakthroughs that happened in 2016, and look ahead to some of the things that may define 2017 (machine learning security! Further development of neural translation systems! Work on OpenAI Universe!, etc).

Inverse autoregressive flow 2.0: Durk Kingma et al have posted a substantial update to the paper: “Improving Variational Inference with Inverse Autoregressive Flow”.

Do fake galaxies dream of the GANs that created them? Ian Goodfellow interview for this article in Nature about how scientists are starting to use AI-generated images to create training datasets to teach computers to spot real galaxies.

Tech Tales:

[2023, a cybercafe in Ankara]

When you were young you studied ants, staring at their nests as they grew, spreading tendrils through the dirt, sometimes brushing their antennae against the perspex walls sandwiching their captured colony. But you liked them best outside – crawling from a crack in the steps by the garage and charting a path along the sidewalk, carrying blades of grass and pebbles into some other nest. Your house was full of the signs of ants; each blob of silicone gel and mortared-over hole testifying to some pitched battle.

Modern spambots feel a lot like ants to you. After the first AI systems went online around 2018 the bots gained the ability to learn from the conversations they struck up with people on the internet. After this, their skills improved rapidly and their manners became more convincing.

Information started to flow between people and the bots, improving the AI’s ability to gain trust and effectively launder ideas, viruses, links, and eventually outright fraud. Spend a year arguing on the internet with someone and, stranger or no, there’s a good chance you’ll click on a link they post, seeing if it’s one of their nutty websites or something else to confirm your beliefs about them. And all your talking has taught them a lot about you.

The attacks mounted by the AIs destroyed the value of numerous publicly traded social companies. People changed their internet habits, becoming more cautious, better at security, more effective at uploading the sorts of words and images and videos to persuade people that they were real humans in the real world. And the AIs learned from this too.

So now you have to hunt them out, trace their paths and links to find the nests from which they emanate. As with the ants, you don’t get much insight from imprisoning them in display cases – synthetic social networks where the AI bots are studied as they interact with your own simulated people-bots. You feed data to their control systems and try to simulate the experience of the real internet, but soon your little model world goes out of sync with reality. The captured bot fails to keep up with its peers roaming wild, cut off from the links on the real internet where it gets its software updates – the few bits of code still pushed by humans.

So now you hunt these controllers through the internet and in real life, switching between VPNs and ethereal internet sites, and dusty internet cafes in the Baltics and, now, Ankara. But recently you’ve been having trouble finding the humans, and you wonder if some of the swarms you are tracking have stopped taking orders from people. You’ll find out soon enough – there’s an election next year.

Import AI: Issue 27: “Outrageously large” neural nets, AI for math, and the names of three oil rig robots

The future of AI: a big dollop of ‘learn-able computation’, paired with a sprinkling of hand-crafted algorithms: One reason why AlphaGo excelled at Go was because it paired a neural network-based learning system with a hand-tuned near-optimal Monte Carlo Tree Search algorithm. It’s likely that pairing the general-purpose function approximation properties of neural nets, with tried-and-tested algorithms will continue to yield results. (Akin to how people can enhance their mental performance by pairing intuitions with a few well-memorized rule-systems, like memory palaces, propositional calculus, and so on)
… further validation of this approach comes via AI being used for automated math: A Google paper, Deep Network Guided Proof Search, uses Deep Learning techniques to support proof search in a theorem prover. Automated theorem provers (ATP) simplify the lengthy process of verifying logical statements…
… The Google researchers train their AI systems to help guide their ATP along a few exploratory paths, then perform a second (faster) combinatorial search phase using hand-crafted strategies. “We get significant improvements in first-order logic prover performance, especially for theorems that are harder and require deeper search,” they write. “Besides improving theorem proving, our approach has the exciting potential to generate higher quality training data for systems that study the behavior of formulas under a set of logical transformations,” they write. “This could enable learning representation of formulas in ways that consider the semantics not just the syntactic properties of mathematical content and can make decisions based on their behavior during proof search.”…
… in a further demonstration of the flexibility of basic AI components, the researchers test their system with three different learning substrates: a standard convolutional neural network, a tree-LSTM, and a WaveNet
…this research builds on earlier work called DeepMath – Deep Sequence Models for Premise Selection, which demonstrated the viability of neural networks for automated logical reasoning.

Driverless buses: driverless vehicles will spend their first years of service in small, controlled environments, like corporate campuses, amusement parks, and diminutive states, such as Singapore. Latest example: Tata Motors, which spent the last 12 months testing self-driving buses on its corporate campus. (Workers might be better off bicycling, given that the buses are rate-limited to less than 10 kilometres per hour.)

Comma again? Breaking AI systems: feed Google’s new neural translation system the wrong string of characters and it might bark ‘Knife, Knife, Knife’ at you in German. Fun bug, probably to be blamed on trailing commas, found by Iain Murray.

NIPS & Immigration: NIPS 2017 is set to be in America, and that has caused some anxiety among AI researchers troubled by President Trump’s executive order on immigration. Change.org petition to alter the location of NIPS here.

Care for a wafer thin AI processor on top of your pi(e)? Google is asking the Raspberry Pi community for tips about what types of ‘smart tools’ it can produce for makers. Fingers crossed it gets a big enough response to start creating ultra-efficient AI software to be deployed on minicomputers like the Raspberry Pi, complementing the existing DIY open source implementations from the hacker community. Perhaps we can pair this with the cardboard drones mentioned last week? Disposable, almost-sentient paper aeroplanes.

Next-gen AI = Talking Pictures: In 2014 and 2015 we saw researchers jointly train word and image models, so computers could generate captions for images.
… Later in 2015 researchers started to experiment with the inverse of this idea, seeing if words could be used to generate imagery. They were successful, and in a little under a year and a half moved from generating low-res, fuzzy images of toilets in fields, to crisp ‘I can’t believe it’s not butter’-grade synthetic images (eg, StackGAN)…
…Now, researchers are jointly training AI systems on audio waveforms and imagery. A new paper from MIT teaches a computer to learn the correspondence between sound and vision…
…The network is trained in two stages: first, researchers teach computers to associate audio segments with particular images, then in a second stage the computer identifies various entities in the images and seeks to link those entities to particular slices of audio. The result is a trained network that can identify specific visual entities from spoken clues
…this has quite subtle implications. For one thing, if you were able to train a good enough network on English, then train the image-sound correspondence on another language, such as German, you could do so without access to the base German language data, instead translating through the shared visual layer…
…“This paves the way for creating a speech-to-speech translation model not only with absolutely zero need for any sort of text transcriptions, but also with zero need for directly parallel linguistic data or manual human translations,” the researchers write…
…new techniques for extracting emotions from speech, like “Emotion Recognition From Speech With Recurrent Neural Network” suggest this could be extended further, blending the emotions into the speech and imagery. (Next step: add smell.)…
… imagine a future where anthropologists seek out people whose language has little to no written record, and translate it into a universal data representation by having people narrate the contents of particular images or movies, pouring their speech into a shared visual dictionary whose entities are redolent of feeling. Brings a whole new meaning to the term ‘emotional palette’.
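The second training stage – linking entities in images to slices of audio – can be pictured as a similarity grid between audio frames and image regions. Here is a minimal numpy sketch with random stand-in embeddings; the shapes and the “best region per frame, averaged over time” scoring rule are illustrative choices, not necessarily what the MIT paper does:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16            # shared embedding dimension (assumed)
T = 10            # audio frames in a spoken caption
R = 3 * 3         # image regions, e.g. a 3x3 grid of conv features

audio = rng.normal(size=(T, D))   # stand-ins for audio-encoder outputs
image = rng.normal(size=(R, D))   # stand-ins for image-encoder outputs

# Similarity of every audio frame to every image region.
matchmap = audio @ image.T        # shape (T, R)

def pair_score(audio_emb, image_emb):
    # Each frame picks its best-matching region, then average over time;
    # a high score means the spoken caption "matches" the image.
    return (audio_emb @ image_emb.T).max(axis=1).mean()

# Retrieval: score one spoken caption against several candidate images.
candidates = [rng.normal(size=(R, D)) for _ in range(5)]
best = max(range(5), key=lambda i: pair_score(audio, candidates[i]))
```

With trained encoders instead of random vectors, the bright spots of the matchmap are exactly the word-to-object correspondences described above.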

And so their structures shall be as intricate and befuddling as the architecture of Gormenghast: The AI community’s love of neural nets troubles technology cartographer Bruce Sterling: “They have a baroque, visionary, suggestive, occultist quality when at this historical moment that’s the very last thing we need,” he says.

Everything’s bigger in America – Google research points way to neural nets 1,000X the size of current ones: new Google research, ‘outrageously large neural networks: the sparsely-gated mixture-of-experts layer’, shows how to scale-up neural networks without having to boil the ocean. The new system – Google’s latest approach to applying ‘conditional computation’ to its systems – allows for networks 1,000 times larger than contemporary ones, with only slight losses in computational efficiency…
… The trick to this is the addition of what Google calls a ‘mixture of experts’ layer, which basically gives the network the ability to call on an ever larger pool of ‘expert’ mini neural nets to help classify input. The experts sit behind a gating network, which autonomously chooses which experts to route each piece of data to, letting the network scale in size without becoming totally unwieldy…
… Google tested its approach on a language understanding task and a translation task, attaining good results in both. Perhaps the most convincing evidence for the utility of the new approach lies in its apparent efficiency, with the new approach attaining state-of-the-art results on a language translation task, while using fewer resources…
…Google Neural Machine Translation: 6 days of training across 96 Nvidia K80 GPUs
…Mixture-of-Experts model: 6 days of training across 64 Nvidia K80 GPUs
…(Fewer GPUs and more performance? Quick, someone send a bouquet of flowers with a note saying ‘Condolences’ to Jen-Hsun Huang).
…now let’s wait for a follow-up paper where the researchers follow through on their goal of training a trillion parameter model on a one trillion word corpus.
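The routing idea is simple enough to sketch in numpy. Everything below is illustrative (the sizes, the linear ‘experts’, the plain softmax gate); the real layer adds noisy top-k gating, load-balancing losses, and distributed dispatch across machines:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts, k = 8, 4, 16, 2     # illustrative sizes

# Each "expert" is a tiny network (here just one linear map); the gate
# is another small network that scores every expert for a given input.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
W_gate = rng.normal(size=(d_in, n_experts))

def moe_layer(x):
    logits = x @ W_gate
    top = np.argsort(logits)[-k:]           # route to the top-k experts only
    w = np.exp(logits[top])
    w /= w.sum()                            # softmax over the selected experts
    # Only k of the n_experts matrices are touched per input, so compute
    # stays roughly flat even as n_experts grows into the thousands.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_layer(rng.normal(size=d_in))
```

This conditional routing is why the parameter count can grow ~1,000x while the per-example compute barely moves.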

Roughneck robots for grubby deeds: The In Situ Fabricator1 brings us closer to an era where we can deploy robots with some general spectrum of capabilities into chaotic environments like construction sites. The robot is capable of millimeter-level precision, and is tested on two tasks: one, building an “undulating brick wall” (page 7, PDF) out of 1,600 bricks, stacked in a doubled lattice. The second task involves welding wires to create a ‘Mesh Mould’. The researchers are already working on a second version of the robot, and plan to increase its strength by moving from electric motors to hydraulic systems, while reducing its weight from 1.5 tons (too heavy for many buildings) to a more respectable 500 kilograms. The robot’s movement policies are derived from Optimal Control approaches, rather than in-vogue, but still quite young, neural network techniques.
But not all robots are new robots: This Bloomberg story about robots taking over oil rigs highlights how oil companies have been shedding employees due to a crash in oil prices and, in some cases, replacing them with robots. Read on for the description of National Oilwell Varco Inc.’s ‘Iron Roughneck’ robot, which replaces a few jobs. But is automation really to blame for the current job losses? Yes, but it’s hardly original…
Wind the clock back to 1983 and we find a news story talking about roughly the same hardware from roughly the same company doing roughly the same job in oil fields. “Roughnecks speak of their particular “Leroy” or “Igor” or “Billy Bob” as though “he” is a co-worker, which, in fact, is true. Some hands paint the machine with a face, big eyes or tennis shoes,” the news report says.

OpenAI bits&pieces:

Recursive job alert: We’re looking to hire the brilliant person that helps us hire the brilliant people. Recruitment Coordinator. (And, as ever, we continue to look for machine learning and engineering candidates).

OpenAI Universe: visual guide. Visual illustration from Tom Brown about the diversity of Universe.

Modding OpenAI Gym: blog post, with code, about modifying the reward system of a particular OpenAI Gym environment.

Tech Tales:

[Bushwick, 2025: words projected on the outside of a datacenter.]

Frank McDonald annoys the hell out of you but you need some of the cards in his hand, so have to tolerate his burping and farting and ceaseless shifting in his chair. The fellow next to him, Earl Sewer, smells worse but doesn’t talk so much, so you find him a little easier to deal with.
   Shirley Ribs sits right next to you, and you and her have been trading cards all day. “Thanks Mr Grid,” she says, as you slide over a couple of units.
   “Pleasure’s all mine,” you say, as she flips a couple of cards over, and sends one spinning over to you and another to McDonald.
   The tension’s been running high for an hour or so as the crowd in the room has grown. People in the audience are shouting for everyone to make moves faster, calculate the odds better. The crowd hisses at shoddy play, having grown less forgiving for visibly bad bets. They say the Chinese have a better game going on next door, so for a while cards are tight as people sling their money into the game next door instead.
   That makes McDonald get restless, and so he starts trying to flip the game by buying up cards from you, then not trading any with Sarah Market, instead just switching back and forth between you and Sewer and Butcher. Market gets angry and starts trying to do a side-deal with the dealer to trade some units for surplus cards from the Chinese game, but the dealer says he doesn’t have the capacity.

You get a handle on it eventually, winning back a few rounds from McDonald while calming him down with the odd bluff. If the game had flipped it would have been the first time in seventeen years of continuous play. Note to self: almost got into trouble there, so deal differently next time.

[Note: these kinds of ‘state art’ performances proliferated for a while, as people sought to dramatize the inner workings of AI systems. In this performance, artist T K Wenzler trawled the market feeds for interactions between AI representatives from a number of retail, infrastructure, and electricity players, then, thanks to a MacArthur grant, bought up some of the more obscure feeds emanating from the trader AIs. He trained the data into representations of each participant, then applied domain confusion techniques to adapt this representation into a gigantic movie corpus, culled from security camera footage of prison card games.

The installation ran 2024-2031. Discontinued due to data feeds becoming un-parseable, after the supreme court ruled for a relaxation of interpretability standards.]

Import AI: Issue 26: Low-wages for robots, AI optometry, and RL agents that tell you what they’re thinking

Deep learning needs discipline: AI researchers need to do a better job of making their experiments comparable with one another by publishing more details about the underlying infrastructure and specific hyperparameter recipes they use, says Google’s Denny Britz. “The difficulty of building upon others’ work is a major factor in determining what research is being done,” he writes. “It’s easiest, from an experimental perspective, to build upon one’s own work… It also leads to less competition”. Researchers can prevent groupthink and enhance replicability by publishing code to go along with their papers, and giving all the details needed to aid replication.

Self-driving cars save lives: Tesla cars with Autopilot installed have a 40% lower crash rate than those lacking the software, according to data the company shared with the National Highway Traffic Safety Administration. Finally, a figure that proves the residents of Duckietown are safer than your average rubber duck…
but self-driving tech may also magnify our selfishness: today, downtown urban driving is frequently fouled up by people that stop their cars and hop into a cafe to grab a drink while their vehicle idles outside, and by the incorrigible optimists that endlessly circle a street waiting for a parking spot to open up. Roboticist Rodney Brooks suspects that when really smart autonomous cars arrive people will tend towards even more of these selfish occurrences, hopping out of their AV to get a latte and telling the car to hover nearby, or autonomously circle for a parking spot. I can see a kind of intermediary future where urban traffic is more unpleasant due to hordes of dutiful vehicles, unwittingly enabling their owners’ selfishness.

AI creates: endlessly replicating cultural artifacts: given enough data, neural networks can learn to generate anything. That points to a future where certain visual classes of object, ranging from comic book characters, to landscape shots, to others, will be partially generated and refined by AI. This blog about using recurrent neural networks to generate Egyptian-esque hieroglyphics is a nice example of that phenomenon in action.

Mutating AI programming languages: Facebook and others have released PyTorch. The AI programming framework implements a technique called ‘reverse-mode auto differentiation’ to make it easier to modify neural networks created using the language. “While this technique is not unique to PyTorch, it’s one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research,” the project writes. It’s open source, naturally.
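Reverse-mode auto differentiation is easy to show at toy scale. The scalar class below is not PyTorch’s implementation (which operates on tensors with a C/C++ backend), but it captures the define-by-run idea: the forward pass records each operation’s local derivatives on the fly, and backward() walks the recorded graph in reverse to accumulate gradients:

```python
# Toy scalar reverse-mode autodiff. Each Var remembers which Vars
# produced it, and the local derivative with respect to each parent.
class Var:
    def __init__(self, value, parents=()):
        self.value, self.grad, self.parents = value, 0.0, parents

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Chain rule: push the incoming gradient to every parent,
        # scaled by the recorded local derivative.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1, dz/dy = x
z.backward()
```

Because the graph is built during execution, ordinary Python control flow (loops, branches) just works – the property that makes define-by-run frameworks pleasant for ‘crazy research’.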

Good morning, HAL, the AI cyclops-optometrist will see you now: Jeff Dean from Google likes to say that computers have recently ‘begun to open their eyes’. That’s in reference to the powerful image recognition algorithms we’ve developed in the past half decade. But what we’re lacking for these computers is an optometrist – researchers don’t have a good understanding of the characteristics of computer vision, and much of our research is made up of trial-and-error as much as theory…
…Now, academics from the University of Toronto are trying to change this with a paper that analyzes the structure of the effective receptive field in neural networks. Their work finds interesting parallels between how receptive fields behave in convolutional neural networks versus in mammalian visual systems, and provides clues as to ways to increase the efficiency of future networks. Techniques like this, paired with ones like the spatially adaptive computation time paper, promise a future where our computers can see more efficiently, and we can work out how to tune them based on a more rigorous theoretical understanding of their unblinking ‘eyes’.

AI and automation: “The technology is not the problem. The problem is a political system that doesn’t ensure the benefits accrue to everyone,” says Geoff Hinton. In potentially related news, regulator-flouting self-driving car startup comma ai wants to ‘build the largest AI data collection machine in history’.

Cost per hour for a typical industrial robot, according to Kuka: 5 Euros
Cost per hour for worker to do similar job:
…Germany: 50 Euros
…China: 10 Euros
…“It took 50 years for the world to install the first million industrial robots. The next million will take only eight,” reports Bloomberg.

Mysterious hippocampal signals: scientists have conducted a study of the firing of hippocampal place cells in mice. (Place cells tend to fire in response to the living entity being in a specific location, hence the name). The experiment suggests that place cells may encode some other type of information, along with geographical markers. Further analysis here will lead to more clues about how the brain represents information. We already know that London taxi drivers store a mental map of the city in the hippocampus (which appears to have an enlarged volume as a consequence) — perhaps the place cells could also function as a geographically-indexed store of bawdy jokes?

22nd Century Children’s Books: the Entertainment Intelligence Lab at Georgia Tech has trained a reinforcement learning agent to shout about its thoughts and plans as it plays classic game Frogger. “Looking forward to a hopping spot to jump to catch my breath,” it says. Good luck, Froggo!
…This kind of work could help solve the interpretability issues of AI, by making the thought processes of AI agents easier for people to diagnose and analyze…
… I can also imagine building a new form of children’s entertainment with this technology, where the characters are RL agents and they shout about their goals and ideas as they proceed through dynamically generated worlds.

Megacorps as powerful as countries: “I was recently together with the Prime Minister of quite an important country who told me there are three or four powers left in the world. One is US, one is China, and the other is Alphabet,” Klaus Schwab told Alphabet co-founder Sergey Brin, during a conversation in Davos. (Because you can’t be right all the time bonus: Brin said he mostly ignored AI at Google in the early days, only later realizing its huge importance.)

AI system gets FDA approval: Arterys has gained clearance from the US Food and Drug Administration to market Cardio DL, software that uses AI to automatically segment images taken from cardiac MRIs. Another reminder that AI technology moves very rapidly from research into production.

After the apocalypse, the data centers shall continue: this fluffy, PR video from Amazon Web Services reminds me of the tremendous investments that Amazon, Google, Microsoft, Facebook, and others have made into renewable energy infrastructure; from AWS’s fleets of solar panels, to Google’s stake in the Ivanpah solar power facility, to Facebook’s air-cooled arctic circle enclave, a new baroque landscape is taking shape, in service of the neo-feudal empires of the digital world…
…And should some calamity strike, we can imagine that the computers in these football field-sized computer cathedrals will be the last to turn off. However, the inefficient, closed-circuit environments of legacy data centers will probably be the last to house human life, as depicted in this short story by Cory Doctorow called ‘When Sysadmins Ruled the Earth’. Quick, befriend a sysadmin at a non-tech company!

OpenAI bits&pieces:

OpenAI’s Tom Brown will be speaking at the AI By the Bay conference in San Francisco in March. Readers can get 20% off tickets for the conference by heading over to this link and using the promo code ‘OPENAI20’.

Tech Tales:

[2035, Moonbase Alpha, the Moon]

Two astronauts sit in front of a 6-foot wide and 3-foot tall screen. The main lights are out, and their faces are lit by the red strobing of the emergency system.

“How long has it been like this,” says one of the astronauts.
“About two hours,” says the other. “The executable came in through the comm relay. They encoded it in transmission intervals on some of the automated logistics channels. Which means-”
“-which means that they’d already bugged the software when it was installed, so it could receive the payload.”
“Yup.”
Both astronauts lean back and stare at the screen. One of them places their hand across their face and squints through their fingers at the images rolling across the monitor.

ERROR. DEATH INEVITABLE! scrolls across the screen. The text blinks out, replaced by a fuzzy image of an astronaut wearing a priest’s ID patch and no helmet standing in an airlock. The screen shimmers and, next to the priest, appears a teenage girl, also lacking a helmet. Now a helmet materializes in the air, hovering between them. Green circles flash over their faces, flickering as the AI tries to pick who to save. The text appears again: ERROR. DEATH INEVITABLE!

“We’ve gotta burn it,” says one of the astronauts. “Go full analogue and rebuild the base from the ground up.”
“But that’ll take weeks!”
“We don’t have a choice. The longer we wait the worse the damage is going to be. It’s already started shunting oxygen into different airlocks. Next, it might start opening some of the doors.”

Class note: These kinds of ‘trolley problem’ viruses proliferated during the late 2020s and early 2030s, before the UN mandated AI systems be installed with their own moral heuristics, codename: ETHICS WARDENS.

Import AI: Issue 25: Open source neural machine translation, Microsoft acquires language experts Maluuba, Keras tapped for TensorFlow

If this, then drive: self-driving startup NuTonomy is using a complex series of rules to get its self-driving cars in Singapore to drive safely, but not be so timid that they can’t get anywhere. Typically, AI researchers prefer to reduce the number of specific rules in a system and instead try to learn as much behavior as possible, inferring proper codes of conduct from data gleaned from reality. NuTonomy’s decision to hand-code a hierarchy of rules into its system provides a notable counterpoint to the general trend towards learning everything from data. The company plans to expand its commercial offering in Singapore next year, though its cars will still be accompanied by a human ‘safety driver’ — for the time being.

Disposable lifesaving drones: Otherlab is building disposable drones with cardboard skins, as part of a research program funded by DARPA. The drones lack an onboard motor and navigate by deforming their wing surfaces as they glide to their targets.
…perhaps one day these cardboard drones will fly in swarms? Scientists have long been fascinated by the science of swarms because they afford distributed resiliency and intelligence. The US military has recently highlighted how swarms of drones can perform the job of much larger, more expensive, single machines. I wonder if we’ll eventually develop two-tiered swarms, where some specialized functions are present in a minority of the swarm. After all, it works for ants and bees.

AI acquisitions: Amazon quietly acquired security startup Harvest.AI, according to Techcrunch. Next, Microsoft acquired Canadian AI startup Maluuba…
…Maluuba has spent a few years conducting research into language understanding, publishing research papers on areas like reading comprehension and dialogue generation. It has also released free datasets for the AI community, like NewsQA
…Deep learning stalwart Yoshua Bengio will become an advisor to Microsoft as part of the Maluuba acquisition – quite a coup for Microsoft, though worth noting Bengio advises many companies (including IBM, OpenAI, and others). This might make up for Microsoft losing longtime VP Qi Lu, who had worked on AI for the company and is now heading to Baidu to become its COO.

Sponsored: RE•WORK Machine Intelligence Summit, San Francisco, 23-24 March – Discover advances in Machine Learning and AI from world leading innovators and explore how AI will impact transport, manufacturing, healthcare and more. Confirmed speakers include: Melody Guan from Google Brain; Nikhil George from Volkswagen Electronics Research Lab and Zornitsa Kozareva, from Amazon Alexa. The Machine Intelligence in Autonomous Vehicles Summit will run alongside, meaning attendees can enjoy additional sessions and networking opportunities. Register now.

Keras gets TensorFlow citizenship: high-level machine learning library Keras will become an official, supported third-party library for TensorFlow. Keras makes TensorFlow easier to use for certain purposes and has been popular with artists and other people who don’t spend quite so much time coding. Anything that broadens the number of people able to fiddle with and contribute to AI is likely to be helpful in the short term. Congratulations to Keras’s developer Francois!

Don’t regulate AI, have AI regulate the regulators: Instead of regulating AI, we should create ‘AI Guardians’ – technical oversight systems that will be bound up in the logic of the AIs we deploy in the world, says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence. (Etzioni doesn’t rule out all cases of regulation but, as with what parents say about sugar or computer games, his attitude seems to be ‘a little bit goes a long way’.)

Self-driving car deployment, AKA Capitalism Variant A, versus Capitalism Variant B: “Industry and government join hands to push for self-driving vehicles within China,” reports Bloomberg, as Chinese search engine Baidu joins up with local government-owned automaker BAIC to speed development of the technology….
… Meanwhile, in America, the Department of Transport has formed a federal Committee on Automation, which gathers people together to advise the DOT on automation. Members include people from Delphi Automotive, Ford, Zipcar, Zoox, Waymo, Lyft, and others. “This committee will play a critical role in sharing best practices, challenges, and opportunities in automation, and will open lines of communication so stakeholders can learn and adapt based on feedback from each other,” the DoT says…

Open Source Neural translation: Late in 2016 Google flipped a switch that ported a huge chunk of its translation infrastructure over to a Multilingual Neural Machine Translation system. This tech combined the representations of numerous languages into a big neural network, and let you translate between pairs that you didn’t have raw data for. (So, if you had translations for English to Portuguese, as well as ones for Portuguese to German, but no corpus of English to German, this system could attempt to bridge the gap by tunneling through the joint representations from its Portuguese expertise.)…
…Now, researchers Yoon Kim and harvardnlp, have released an open source neural machine translation system written in Torch, so people can build their own offline, non-cloud translation systems. The Babelfish gets closer!
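The bridging trick can be caricatured with toy vectors: if words from every language land near shared ‘concept’ points in one embedding space, translating between an unseen pair reduces to a nearest-neighbour lookup through that space. All of the words, vectors, and the cosine lookup below are hypothetical illustration, not Google’s (or harvardnlp’s) architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Four shared "concept" points; each language's word vectors are
# noisy copies of them, standing in for a learned joint representation.
concepts = rng.normal(size=(4, 8))
noise = lambda: rng.normal(scale=0.05, size=(4, 8))

en = dict(zip(["dog", "house", "water", "tree"], concepts + noise()))
de = dict(zip(["Hund", "Haus", "Wasser", "Baum"], concepts + noise()))

def translate(word, src, dst):
    # Nearest neighbour by cosine similarity in the shared space.
    v = src[word]
    return max(dst, key=lambda w: dst[w] @ v /
               (np.linalg.norm(dst[w]) * np.linalg.norm(v)))
```

No English-German pairs exist anywhere in this setup; the lookup works only because both languages were embedded near the same concept points.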

AI, AI everywhere, and not a Bit of information to send: our automated future consists of many machines and little human-accessible information, according to this airport-hell tale from Quartz. Technology that seems efficient in the aggregate can have exceedingly irritating edge case failures.

$27 million for AI research: Reid Hoffman, Pierre Omidyar, the Knight Foundation, and others, have put $27 million toward funding research into ethical AI systems. The funds will support research that combines the humanities with AI, and will help answer questions about how to communicate about the capabilities of the technology, what controls should be placed over it, and how to grow the field to ensure the largest number of people are involved in the design of this powerful technology, among others.

Power-sipping eyes in the sky: the US military says it’s pleased with the performance of IBM’s neuromorphic TrueNorth processor. The chip performs on par with a traditional high-end computer on AI-based image identification tasks, while consuming between one twentieth and one thirtieth the power of an NVIDIA Jetson TX1 processor, apparently. This represents another endorsement of IBM’s idea that non-von Neumann architectures are needed for specialized AI chips. However, deploying software on the chip can be more laborious than going via NVIDIA’s well-supported ecosystem, the military says.

Deep learning is made of people! Startup Spare5 has raised $14 million and renamed itself to Mighty AI, as it looks to capitalize on the need for better training data for AI. It will compete with companies like Crowdflower and services like Amazon’s Mechanical Turk to offer companies access to a pool of people they can tap to label data for them. One note to remember: for research, it’s possible to mostly use public datasets when developing new techniques, but for commercial products you’ll typically need highly-specific labelled data as you build products for specific verticals.

Never underestimate the pre-Cambrian computing power of government: a friend of my Dad’s told me a few years ago that he was maintaining some old UK National Health Service systems by writing stuff for them in BASIC – something I recollect whenever I have cause to visit a UK emergency room. It’s almost reassuring that the White House is no different. “We had a computer on our desk. We didn’t have laptops, we didn’t have iPads, we didn’t have iPhones, and we had about a half a bar of service. So if you brought in your own equipment, you couldn’t use it…We had Compaqs running Windows 98 or 2000. No laptops. It was like we had gone back in time,” staffers recall. Technology takes a long time to turn over in large bureaucracies, so while we’re all getting excited about AI it’s worth remembering that uptake in certain areas will be sl-oooo-wwww.

Computer, enhance: just a year ago, researchers were getting excited about deep learning-based techniques to upscale the resolution of photos. These methods work, roughly, by showing a neural network loads of small pictures and their big-picture counterparts, training it to infer the high-resolution details from low-resolution inputs. You wouldn’t want to use this to increase the resolution of keyhole satellite photos of foreign arms dumps (as any new or errant information here could have extremely unpleasant consequences), but you might want to use it to increase the size of your wedding photos…
Twitter appeared to be enthused by this technique when it acquired UK startup Magic Pony, which had done a lot of research in this area. Now Google is tapping the same techniques to save 75% of bandwidth for users of Google+ by using its RAISR tech, which it first talked about in November. Another demonstration of the rapid rate at which research goes into production within AI.
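One reason these upscalers took off is that their training data can be manufactured for free: take any high-resolution image, shrink it, and train the network to invert the shrinking. A minimal numpy sketch of how such a training pair might be built (toy 4x4 patch; a real pipeline would use bicubic resampling on full photos):

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool a grayscale image by `factor` to make the low-res input."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A toy 4x4 "high-res" patch and its 2x2 low-res counterpart; the network
# learns the mapping lo -> hi from millions of such pairs.
hi = np.arange(16, dtype=float).reshape(4, 4)
lo = downscale(hi)
```

The supervised target is simply the original image, which is why no hand-labelling is needed for this task.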

Think AI is automated? Think again. You’ve heard of gradient descent – the optimization procedure used to train most modern AI systems by propagating error information back through them. Well, there’s a joke among professors that for sufficiently hard problems you also turn to another lesser-known but equally important technique called ‘Grad Student Descent’, the formula of which is roughly:
Solution = (N post-doc humans * (Y ramen * Z coffee))…
… so as much as the research community talks about new techniques based around learning to learn, and getting AI to smartly optimize its own structure, it’s worth remembering that most real world applications of the technology rely more on the ingenuity of people than of the amazing power of the algorithms…
…David Brailovsky, who recently won a traffic light classification competition, explains that “The process of getting higher accuracy involved a LOT of trial and error. Some of it had some logic behind it, and some was just ‘maybe this will work’.” Tricks tried included rotating images, training with a lower learning rate, and, inevitably, finding and correcting bugs in the underlying dataset. (Hence the business opportunity for aforementioned companies like Mighty AI, Crowdflower, and so on.)
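One of those tricks, rotating images, is a standard data-augmentation move: each training image yields several label-preserving variants for free. A minimal numpy sketch (toy array standing in for a photo; real pipelines rotate actual images and often add flips and crops too):

```python
import numpy as np

def augment(img):
    """Return the image plus its three 90-degree rotations,
    multiplying the effective training set four-fold."""
    return [np.rot90(img, k) for k in range(4)]

patch = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
views = augment(patch)  # four arrays: original, 90, 180, 270 degrees
```

Whether a given augmentation preserves the label is itself a judgment call – more grad student descent.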

OpenAI bits&pieces:

What does it mean to be the CTO of OpenAI, and how did that role come about? Co-founder Greg Brockman explains. Shame he gave away his trick about deadlines, though.

Tech Tales:

[2019: A cafe, somewhere in the Baltics.]

So it comes down to this: after two years of work, you just write a few lines, and shift the behavior of, hopefully, millions of people. But you need to get this exactly right, or else the algorithms could realize the charade and you burn the accounts for almost no gain, he thinks, hands hovering above the keyboard. He’s about to send out a very particular product endorsement from the account of a famous Internet personality.

He spent years constructing the personality, building it up from the dry seeds of some long-inactive, later-deleted Tumblr and Instagram accounts. Now the ghost has grown into a full internet force, with fans and detractors and even a respectable handful of memes.

The next step is product endorsement – and it’s a peculiar one. SideKik, as it’s called, will give the ghost-celeb’s followers the chance to give control over a little bit of their online identity to a small AI, said to be controlled by the celebrity. Be a part of something bigger than yourself!, he wants the celebrity to say and the fans to think, download SideKik and let’s get famous together!

What the fans don’t know is that if they sign up for SideKik they won’t be gaining the subtle, occasional input of the celebrity; instead they’ll become an extension of the underlying thicket of AI systems, carefully sculpted and maintained by the man at the keyboard. Slowly, they’ll be used to gather microscopic shreds of data from the internet through targeted messages to their own followers, and to create the appearance of certain trends or inclinations in specific groups online. The anti-AI detectors are getting better all the time now, so it takes all this work just to create the facsimile of a real community orbiting a real star. Thanks to the spike in illegitimate traffic from automated AI readbots, typical internet ads have become so common and so abused as to be almost worthless, so what’s a marketer meant to do?, he thinks, composing the next few words that could give him a legion of unsuspecting guerrilla marketers.