Import AI: Issue 32: Evolution meets Deep Learning, busting AI hype, and the automatic analysis of cities.

by Jack Clark

ImageNet, meet MoleculeNet: in AI, datasets are a leading indicator of the kinds of problems that we think machines can solve. When the ImageNet dataset was released in the late oughts it signaled that Fei-Fei Li and her colleagues felt computers were ready to tackle a large-scale, multi-category image and object identification challenge. They were right – the dataset motivated people to try new approaches to cracking it, and partially led to the deep learning breakthrough result of 2012. Now comes MoleculeNet, a dataset which suggests AI may be ready to rapidly analyze molecules, learn their features, and classify and synthesize new ones…
….the same goes for HolStep, a new dataset released by Google that consists of thousands of Higher-Order Logic proofs – machine-readable assertions about mathematics and what is true and what is not. This means Google thinks AI may be ready to be unleashed on the exploration of math theorems.

You get an AI Lab and you get an AI Lab and… Pinterest gets an AI lab.

AI and jobs – tension ahead: “Economists should seriously consider the possibility that millions of people may be at risk of unemployment, should these technologies be widely adopted,” says a post on Bank Underground, a semi-official blog from staffers for the UK Bank of England. “We argue that the potential for simultaneous and rapid disruption, coupled with the breadth of human functions that AI might replicate, may have profound implications for labour markets,” it says.

Republican-voting cities are full of pickup trucks, an AI trained on Google Street View figures out. Why not use AI to augment the results of expensive, time-consuming door-to-door surveys? That’s the intuition of researchers with Stanford, the University of Michigan, Baylor College of Medicine, and Rice University, who have used AI to determine socioeconomic trends from 50 million Google Street View images of 200 American towns. This being America, the researchers focus on gathering data about the motor vehicles in each city, and find them to be a statistically significant indicator for factors like political persuasion, demographics, and socioeconomic status.

Automated sexism analysis: academics and actors have worked together to create the Geena Davis Inclusion Quotient (GD-IQ) tool, which uses machine learning to analyze the representation of gender in movies. GD-IQ was fed 100 of the top-grossing movies of all time and it found that men are seen and heard nearly twice as much as women. But there’s one genre where women are seen on screen more frequently than men: horror films. Aaaahhh! (Now we just need audio trawling systems to improve enough for us to run an automated Bechdel test on the same corpus.)

The overmind sees all of your retail failings: Orbital Insight has used machine learning techniques to analyze satellite photos of cars in parking lots at J.C. Penney stores across America and detect a 10 percent year-over-year fall in usage.

Help build Keras: if you want to make Keras even better, then its creator Francois Chollet has a fun laundry list of work for you to do, ranging from writing unit tests, to porting examples to the new API. It takes a whole village to create a framework – lend a keyboard.

Murray’s on the move: Murray Shanahan is joining DeepMind, though he’ll remain at Imperial College as a part-time supervisor for PhDs and postdocs. Murray recently co-authored a paper seeking to unite symbolic AI with reinforcement learning. That would seem to align with DeepMind’s success at pairing traditional AI methods (Monte Carlo Tree Search) with deep learning methods in AlphaGo.

AI compression: Netflix claims it’s able to use neural network compression approaches to reduce the size of the footage it pipes over the internet to you without sacrificing much visual quality. Sounds similar to Twitter acquisition Magic Pony, which uses ‘superresolution’ techniques to automatically upscale shoddy pictures and (I’m guessing) videos.

A neural network watermark – just what the IP lawyers asked for: research on ‘Embedding Watermarks into Neural Networks’ gives people a way to subtly embed a kind of digital watermark into a neural network without impairing performance. This potentially makes it easy for companies to track trained models as they propagate across the internet and, to the groans of many DIY enthusiasts, issue takedown requests for AI built out of infringing content.
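The core trick is an embedding regularizer: a secret key matrix projects a layer’s weights to a set of logits, which are pushed toward the watermark bits alongside the normal task loss, and anyone holding the key can later read the bits back out. Here’s a minimal numpy sketch of that idea – the task loss is omitted and all sizes are illustrative, so treat it as a toy, not the paper’s exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a flattened layer of 256 weights, a 32-bit mark.
n_weights, n_bits = 256, 32
w = rng.normal(0, 0.1, n_weights)         # stand-in for trained weights
bits = rng.integers(0, 2, n_bits)         # the secret watermark
X = rng.normal(size=(n_bits, n_weights))  # secret key matrix

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Embed: gradient descent on a binary cross-entropy regularizer that
# nudges sigmoid(X @ w) toward the watermark bits. In real training
# this term is added to the task loss, so accuracy is barely affected.
lr = 0.01
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= lr * (X.T @ (p - bits))  # gradient of the BCE regularizer

# Extract: anyone holding the key X can recover the bits.
recovered = (X @ w > 0).astype(int)
print((recovered == bits).all())  # True
```

Because the mark lives in the weights themselves rather than the outputs, the paper argues it can survive fine-tuning and pruning to a degree – which is what would make tracking propagated models feasible.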

Cobalt Robotics – your new, fancy looking security guard: the main problem I have with security guards is their lack of lovingly sculpted plastic bevels and felt coverings. It seems Cobalt Robotics has heard of my problem and invented a robot to fix it. The company’s security bots are designed to patrol offices and museums, using their onboard software to detect changes, such as intruders or the movement of suspicious objects. “Each robot has super-human sensors with perfect recall and an auditable history of where it was and what it saw,” Cobalt writes.

Spotting tumors with deep learning: Google has trained an AI system to localize tumors in images of potentially cancerous breast tissue. It claims the system is able to surpass the capabilities of human pathologists who are given unlimited time to inspect the slides…
…accuracy of Google’s deep learning based tumor localization: 89%
…accuracy of a human pathologist given unlimited time to inspect the same images: 73%
…Related: Tel Aviv startup Zebra Medical says it can use AI to detect some types of cancerous cells with 91 per cent accuracy, versus 88 per cent for a trained radiologist. “In five or seven years, radiologists won’t be doing the same job they’re doing today,” says founder Elad Benjamin. “They’re going to have analytics engines or bots like ours that will be doing 60, 70, 80 per cent of their work.”

The unmanned drone future. Military sales from now till 2025:
Unmanned ground vehicles… 30,000
Unmanned aerial vehicles… 63,000
…”With technology advancing at such a pace, a myriad of applications will unfold limited only by the imagination of the designer,” writes Jane’s Aerospace Defense and Security.

Estonia passes law allowing for countrywide testing of robocars: Estonia passed a law this week letting anyone test robot cars on its ~58,000 kilometers of roads, as long as they’re accompanied by a human to take over in case things go wrong.
…meanwhile, Virginia has passed a state law permitting delivery robots to operate on sidewalks. People are required to monitor the robot and take over if things go wrong, but don’t need to be within line of sight or anything. Similar laws are on the table in Idaho and Florida.

JP Morgan automates the interpretation of commercial loan agreements via new software called COIN. This is something that previously consumed 360,000 hours of human labor a year at the firm. There are other initiatives as well, with bots now doing the work of 140 people, JP Morgan says.

Evolving deep neural networks at the million-CPU scale… Scientists at the University of Texas at Austin and Sentient Technologies have extended NEAT, an evolutionary optimization technique first outlined in 2002, to be capable of evolving not only different neural network structures but also their hyperparameters (the numbers AI researchers typically calibrate via a mix of intuition and knowledge to get the AI to work). The research, Evolving Deep Neural Networks, is in a similar spirit to Google’s “Neural Architecture Search” paper, though it uses genetic algorithms to evolve the structure of the neural networks, while Google evolved its architectures via reinforcement learning. The approach yields results with a classification error of 7.3% on the CIFAR image classification task, compared to around 6.4% for the current state of the art. They’re also able to use the same technique to evolve an LSTM to conduct language modeling tasks, demonstrating the apparent generality of the approach.
… so, what’s the point of evolving stuff rather than designing it? The thesis is that we can use this technique to throw a load of computers at a hard problem and have the AI evolve to a decent system, without people needing to calibrate it…
…the researchers applied the tech to an image captioning system for an unspecified magazine website (though the image example on page 6 looks exactly like one on a Wired website credited to a Wired photographer). They claim the resulting architecture has performance on par with, or slightly exceeding, the quality of hand-tuned approaches…
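For a flavor of what evolving rather than designing looks like, here’s a toy genetic algorithm in that spirit, evolving two hyperparameters against a stand-in fitness function. In the real system each genome would be scored by actually training a network (with the topology also under evolution); every function and number below is illustrative:

```python
import random

random.seed(0)

# Stand-in for "train a network, return validation error". Pretend the
# best settings are learning rate 1e-3 (exponent -3) and width 128.
def fitness(genome):
    lr_exp, width = genome
    return (lr_exp + 3) ** 2 + ((width - 128) / 64) ** 2

def mutate(genome):
    lr_exp, width = genome
    return (lr_exp + random.gauss(0, 0.3),
            max(8, int(width + random.gauss(0, 16))))

def crossover(a, b):  # uniform crossover, as in classic GAs
    return tuple(random.choice(pair) for pair in zip(a, b))

pop = [(random.uniform(-6, -1), random.randint(8, 512)) for _ in range(20)]
for generation in range(30):
    pop.sort(key=fitness)
    parents = pop[:5]  # truncation selection: keep the fittest five
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(15)]

best = min(pop, key=fitness)
print(best)  # drifts toward lr_exp ≈ -3, width ≈ 128
```

Selection plus mutation is all it takes to climb the fitness landscape; the expensive part in practice is that every fitness evaluation is a full training run, which is why this kind of work needs millions of CPUs.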
…A GIANT, INVISIBLE, GLOBAL SUPERCOMPUTER: The researchers also give more detail about the infrastructure Sentient has been building for its massively distributed financial trading and product suggestion services. The system, named “DarkCycle”, currently utilizes 2M CPUs and 5,000 GPUs around the world, resulting in a peak performance of 9 petaflops. (That would make DarkCycle’s processing power roughly equivalent to the 10th fastest system in the world, though its distributed nature means that latency leaves it far less powerful, FLOP for FLOP, than a full HPC rig.)
ANOTHER, EVEN BIGGER, INVISIBLE, GLOBAL SUPERCOMPUTER: Google researchers published a paper on Friday called “Large-Scale Evolution of Image Classifiers.” They show that evolution can be used to evolve image classification systems with performance approaching some of the best hand-tuned systems…
…Google’s best single model had a test accuracy on the CIFAR-10 image dataset of 94.1 percent, close to hand-tuned approaches. But it came at great computational cost: this system alone represented the outcome of 9 × 10^19 floating point operations – tens of exa-operations in total – expended over hundreds of hours of training. This represents “significant computational requirements,” Google says. Go figure!
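A quick back-of-envelope conversion shows what that budget means as a sustained rate – note the 250-hour figure is my assumption for “hundreds of hours”, not a number from the paper:

```python
# Google reports 9e19 floating point operations in total for the run.
total_flop = 9e19
hours = 250  # assumed stand-in for "hundreds of hours"
avg_rate = total_flop / (hours * 3600)  # sustained FLOP/s

print(f"{avg_rate:.1e} FLOP/s")  # 1.0e+14, i.e. ~100 teraflops sustained
```

In other words, on the order of a hundred teraflops held continuously for weeks of wall-clock time – modest as a rate, enormous as a total.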
… these systems likely herald the recombination of evolution and deep learning approaches, which may yield further interesting cross-pollinated breakthroughs…
…given that DNNs are generic function approximators, these two research publications suggest that evolution may be a viable strategy for training systems of comparable performance to hand-made ones, without needing as much specific domain expertise.
… the conclusion to this research paper is worth quoting at length: “While in this work we did not focus on reducing computation costs, we hope that future improvements to the algorithms and the hardware will allow for more economical implementations. In that case, evolution would become an appealing approach to neuro-discovery for reasons beyond the scope of this paper. For example, it “hits the ground running”, improving on arbitrary initial models as soon as the experiment begins. The mutations used can implement recent advances in the field and can be introduced without having to restart an experiment. Furthermore, recombination can merge improvements developed by different individuals, even if they come from other populations. Moreover, it may be possible to combine neuro-evolution with other automatic architecture discovery methods.”

Bursting the AI hype bubble: “The accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening,” writes Stanford computer scientist Jerry Kaplan in the MIT Technology Review. “‘True’ AI requires that the computer program or machine exhibit self-governance, surprise, and novelty,” writes Ian Bogost in The Atlantic.
…I’d say that Kaplan’s point can be partially refuted by the tremendous tendency for reusability in today’s AI systems. For instance, the evolution research outlined above suggests we can actually design very large, very sophisticated systems in an end-to-end way – we’re starting to grow rather than assemble our AIs. Far from being “cobbled together”, these machines are more like an interlocking set of components whose interfaces are fairly well understood, but which are being developed at different rates. I’d also argue that some modern AI systems are starting to show the faintest traits of (controlled, highly limited) self-governance via capabilities like the automatic identification and acquisition of auxiliary goals, as outlined in DeepMind’s “UNREAL” research.

All watched over by machines of loving Facebook grace: Facebook has trained its AI systems to spot indicators of suicide in posts people make, and is using that data to proactively send alerts to its community team for review. “A more typical scenario is one in which the AI works in the background, making a self-harm–reporting option more prominent to friends of a person in need,” Buzzfeed reports. The system apparently sets off fewer false alarms than people and has greater accuracy…
…using AI to flag potential suicides seems like an unalloyed social good, but what unnerves me is that the same techniques could be used to flag people indulging in political discourse that diverged massively from the norm, or any other behavior which steps out of the invisible lines created by the consensus generated by a platform containing the data of over a billion people. It’s always worth keeping in mind that for every Facebook with (in this case) altruistic intentions, there are other parties who may have different values and priorities.

OpenAI bits&pieces:

OpenAI’s Tom Brown will be giving a talk on OpenAI Gym and Universe at AI By the Bay on Wednesday, March 8.

Tech tales:



*PROJECT_OVERVIEW*: LAB BENCH was a research program into the evolution of hostile, autonomous, electronic threats. LAB BENCH consists of the GROUND_TRUTH threat site and, since 2031, the DENIAL RING. Projects BLACK_BRIDGE and NET_SIM were retired following the 2031 UNAUTHORIZED_EXCURSION event. The goal of LAB BENCH was to create a synthetic, digitally hostile urban environment, meant to mirror the changing, semi-autonomous, swarm intelligence approaches being fielded by foreign military powers. The site was frequently used for training and, later, AI software experimentation.

STATUS: Recategorized as ACTIVE_THREAT_SITE in 2031. Now overseen by XXXXX and XXXXXX.


2015: Full-scale model city built for nuclear attack and disaster response simulations repurposed as military software attack and countermeasure testing site.

2020: Installation of high-bandwidth fiber, comprehensive automation suites for synthetic traffic and pedestrian movement, and high proportion of ‘lights out’ infrastructure. Addition of AI hacking and counter-hacking software for testing and development.

2025: High-performance computing cluster installed.

2028: Installation of large group of robotic workers and robust closed-loop renewable energy systems. DARPA starts public grant to benefit parallel LAB BENCH R&D. RFI put out for CITY SCALE FORMAL VERIFICATION OF DYNAMIC, MOBILE IOT DEVICES. Budget: $80 million.

2029: Automated manufacturing and mining facilities installed. City disconnected from global internet, air-gapped onto own private network. Significant retrenching of fiber in larger surrounding area draws several media articles, subsequently censored.

2030: Upgrade to learning substrate of GROUND_TRUTH computer network. Addition of software for evolutionary methods of optimization, and techniques for unsupervised auxiliary task identification and acquisition.

2031: Reclassified as ACTIVE_THREAT_SITE following unauthorized excursion of CLASSIFIED from GROUND_TRUTH. Current status: Unknown

DENIAL RING: Created 2031 following the UNAUTHORIZED_EXCURSION event from GROUND_TRUTH. Consists of 12 Forward Operating Bases arranged in a dodecagon configuration around the perimeter of GROUND_TRUTH, with a one-mile zero-electronic air gap to prevent transference events. Each base is fully automated and contains a significant amount of artillery and munitions along with sophisticated kinetic and electronic countermeasures. Strategic deterrent ‘LoiterSquad’ located at nearby CLASSIFIED location.


Mobilization: Normal

2:00:00 Two drones sighted taking off from center of GROUND_TRUTH. IDs queried against global database: no matches. ID string is unconventionally formatted. Drones of unconventional appearance. Pictures queried against global database: Partial matches across 80 different models of drones. Further query: multiple manufacturers linked to GROUND_TRUTH equipment contracts.
Mobilization: Satellites auto-notified.

2:00:50 Unidentified Drones fly together to North East border of GROUND_TRUTH. Drones do not respond to electronic hails. City telemetry extracts no useful information from them. FOBs unable to acquire signals from drones for automatic shutdown.

2:01:00 Range of frequencies in RF BAND begin emanating from 64 locations across GROUND_TRUTH.

2:01:30 Unidentified Drones reach GROUND_TRUTH’s perimeter.
Mobilization: SECCOM notified.

2:03:50 Unidentified Drones begin crossing one mile air gap toward North West edge of DENIAL RING, leaving GROUND_TRUTH borders.
Mobilization: Nearby military aircraft notified. NRO notified.

2:04:05 Unidentified Drones destroyed by precision munitions from Forward Operating Bases #9, #10, #11.

2:05:11 DENIAL RING drone squadrons and ground vehicles cease automatic electronic telemetry reporting.

2:05:12 Countermeasures of FOBs #3, #1, #4, #5, #9 come under fire from non-responsive DENIAL RING drone squadrons and ground vehicles.

2:05:15 Three fleets of Unidentified Drones take off from GROUND_TRUTH.
Mobilization: Strategic deterrent codename LoiterSquad activated.

2:05:27 Remaining FOBs come under fire. Countermeasures of FOBs #3, #1, #9 fail.
Mobilization: Nearby SEAL team put on high alert.

2:05:32 FOBs fire on fleets of drones traveling out from GROUND_TRUTH. One fleet destroyed, two others unharmed. All FOBs’ targeting corrupted by computer virus of unknown origin.

2:05:39 Second drone fleet destroyed by fire from FOBs #10, #11.

2:05:45 Remaining drone fleet passes out of range of close-impact munitions from all FOBs.

2:05:59 Drone fleet passes beyond the range of all conventional weaponry.

2:06:00 All FOBs go offline from computer virus of unknown origin.

2:06:01 Satellite footage shows unidentified unmanned ground vehicle platforms emerging from warehouses in center of GROUND_TRUTH and driving toward city edges. No IDs.

2:06:02 Non-responsive DENIAL RING drones begin to fly North on bearing consistent with CLASSIFIED LOCATION.
Mobilization: LoiterSquad given go-ahead for mission completion.

2:06:04 Unidentified convoy begins to advance across DENIAL RING air gap.

2:06:04 LoiterSquad deterrent impacts center of GROUND_TRUTH.

2:06:05 Status of GROUND_TRUTH and DENIAL RING unknown due to debris.

2:06:50 Satellite confirmation of total destruction of specified land area.

2:20:00 SEAL team arrives and begins visual sweep of area. No sightings.