Import AI: Issue 18: Snooper’s Charter & AI, MXNet, and Microsoft’s Quantum Computer Bet

by Jack Clark

Delicious data for the state-backed Deep Learning gods: the UK passed the Investigatory Powers Act 2016, known colloquially as the ‘snooper’s charter’, into law. It forces internet providers to keep a record of all websites visited by all people in the country for up to one year. Let that sink in for a minute! Now have a gin & tonic and a lie down. Better? OK! Since this also includes a temporal component of which websites (domains, not specific pages) people visit, it will give the government an incredibly useful dataset to run complex AI-based inference algorithms on. Spooky agencies will be able to divine odd traits about the national mood by analyzing the rise and fall of the popularity of certain websites, and it’ll be possible to profile people and group them according to their habits, then analyze their activities and watch for correlations or disconnects with other groups. The applications of modern ML techniques to this sort of data are vast and disquieting.
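To make the profiling risk concrete, here is a minimal hypothetical sketch (all names and data invented; a real ISP record would also carry timestamps, enabling the temporal analysis described above) of how domain-visit logs could be turned into habit profiles and grouped by similarity:

```python
from collections import Counter
from math import sqrt

# Toy per-person browsing logs, one domain per visit. Invented data.
logs = {
    "person_a": ["news.example", "chess.example", "news.example"],
    "person_b": ["news.example", "chess.example"],
    "person_c": ["gardening.example", "recipes.example"],
}

def profile(visits):
    """Turn a visit log into a unit-normalized domain-frequency vector."""
    counts = Counter(visits)
    norm = sqrt(sum(c * c for c in counts.values()))
    return {d: c / norm for d, c in counts.items()}

def similarity(p, q):
    """Cosine similarity between two domain-frequency profiles."""
    return sum(p[d] * q.get(d, 0.0) for d in p)

profiles = {name: profile(v) for name, v in logs.items()}
print(similarity(profiles["person_a"], profiles["person_b"]))  # high
print(similarity(profiles["person_a"], profiles["person_c"]))  # 0.0
```

Scaled from three toy users to an entire country, the same few lines of linear algebra are what make this dataset so disquieting.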

Ethical machine learning: should we conduct experiments, even if they seem to be offensive? The answer to that question was ‘yes’ from a few Import AI readers, who took issue with my characterization of the ‘automated inference on criminality using face images’ paper from last week. Some readers pointed out that this could be an interesting experiment to run, and I countered by saying I’d need a much larger section of the research paper given over to an evaluation of the ethical and moral context of the experiment.

Battle of the frameworks: Microsoft has CNTK, Google has TensorFlow, and Amazon has… MXNet, as of this week. Amazon has put its weight behind the MXNet deep learning framework, making the software the default framework for running deep learning on Amazon Web Services. MXNet’s elevation at Amazon is likely due to its longtime association with Carnegie Mellon professor (and recent Amazon hire) Alex Smola, who has sought to increase the usage of MXNet for a number of years (PDF). DSSTNE, an Amazon-developed DL library, will likely become a subcomponent of MXNet. It’s likely that only one or two deep learning frameworks will end up being widely used, and whoever controls the framework will be able to extract some economic advantages through building cloud services and products around it that benefit from the broad community uptake. The next two years will likely be critical for establishing the winners and losers in this category.

Municipal Muni-Mind Mangled In Miraculous Manipulation: Further proof that we live in a timeline imagined by Neal Stephenson and William Gibson comes in the form of the public transportation computer hack in San Francisco this weekend. “‘You Hacked, ALL Data Encrypted.’ That was the message on San Francisco Muni station computer screens across the city, giving passengers free rides all day on Saturday,” reports CBS.

DeepMind + NHS: Getting ahold of healthcare data is notoriously tricky due to the many (sensible) laws around data protection. Google DeepMind’s solution is to partner with the Royal Free London NHS Foundation Trust to get some useful modern software into the hands of clinicians, and eventually incorporate machine learning components as it establishes trust and credibility. Much of the NHS runs on an arcane system of paper records, so any digitization is a good thing. It’s likely Google/DeepMind will face some opposition and probing from citizens and politicians over its usage and stewardship of their data. The onus is on Google DeepMind to prove that partnership schemes like this can work for patients above all.

Microsoft’s wacky quantum computer bet: Microsoft plans to make a prototype of a new type of quantum computer in a bet that the technology is ready to jump out of theory and into practical reality. Microsoft has taken a different tack to Google with its quantum computing approach and is betting the farm on a technology called a ‘topological quantum computer’. That’s a somewhat more far-out technology than the types of computer being explored by Google. The company has enlisted a bunch of quantum experts to help it build the machine, including Matthias Troyer of ETH Zurich. (There’s an occasional argument among AI experts, typically after a few beers, as to whether consciousness emerges from a quantum substrate. Physicist Roger Penrose has a pet theory that consciousness comes out of quantum activities inside ‘microtubules’ inside brain neurons, though evidence for this is scant at best.)

Cities conjured up from lines scratched into sand… and much more in this fantastic paper ‘Image-to-Image translation with Conditional Adversarial Nets’. The authors outline a system that lets you train an AI on paired images, taking one input, like a satellite photograph of a city, and generating an output, like a Google Map with bounding boxes around buildings. The technique works across domains and can be used to, say, draw a woman’s handbag and use that to create a synthetic ‘photograph’ of the bag, or take a picture of a landscape in the day and show it at night….
   …The work has already been extended by Opendotlab for the ‘Invisible Cities’ project to create a system that can take a satellite photo of, say, Milan, and re-interpret it as though the buildings all come from Los Angeles. Terracotta roofs turn to concrete flattops, public squares become asphalt, canals become freeways. It’s a marvelous, stimulating experiment, and a wonderful example of how art will be changed by the arrival of machine learning.
   …so with all the possibility of new forms of creation from the combination of deep learning and art, it’s great to see the launch of, a company formed by a bunch of European AI hackers to spread AI-enabled aesthetics into studios and agencies across the world. AI is becoming just another lens through which we see the world, and it has the potential to show us things our puny four-dimensional minds have trouble imagining. t-SNE goggles, kind of thing.
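The training objective behind these image-to-image systems is compact enough to sketch. Per the paper, the generator G is trained with an adversarial term plus an L1 term (weighted by lambda = 100) that keeps G(x) close to the paired ground-truth image y. The arrays and discriminator score below are invented stand-ins for illustration, not the paper’s code:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 8))      # input image (e.g. satellite photo), toy-sized
y = rng.random((8, 8))      # paired target (e.g. map tile)
g_x = rng.random((8, 8))    # stand-in for the generator output G(x)
d_fake = 0.3                # stand-in discriminator score D(x, G(x)) in (0, 1)

lam = 100.0                 # L1 weight; the paper uses lambda = 100

# Generator loss: fool the discriminator, and stay close to y in L1 so
# outputs are structurally faithful rather than just "realistic".
adv_loss = -np.log(d_fake)
l1_loss = np.mean(np.abs(g_x - y))
g_loss = adv_loss + lam * l1_loss
print(g_loss)
```

The heavy L1 weighting is what lets a satellite photo map onto a *specific* city layout instead of a generically plausible one, which is exactly the property the ‘Invisible Cities’ remixes exploit.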

Computational fluid dynamics meets deep learning… The previous sentence will be true of many, many things in coming years: “kitchen-shift scheduling meets deep learning”, “insurance claim analysis meets deep learning”, and so on. But the amazing thing about this code release from Google is that you can train a neural network to handle some of the gnarlier equations involved in CFD. It creates some amazing visualisations, but don’t try this in your nuclear reactor yet, kids.
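This is not Google’s released code, but the general recipe is worth a sketch: generate training pairs from a classical solver, then fit a model that emulates one solver step. Here a toy 1-D diffusion step is learned by linear least squares; a real system would use a convnet and a full fluid solver, but the train-an-emulator shape is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_step(u, alpha=0.1):
    """One explicit finite-difference step of 1-D diffusion (periodic boundary)."""
    return u + alpha * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

# Generate (state, next_state) pairs from the "expensive" classical solver.
X = rng.random((500, 16))
Y = np.stack([diffusion_step(u) for u in X])

# Fit a linear emulator W such that u @ W approximates diffusion_step(u).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned step matches the solver closely on unseen states.
u = rng.random(16)
err = np.max(np.abs(u @ W - diffusion_step(u)))
print(err)
```

The diffusion step here happens to be linear, so the fit is near-exact; the interesting (and harder) cases are the nonlinear terms in real CFD, which is where the neural network earns its keep.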

AI turns everything into a prediction problem: the rise of low-cost machine intelligence systems will see people in companies across the world work to turn as many of their problems as possible into problems of prediction, says the Harvard Business Review. That’s because “the first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail,” it writes.

What does it take to build a strong AI?… not as much as you’d think, suggests Yann LeCun in this wide-ranging speech at Carnegie Mellon University. Fast forward to around 32 minutes into the video to hear Yann’s views on how to build super-smart machines. Most AI experts have their own workable theories for how to build super-intelligent AI systems, but are usually held back by the relatively meager capabilities of modern computers and a lack of the right sort of data. We’re moving into an era where both of these scarcities will be less severe, so we can expect progress here.

OpenAI bits & pieces:

Government Talk: We will be giving testimony on artificial intelligence at the Subcommittee on Space, Science, and Competitiveness’s hearing on “The Dawn of Artificial Intelligence” on Wednesday. Tune in!


[2020: An architect’s office in Seattle. A person wearing a black turtleneck stands in front of a floor-to-ceiling screen. Small white earbuds (no cable) dangle out of their ears. They gesture at the screen.]

“So as you can see, the house itself can change its appearance according to the different styles and textures you’re wearing on the day. We use style-transfer techniques to modify the textures on the walls according to what you’re wearing. This can really help you express yourself and make an impact, especially when hosting get-togethers. Turn on your webcam and I’ll show you!”

The screen splits in two. A woman wearing a red scarf over a blue jean jacket blinks into view on the left-hand side, and a modern, boxy house appears on the right-hand side.

“Let me demonstrate!” The person gestures from the woman on the left side of the screen to the house on the right. On screen, the house flickers and its walls change from grey to a blue that matches the jean jacket. The door and window frames turn from white to a vivid, cross-hatched red.

“Now try without the scarf!” The woman on screen removes her scarf. The house responds, its walls turning from blue to a smooth beige. “In a few years, it won’t just be the appearance of the house that changes; its geometry will change as well.”