Import AI #98: Training self-driving cars with rented firetrucks; spotting (staged) violence with AI-infused drones; what graphs might have to do with the future of AI.

by Jack Clark

Cruise asks to borrow a firetruck to help train its self-driving cars:
…Emergency training data – literally…
Cruise, a self-driving car company based in San Francisco, wants to expose its vehicles to more data involving the emergency services, so it asked the city if it could rent a firetruck, fire engine, and ambulance and have those vehicles drive around a city block with their lights flashing, according to emails surfaced via Freedom of Information Act requests filed by Jalopnik.
  Read more: GM Cruise Prepping Launch of Driverless Pilot Car Pilot in San Francisco: Emails (Jalopnik).

Experienced researcher: What to do if winter is coming:
…Tips for surviving the post-bubble era in AI…
John Langford, a well-regarded researcher at Microsoft, has some advice for people in the AI community as they indulge in the perennial pastime of asking whether AI is in a bubble. Though the field shouldn’t optimize for failure, it might be helpful if it planned for it, he says.
 “As a field, we should consider the coordinated failure case a little bit. What fraction of the field is currently at companies or in units at companies which are very expensive without yet justifying that expense? It’s no longer a small fraction so there is a chance for something traumatic for both the people and field when/where there is a sudden cut-off,” he writes.
  Read more: When the bubble bursts… (John Langford’s personal blog).

Drone AI paper provides a template for future surveillance:
…Lack of discussion of impact of research raises eyebrows…
Researchers with the University of Cambridge, the National Institute of Technology, and the Indian Institute of Science have published details on a “real-time drone surveillance system” that uses deep learning. The system is designed to spot violent activities like strangling, punching, kicking, shooting, and stabbing by running image recognition over footage of crowds in real time.
  It’s the data, silly: To carry out this project the researchers created their own (highly staged) collection of around 2,000 images, called the ‘Aerial Violent Individual’ dataset, which they recorded via a consumer-grade Parrot AR Drone. Most of the flaws in the system relate to this data, which consists of people performing over-acted displays of aggression towards each other – it doesn’t seem to bear much relationship to real-world violence, and it’s not obvious how well the system would perform in the wild.
  Results: The resulting system “works”, in the sense that the researchers obtain high accuracies (90%+) when classifying certain violent behaviors within the dataset, but it’s not clear whether this translates into anything of practical use in the real world. The researchers say they will test the system at a music festival in India later this month.
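  For a feel of the overall shape of such a system, here’s a minimal, illustrative sketch of the final classification stage: featurize each detected person’s 2D body keypoints (which, in the paper, come from its ScatterNet-based pose estimator) as limb orientations and lengths, then classify the pose with an SVM. The skeleton layout, the feature choices, and the training data below are all invented stand-ins, not the paper’s actual code.

```python
# Illustrative sketch of a pose-based violence classifier, NOT the paper's code.
# Assumes an upstream detector + pose estimator has already produced 2D keypoints
# for each person in a drone frame; we featurize the skeleton as limb orientation
# angles and lengths, then train a plain SVM on (staged) labeled poses.
import numpy as np
from sklearn.svm import SVC

# Hypothetical 5-keypoint skeleton: head, left hand, right hand, left foot, right foot.
LIMBS = [(0, 1), (0, 2), (0, 3), (0, 4)]  # pairs of keypoint indices

def pose_features(keypoints):
    """keypoints: (5, 2) array of (x, y) image coordinates for one person."""
    feats = []
    for a, b in LIMBS:
        dx, dy = keypoints[b] - keypoints[a]
        feats.append(np.arctan2(dy, dx))   # limb orientation angle
        feats.append(np.hypot(dx, dy))     # limb length (crude scale cue)
    return np.array(feats)

# Toy random data standing in for the 'Aerial Violent Individual' dataset.
rng = np.random.default_rng(0)
poses = rng.uniform(0, 100, size=(200, 5, 2))   # 200 random skeletons
labels = rng.integers(0, 2, size=200)           # 1 = "violent" pose, 0 = benign

clf = SVC(kernel="rbf").fit([pose_features(p) for p in poses], labels)
print(clf.predict([pose_features(poses[0])]))   # classify one person in one frame
```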
  Responsibility: Like the “Deep Video Networks” research which I wrote about last week, much of this research is distinguished by the immense implications it appears to have for society, and it’s a little sad to see no discussion of this in the paper – yes, surveillance systems like this can likely be put to humanitarian ends, but they can also be used by malicious actors to surveil and repress people. I think it’s important that AI researchers acknowledge the omni-use nature of their work and confront questions like this within the research itself, rather than afterwards in response to public criticism.
  Read more: Eye in the Sky: Real-time Drone Surveillance System (DSS) for Violent Individuals Identification using ScatterNet Hybrid Deep Learning Network (Arxiv).
  Watch video (YouTube).

“Depth First Learning” launches to aid understanding of AI papers:
…Learning through a combination of gathering context and testing understanding…
Industry and academic researchers have launched ‘Depth First Learning’, an initiative to make it easier for people to educate themselves about important research papers. Each writeup walks through the key ideas of a paper alongside recommended literature, with questions scattered throughout that are intended to test whether the reader has absorbed enough context to continue. The idea behind this work is that breaking papers down into their fundamental concepts makes them easier to understand. “We spent some time understanding each paper and writing down the core concepts on which they were built,” the researchers write.
  Read an example: “Depth First Learning” article on InfoGAN (Depth First Learning website).
  Read more: Depth First Learning (DFL website, About page).

Graphs, graphs everywhere: The future according to DeepMind:
…Why a little structure can be a very good thing…
New research from DeepMind shows how to fuse structured approaches to AI design with end-to-end learned systems, creating systems that can not only learn about the world but also recombine what they have learned in new ways to solve new problems. This sort of “combinatorial generalization” is key to intelligence, the authors write, and they claim their approach addresses some of the recent criticisms of deep learning made by people like Judea Pearl, Josh Tenenbaum, and Gary Marcus, among others.
  Structure, structure everywhere: The authors argue that many of today’s deep learning systems already encode this sort of bias towards structure in the form of specific arrangements of learned components – for example, convolutional neural networks are composed of convolutional layers chained together in increasingly elaborate ways for image recognition. These designs encode an implicit relational inductive bias, the authors write, because they take in a bunch of data and operate over the relationships within it. Additionally, most problems can be decomposed into graph representations: modeling the interactions of a bunch of pool balls can be done by expressing the balls and the table as nodes in a graph, with the links between them signaling directions in which force may be transmitted; a molecule can similarly be decomposed into atoms (nodes) and bonds (edges).
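  As a toy illustration of that decomposition (the attribute choices here are my own assumptions, not the paper’s), a pool table might be expressed in code like this:

```python
# Illustrative sketch: a pool table decomposed into a graph. Balls (and the
# table's rail) become nodes; pairs that can exchange force become directed
# edges. All attribute names and values here are invented.
from dataclasses import dataclass, field

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)   # node id -> attributes
    edges: list = field(default_factory=list)   # (sender, receiver, attributes)

table = Graph()
table.nodes["cue_ball"] = {"pos": (0.3, 0.5), "vel": (1.2, 0.0)}
table.nodes["eight_ball"] = {"pos": (0.6, 0.5), "vel": (0.0, 0.0)}
table.nodes["rail"] = {"pos": (1.0, 0.5), "vel": (0.0, 0.0)}
# Each edge says "force can flow from sender to receiver".
table.edges.append(("cue_ball", "eight_ball", {"rest_dist": 0.06}))
table.edges.append(("eight_ball", "rail", {"rest_dist": 0.0}))
print(len(table.nodes), "nodes,", len(table.edges), "edges")
```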
  Graph network: DeepMind has developed the ‘Graph network’ (GN) block, a generic component “which takes a graph as input, performs computations over the structure, and returns a graph as output.” This is desirable because graphs are flexible – they can express an arbitrary number of relationships between an arbitrary number of entities, the same function can be deployed on graphs of different sizes, and because graphs represent entities and relations as sets, they are invariant to permutations.
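  To make that concrete, here’s a minimal numpy sketch of the edge-and-node core of a GN block, following the general recipe the paper describes (the block’s global attribute is omitted for brevity, a sum stands in for the aggregation function, and random untrained projections stand in for the learned update functions). The three-ball pool-table graph is an invented example.

```python
# Minimal sketch of a Graph Network (GN) block: update edges from their
# endpoints, aggregate incoming edge messages per node, then update nodes.
# A single random linear layer + tanh stands in for each learned function;
# a real system would train these end-to-end.
import numpy as np

rng = np.random.default_rng(0)

def random_net(in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ W)

def gn_block(nodes, edges, senders, receivers, phi_e, phi_v):
    """nodes: (N, Dv); edges: (E, De); senders/receivers: (E,) index arrays."""
    # 1. Edge update: each edge sees its own attribute plus both endpoints.
    edge_in = np.concatenate([edges, nodes[senders], nodes[receivers]], axis=1)
    new_edges = phi_e(edge_in)
    # 2. Aggregate: sum incoming edge messages for each receiving node
    #    (a permutation-invariant reduction, so node ordering doesn't matter).
    agg = np.zeros((nodes.shape[0], new_edges.shape[1]))
    np.add.at(agg, receivers, new_edges)
    # 3. Node update: each node sees its attribute plus its aggregated messages.
    new_nodes = phi_v(np.concatenate([nodes, agg], axis=1))
    return new_nodes, new_edges

# Invented example: three pool balls as nodes, possible collisions as edges.
nodes = rng.normal(size=(3, 4))                 # e.g. position + velocity
senders = np.array([0, 1, 2]); receivers = np.array([1, 2, 0])
edges = rng.normal(size=(3, 2))                 # e.g. relative displacement
phi_e = random_net(2 + 4 + 4, 8)                # edge update function
phi_v = random_net(4 + 8, 4)                    # node update function
print(gn_block(nodes, edges, senders, receivers, phi_e, phi_v)[0].shape)  # (3, 4)
```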
  No silver bullet: Graph networks don’t make it easy to support approaches like “recursion, control flow, and conditional iteration”, they say, and so shouldn’t be considered a panacea. Another open question is where the graphs these networks operate over should come from in the first place – a problem the authors leave to other researchers.
  Read more: Relational inductive biases, deep learning, and graph networks (Arxiv).

Google announces AI principles to guide its business:
…Company releases seven principles, along with description of ‘AI applications we will not pursue’…
Google has published its AI principles, following an internal employee outcry in response to the company’s participation in a drone surveillance project for the US military. These principles are intended to guide Google’s work in the future, according to a blog post written by Google CEO Sundar Pichai. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions”.
  Principles: The seven principles are as follows:
– “Be socially beneficial”.
– “Avoid creating or reinforcing unfair bias”.
– “Be built and tested for safety”.
– “Be accountable to people”.
– “Incorporate privacy design principles”.
– “Uphold high standards of scientific excellence”.
– “Be made available for uses that accord with these principles”.
   What Google won’t do: Google has also published a (short) list of “AI applications we will not pursue”. These are pretty notable because it’s rare for a public company to place such explicit restrictions on itself. The things Google won’t pursue are as follows:
– “Technologies that cause or are likely to cause overall harm”.
– “Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.
– “Technologies that gather or use information for surveillance violating internationally accepted norms”.
– “Technologies whose purpose contravenes widely accepted principles of international law and human rights”.
   Read more: AI at Google: our principles (Google Blog).

AI Policy with Matthew van der Merwe:
…Reader Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net …

India releases national AI strategy:
India is the latest country to launch an AI strategy, releasing a discussion paper last week.
   Focus on five sectors: The report identifies five sectors in which AI will have significant societal benefits, but which may require government support in addition to private sector innovation. These are: healthcare; agriculture; education; smart cities and infrastructure; mobility and transportation.
   Five barriers to be addressed:
– Lack of research expertise
– Absence of enabling data ecosystems
– High resource cost and low awareness for adoption
– Lack of regulations around privacy and security
– Absence of a collaborative approach to adoption and applications
What they’re doing: The report proposes supporting two tiers of organizations to drive the strategy.
– Centres of Research Excellence – academic/research hubs
– International Centres of Transformational AI – bodies with a mandate to develop and deploy research, in partnership with the private sector.
   Read more: National Strategy for Artificial Intelligence (NITI Aayog).

Tech Tales:

The Dream Wall

Everyone’s DreamWall is different and everyone’s DreamWall is intimate. Nonetheless, we share (heavily curated) pictures of them with each other. Mine is covered in mountains, and on each mountain peak there are little desks with lamps. My friend’s Wall shows an underwater scene, with spooky trenches and fish that swim around them and the occasional hint of an octopus. One famous person accidentally showed a picture of their DreamWall via a poorly posed selfie, and it caused them problems: the Wall showed a pastoral country scene with nooses hanging from the occasional tree and, in one corner, a haybale-sized pile of submachine guns. Even though most people know how DreamWalls work, they can’t help but judge other people for the contents of theirs.

It works like this:

When you wake up you say some of the things you were dreaming about.

Your home AI system records your comments and sends them to your personal ‘DreamMaker’ software.

The ‘DreamMaker’ software maps your verbal comments to entities in its knowledge graph, then sends those comment-entity pairs to the DreamArtist software.

DreamArtist tries to render the comment-entity data into individual objects which fit with the aesthetic theme inherent to your current DreamWall.

The new objects are sent to your home AI system which displays them on your DreamWall and gives you the option to add further prompts, such as “move the cow to the left” or “make sure that the passengers in the levitating car look like they are having fun”.

This cycle repeats every morning, though if you don’t say anything when you wake up it will maintain the DreamWall as-is, modulating only its appearance and dynamics according to data about how active you were in the night.
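Condensed into pseudocode-that-runs – with every name invented, of course – the loop looks something like this:

```python
# A whimsical, entirely invented sketch of the morning DreamWall update loop.
def dream_maker(comments, knowledge_graph):
    """Map each spoken comment to the closest known entity (comment-entity pairs)."""
    return [(c, next((e for e in knowledge_graph if e in c), None)) for c in comments]

def dream_artist(pairs, theme):
    """Render each comment-entity pair as an object matching the wall's theme."""
    return [f"{theme} {entity}" for _, entity in pairs if entity]

def update_wall(wall, comments, knowledge_graph, theme="mountain-peak"):
    objects = dream_artist(dream_maker(comments, knowledge_graph), theme)
    wall.extend(objects)   # the home AI system displays the new objects
    return wall

wall = ["mountain", "desk with lamp"]
print(update_wall(wall, ["I dreamed about a levitating car"], {"levitating car", "cow"}))
```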

If you wake up with someone else, most systems have failsafes that mean your DreamWall won’t display. Some companies are piloting ‘Couples’ DreamWalls’ but are having trouble with them – apart from some old couples that have been together a very long time, most people, even those in very harmonious relationships, have distinct aspects to their personality that the other person might not want to wake up to every single day – especially since DreamWalls tend to contain visual depictions of things otherwise repressed during daily life.