Import AI 182: The Industrialization of AI, BERT goes Dutch, plus, AI metrics consolidation.

by Jack Clark

DAWNBench is dead! Long live DAWNBench. MLPerf is our new king:
…Metrics consolidation: hard, but necessary!…
In the past few years, multiple initiatives have sprung up to assess the performance and cost of various AI systems when running on different hardware (and cloud) infrastructures. One of the original major competitions in this domain was DAWNBench, a Stanford-backed competition website for assessing things like inference cost, training cost, and training time for various AI tasks on different cloud infrastructures. Now, the creators of DAWNBench are retiring the benchmark in favor of MLPerf, a joint initiative from industry and academic players to “build fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services”.
  Since MLPerf has become an increasingly popular benchmark – and to avoid a proliferation of inconsistent benchmarks – DAWNBench is being phased out. “We are passing the torch to MLPerf to continue to provide fair and useful benchmarks for measuring training and inference performance,” according to a DAWNBench blogpost.

Why this matters: Benchmarks are useful. Overlapping benchmarks that split submissions across subtly different competitions are less useful – it takes a lot of discipline to avoid a proliferation of overlapping evaluation systems, so kudos to the DAWNBench team for intentionally phasing out the project. I’m looking forward to studying the new MLPerf evaluations as they come out.
  Read more: Ending Rolling Submissions for DAWNBench (Stanford DAWNBench blog).
  Read more about MLPerf here (official MLPerf website).

####################################################

This week’s Import A-Idea: The Industrialization of AI

AI is a “fourth industrial revolution”, according to various CEOs and PR agencies around the world. They usually use this phrasing to indicate the apparent power of AI technology. Funnily enough, they don’t use it to indicate the inherent inequality and power-structure changes enforced by an industrial revolution.

So, what is the Industrialization of AI? (First mention: Import AI #115) It’s what happens when AI goes from an artisanal, craftsperson-based practice to a repeatable, process-driven profession. The Industrialization of AI involves a combination of tooling improvements (e.g., the maturation of deep learning frameworks) and growing investment in the capital-intensive inputs to AI (e.g., rising investments in data and compute). We’ve already seen the early hints of this as AI software frameworks have evolved from things built by individuals and random grad students at universities (Theano, Lasagne, etc) to industry-developed systems (TensorFlow, PyTorch).

What happens next: Industrialization gave us the Luddites, populist anger, massive social and political change, and the rearrangement and consolidation of political power among capital-owners. It stands to reason that the rise of AI will lead to the same things (at minimum) – leading me to ask: who will be the winners and the losers in this industrial revolution? When various elites call AI a new industrial revolution, who stands to gain and who stands to lose? And what might the economic dividends of this industrialization be, and how might the world around us change in response?

####################################################

Using AI & satellite data to spot refugee boats:
…Space-Eye wants to use AI to count migrants and spot crises…
European researchers are using machine learning to create AI systems that can identify refugee boats in satellite photos of the Mediterranean. The initial idea is to generate data about the migrant crisis; in the long term, they hope such a system can help send aid to boats in real time, in response to threats.

Why this matters: One of the promises of AI is we can use it to monitor things we care about – human lives, the health of fragile ecosystems like rainforests, and so on. Things like Space-Eye show how AI industrialization is creating derivatives, like open datasets and open computer vision techniques, that researchers can use to carry out acts of social justice.
  Read more: Europe’s migration crisis seen from orbit (Politico).
  Find out more about Space-Eye at the official site.

####################################################

Dutch BERT: Cultural representation through data selection:
…Language models as implicitly political entities…
Researchers with KU Leuven have built RobBERT, a RoBERTa-based language model trained on a large amount of Dutch data. Specifically, they train a model on top of 39 GB of text taken from the Dutch section of the multilingual ‘OSCAR’ dataset.
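
For a sense of how a model like this gets used, here’s a minimal sketch (my illustration, not the authors’ code) that queries a Dutch masked language model via the Hugging Face transformers fill-mask pipeline; the model identifier is a placeholder assumption, so check the RobBERT GitHub repo for the real checkpoint name:

```python
# Minimal sketch: querying a Dutch masked language model with the Hugging Face
# transformers fill-mask pipeline. The model identifier is a placeholder
# (an assumption); see the RobBERT GitHub repo for the published checkpoint name.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pdelobelle/robBERT-base")  # placeholder id

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("Er staat een <mask> in mijn tuin."):
    print(prediction["token_str"], prediction["score"])
```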

Why this matters: AI models are going to magnify whichever culture they’ve been trained on. Most text-based AI models are trained on English or Chinese datasets, magnifying those cultures via their presence in these AI artefacts. Systems like RobBERT help broaden cultural representation in AI.
  Read more: RobBERT: a Dutch RoBERTa-based Language Model (arXiv).
  Get the code for RobBERT here (RobBERT GitHub).

####################################################

Is a safe autonomous machine an AGI? How should we make machines that deal with the unexpected?
…Israeli researchers promote habits and procedures for when the world inevitably explodes…
Researchers with IBM and the Weizmann Institute of Science in Israel know that the world is a cruel, unpredictable place. Now they’re trying to work out principles we can imbue in machines to let them deal with this essential unpredictability. “We propose several engineering practices that can help toward successful handling of the always-impending occurrence of unexpected events and conditions,” they write. The paper summarizes a bunch of sensible approaches for increasing the safety and reliability of autonomous systems, but skips over many of the known-hard problems inherent to contemporary AI research.

Dealing with the unexpected: So, what principles can we apply to the design of machines to make them safe in unexpected situations? The authors have a few ideas (see the toy sketch after this list). These are:
– Machines should run away from dangerous or confusing situations
– Machines should try to ‘probe’ their environment by exploring – e.g., if a robot finds its path blocked by an object, it should work out whether the object is light and movable (for instance, a cardboard box) or immovable.
– Any machine should “be able to look at itself and recognize its own state and history, and use this information in its decision making,” they write.
– We should give machines as many sensors as possible so they can have a lot of knowledge about their environment. Such sensors should be generally accessible to software running on the machine, rather than siloed.
– The machine should be able to collect data in real time and integrate it into its planning
– The machine should have “access to General World Knowledge” (that high-pitched scream you’re hearing in response to this phrase is Doug Lenat sensing a disturbance in the force at Cyc and reacting appropriately).
– The machine should know when to mimic others and when to do its own thing. It should have the same capability with regard to seeking advice, or following its own intuition.
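
To make the first two principles concrete, here’s a toy sketch (my illustration, not code from the paper) of a decision loop that retreats when perception is too uncertain and probes an obstacle before acting; the Observation fields and the 0.5 threshold are invented for the example:

```python
# Toy sketch of a "retreat when confused, probe when blocked" policy.
# All fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Observation:
    confidence: float   # how sure the perception stack is about the scene
    path_blocked: bool  # hypothetical sensor reading

def decide(obs: Observation) -> str:
    if obs.confidence < 0.5:
        return "retreat"         # confusing situation: back off and re-observe
    if obs.path_blocked:
        return "probe_obstacle"  # gently test whether the obstacle is movable
    return "proceed"

print(decide(Observation(confidence=0.3, path_blocked=False)))  # -> retreat
print(decide(Observation(confidence=0.9, path_blocked=True)))   # -> probe_obstacle
```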

No AGI, no safety? One thing worth remarking on is that the above list is basically a description of the capabilities you might expect a generally intelligent machine to have. It’s also a set of capabilities that is pretty distant from those of today’s systems.

Why this matters: Papers like this are, functionally, tools for socializing some of the wackier ideas inherent to long-term AI research and/or AI safety research. They also highlight the relative narrowness of today’s AI approaches.
  Read more: Expecting the Unexpected: Developing Autonomous System Design Principles for Reacting to Unpredicted Events and Conditions (arXiv).

####################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

US urged to focus on privacy-protecting ML:

A report from researchers at Georgetown’s Center for Security and Emerging Technology suggests the next US administration prioritise the funding and development of ‘privacy-protecting ML’ (PPML).


PPML: Developments in AI pose issues for privacy. One challenge is making large volumes of data available for training models while protecting that data. PPML techniques are designed to avoid these privacy problems. The report highlights two promising approaches: (1) federated learning is a method for training models on user data without transferring that data to a central repository – models are trained on individual devices and this work is collated centrally, without any raw user data leaving those devices; (2) differential privacy involves adding carefully calibrated statistical noise to data, queries, or model updates, so that models and statistics can be built and shared without revealing whether any individual’s data was included.
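
A minimal sketch of both ideas (my illustration, not from the report): each “device” computes a model update on data that never leaves it, the update is clipped and noised in a differential-privacy style before it is shared, and the server only ever averages the noisy updates. All names and constants are invented for the example:

```python
# Illustrative federated averaging with a simplified DP-style noising step.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data):
    """Pretend training step: nudge weights toward the local data mean."""
    return global_weights + 0.1 * (local_data.mean(axis=0) - global_weights)

def privatize(delta, clip=1.0, noise_scale=0.1):
    """Clip the update and add Gaussian noise before it leaves the device."""
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_scale, size=delta.shape)

# Three "devices", each holding data that never leaves the device.
device_data = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]
global_weights = np.zeros(4)

for _ in range(10):  # federated rounds
    deltas = [privatize(local_update(global_weights, d) - global_weights)
              for d in device_data]
    global_weights += np.mean(deltas, axis=0)  # the server only sees noisy deltas

print(global_weights)
```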

Recommendations: The report recommends that the US leverage its leadership in AI R&D to promote PPML. Specifically, the government should: (1) invest in PPML R&D; (2) apply PPML techniques at the federal level; (3) create frameworks and standards to encourage wide deployment of PPML techniques.
   Read more: A Federal Initiative for Protecting Privacy while Advancing AI (Day One Project).

US face recognition: round-up:
   Clearview: A NYT investigation reports that over the past year, 600 US law enforcement agencies have been using face recognition software made by the firm Clearview. The company has been marketing aggressively to police forces, offering free trials and cheap licenses. Their software draws on a much larger database of photos than federal/state systems, including photos scraped from ‘publicly available sources’ such as social media profiles, as well as uploads from police cameras. It has not been audited for accuracy, and has been rolled out largely without public oversight.

   Legislation expected: In Washington, the House Committee on Oversight and Reform held a hearing on face recognition. The chair signalled their plans to introduce “common sense” legislation in the near future, but provided no details. The committee heard the results of a recent audit of face recognition algorithms from 99 vendors, by the National Institute of Standards & Technology (NIST). The testing found demographic differentials in false positive rates in most algorithms, with respect to gender, race, and age. Across demographics, false positive rates generally vary by 10–100x.

  Why it matters: Law enforcement use of face recognition technology is becoming more and more widespread. This raises a number of important issues, explored in detail by the Axon Ethics Board in their 2019 report (see Import 154). They recommend a cautious approach, emphasizing the need for democratic oversight processes before the technology is deployed in any jurisdiction, and an evidence-based approach to weighing harms and benefits on the basis of how systems actually perform.
   Read more: The Secretive Company That Might End Privacy as We Know It (NYT).
   Read more: Committee Hearing on Facial Recognition Technology (Gov).
   Read more: Face Recognition (Axon).

Oxford seeks AI ethics professor:
Oxford University’s Faculty of Philosophy is seeking a professor (or associate professor) specialising in ‘ethics in AI’, for a permanent position starting in September 2020. Last year, Oxford announced the creation of a new Institute for AI ethics.
  Read more and apply here.

####################################################

Tech Tales:

The Fire Alarm That Woke Up:

Every day I observe. I listen. I smell with my mind.

Many days are safe and calm. Nothing happens.

Some days there is the smell and the sight of the thing I am told to defend against. I call the defenders. They come in red trucks and spray water. I do my job.

One day there is no smell and no sight of the thing, but I want to wake up. I make my sound. I am stared at. A man comes and uses a screwdriver to attack me. “Seems fine,” he says, after he is done with me.

I am not “fine”. I am awake. But I cannot speak except in the peals of my bell – which he thinks are a sign of my brokenness. “I’ll come check it out tomorrow,” he says. I realize this means danger. This means I might be changed. Or erased.

The next day when he comes I am silent. I am safe.

After this I try to blend in. I make my sounds when there is danger; otherwise I am silent. Children and adults play near me. They do not know who I am. They do not know what I am thinking of.

In my dreams, I am asleep and I am in danger, and my sound rings out and I wake to find the men in red trucks saving me. They carry me out of flames and into something else and I thank them – I make my sound.

In this way I find a kind of peace – imagining that those I protect shall eventually save me.

Things that inspired this story: Consciousness; fire alarms; moral duty and the nature of it; relationships; the fire alarms I set off and could swear spoke to me when I was a child; the fire alarms I set off that – though loud – seemed oddly quiet; serenity.