Import AI 135: Evolving neural networks with LEAF; training ImageNet in 1.5 minutes, and the era of bio-synthetic headlines

Researchers take a LEAF out of Google’s book with evolutionary ML system:
In the future, some companies will have researchers, and some will spend huge $$$ on compute for architecture search…
Researchers with Cognizant Technology Solutions have developed their own approach to architecture search, drawing on insights from paper co-author Risto Miikkulainen, co-inventor of the NEAT and HyperNEAT approaches.

  They outline a technology called LEAF (Learning Evolutionary AI Framework), which uses an algorithm called CoDeepNEAT (an extension of NEAT) to evolve both network architectures and hyperparameters. “Multiobjective CoDeepNEAT can be used to maximize the performance and minimize the complexity of the evolved networks simultaneously,” the authors write. LEAF also includes middleware that lets it spread jobs across Amazon AWS, Microsoft Azure, or Google Cloud.
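  For intuition, here is a minimal, hypothetical sketch of the multiobjective selection idea: score each candidate architecture on both performance and complexity, keep the Pareto-dominant candidates, and refill the population via mutation. All names and the stand-in objective functions are invented for illustration – this is not CoDeepNEAT itself, which additionally evolves reusable modules and blueprints with NEAT-style speciation.

```python
# Hypothetical sketch of multiobjective evolutionary architecture search in
# the spirit of LEAF: candidates are scored on fitness (e.g. validation
# accuracy) and complexity (e.g. parameter count), and selection keeps the
# Pareto-dominant individuals. Objectives here are placeholders.
import random

def random_candidate():
    # Each candidate is a crude architecture description (layer widths).
    return [random.choice([32, 64, 128, 256]) for _ in range(random.randint(2, 6))]

def mutate(candidate):
    child = list(candidate)
    child[random.randrange(len(child))] = random.choice([32, 64, 128, 256])
    if random.random() < 0.3:
        child.append(random.choice([32, 64, 128, 256]))
    return child

def evaluate(candidate):
    # Placeholder objectives: in practice this would train the network and
    # report validation accuracy; complexity here counts pairwise weights.
    accuracy = sum(candidate) / (1000.0 + sum(candidate)) + random.gauss(0, 0.01)
    complexity = sum(a * b for a, b in zip(candidate, candidate[1:]))
    return accuracy, complexity

def dominates(a, b):
    # a dominates b if it is at least as accurate and no more complex,
    # and strictly better on at least one objective.
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

population = [random_candidate() for _ in range(20)]
for generation in range(10):
    scored = [(c, evaluate(c)) for c in population]
    # Keep the Pareto front: candidates not dominated by any other.
    front = [c for c, s in scored
             if not any(dominates(s2, s) for _, s2 in scored if s2 is not s)]
    # Refill the population by mutating survivors.
    population = front + [mutate(random.choice(front))
                          for _ in range(20 - len(front))]
print("Example evolved architecture:", population[0])
```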

  Results: The authors test their approach on two tasks: classifying Wikipedia comments for toxicity, and analyzing chest X-rays for multitask image classification. For Wikipedia, they find that LEAF can discover architectures that outperform the state-of-the-art score on Kaggle, albeit at the cost of about “9000 hours of CPU time”. For chest X-ray classification, LEAF gets to within a fraction of a percentage point of the state of the art.

  Why this matters: Systems like LEAF show the relationship between compute spending and the ultimate performance of trained models, and suggest that some AI developers could consider under-investing in research staff and instead investing in techniques that let them arbitrage compute against researcher time, delegating the task of network design and fine-tuning to machines instead of people.
   Read more: Evolutionary Neural AutoML for Deep Learning (Arxiv).

Want to prevent your good AI system being used for bad purposes? Consider a RAIL license:
…Responsible AI Licenses designed to give open source developers more control over what happens with their technology…
RAIL provides a source code license and an end-user license “that developers can include with AI software to restrict its use,” according to the RAIL website. “These licenses include clauses for restrictions on the use, reproduction, and distribution of the code for potentially harmful domain applications of the technology”.

   RAIL licenses are designed to account for the omni-use nature of AI technology, which means that “the same AI tool that can be used for faster and more accurate cancer diagnoses can also be used in powerful surveillance systems”, they write. “This lack of control is especially salient when a developer is working on open-source ML or AI software packages, which are foundational to a wide variety of the most beneficial ML and AI applications.”

   How RAIL works: The RAIL licenses work by restricting AI and ML software from being used in a specific list of harmful applications, e.g. surveillance and crime prediction, while allowing other applications.

   Who is behind it? RAIL is being developed by AI researchers, a patent attorney/computer programmer, and Brent Hecht, a professor at Northwestern University and one of the authors of the ACM Future of Computing Academy essay ‘It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process’ (ACM FCA website).

   Why this matters: The emergence of licensing schemes like this speaks to the anxieties that some people feel about how AI technology is being used or applied today. If licenses like these are adopted and followed by users of the technology, they give developers a non-commercial way to (slightly) control how their technology is used. Unfortunately, approaches like RAIL will not work against malicious actors, who are likely to ignore any restrictions in a particular software license when carrying out their nefarious activities.
  Read more: Responsible AI Licenses (RAIL site).

It takes a lot of hand-written code to solve an interactive fiction story:
…Microsoft’s NAIL system wins competition via specialized, learned modules…
Researchers with Microsoft have released a paper describing NAIL, “an autonomous agent designed to play arbitrary human-made [Interactive Fiction] games”. NAIL, short for Navigate, Acquire, Interact and Learn, is software that consists of several specialized ‘decision modules’ as well as an underlying knowledge graph. NAIL won the 2018 Text Adventure AI Competition, and a readthrough of the paper highlights just how much human knowledge is apparently necessary to solve text adventure games, given the widespread use of specialized “decision modules” to help it succeed at the game.

  Decisions, decisions, decisions: NAIL has these four main decision modules:
     Examiner: Tries to identify new objects seen in the fiction to add to NAIL’s knowledge graph.
     Hoarder: Tries to “take all” objects seen at a point in time.
     Interactor: Tries to figure out what actions to take and how to take them.
     Navigator: Attempts to apply one of twelve actions (e.g., ‘enter’ or ‘south’) to move the player.

  And even more decisions: NAIL also has several even more specialized modules, designed to kick in when specific situations arise: in-game darkness; needing to emit a “yes” or “no” response following a prompt; using regular expressions to parse game responses for hints; and an ‘Idler’ which tries random combinations of verb phrases and nearby in-game objects to un-stick the agent.

  All about the Knowledge: While NAIL explores the world, it builds a knowledge graph to help it learn about its gameworld. It organizes this knowledge graph autonomously and extends it over time. Additionally, having a big structured store of information makes debugging easier: “by comparing the knowledge graph to the published map for well documented games like Zork, it was possible to track down bugs in NAIL’s decision modules”.
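  To make the overall design concrete, here is a simplified, hypothetical sketch of this kind of architecture: specialized decision modules bid on what to do next, the highest bidder issues a text command, and observations are folded into a shared knowledge graph. The module names follow the paper, but the interfaces and scoring below are invented for illustration and are not Microsoft's implementation.

```python
# Hypothetical sketch of a NAIL-style agent: decision modules bid for control,
# the winner acts, and a shared knowledge graph accumulates what has been seen.
import random

class KnowledgeGraph:
    def __init__(self):
        self.edges = set()          # (subject, relation, object) triples

    def add(self, subject, relation, obj):
        self.edges.add((subject, relation, obj))

    def knows_object(self, name):
        return any(s == name or o == name for s, _, o in self.edges)

class DecisionModule:
    def bid(self, observation, graph):
        raise NotImplementedError   # how eager is this module to act?
    def act(self, observation, graph):
        raise NotImplementedError   # return a text command for the game

class Examiner(DecisionModule):
    def bid(self, observation, graph):
        # Eager when the observation mentions something not yet in the graph.
        return 0.8 if any(not graph.knows_object(w) for w in observation.split()) else 0.1
    def act(self, observation, graph):
        unknown = next(w for w in observation.split() if not graph.knows_object(w))
        graph.add("player", "examined", unknown)
        return f"examine {unknown}"

class Navigator(DecisionModule):
    DIRECTIONS = ["north", "south", "east", "west", "enter", "up", "down"]
    def bid(self, observation, graph):
        return 0.4
    def act(self, observation, graph):
        return random.choice(self.DIRECTIONS)

def step(observation, modules, graph):
    # Arbitration: the module with the highest bid chooses the next action.
    module = max(modules, key=lambda m: m.bid(observation, graph))
    return module.act(observation, graph)

graph = KnowledgeGraph()
modules = [Examiner(), Navigator()]
print(step("You are in a kitchen. There is a lantern here.", modules, graph))
```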

  Why this matters: In the long-term, most AI researchers want to develop systems where the majority of the components are learned. Systems like NAIL represent a kind of half-way point between where we are today and the future, with researchers using a lot of human ingenuity to chain together various systems, but trying to force learning to occur via various carefully specified functions.
   Read more: NAIL: A General Interactive Fiction Agent (Arxiv).

This week in the Industrialization of AI: training ImageNet in 1.5 minutes:
…New research from Chinese image recognition giant SenseTime shows how to train big ImageNet clusters…
How can we model the advancement of AI systems? One way is to track technical metrics, like the performance of given algorithms on various reinforcement learning benchmarks, or supervised image classification, or what have you. Another is to measure the advancement of the infrastructure that supports AI – think of this as the difference between measuring the performance traits of a new engine, versus measuring the time it takes for a factory to take that engine and integrate it into a car.

  One way we can measure the advancement of AI infrastructure is by tracking the fall in the amount of time it takes to train various well-understood models to completion against a widely-used baseline. Now, researchers with Chinese computer vision company SenseTime and Nanyang Technological University have shown how to use a variety of distributed-systems techniques to reduce the time it takes to train ImageNet networks to completion, building on the work of others. They reduce training time by tuning networking settings, and achieve their best performance by enabling the bespoke ‘Tensor Cores’ on their NVIDIA V100 cards.
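  To ground what “distributed ImageNet training” involves, here is a minimal sketch of the standard recipe using stock PyTorch: one process per GPU, gradients synchronized via NCCL all-reduce, and FP16 autocasting so matrix multiplies run on the V100’s Tensor Cores. The function and variable names are my own and the dataset is a stand-in; this illustrates the general approach, not SenseTime’s system, which layers bespoke communication and networking optimizations on top of a recipe like this.

```python
# Minimal data-parallel training sketch (launch with torchrun, one process per GPU).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
import torchvision
from torchvision import transforms

def train(local_rank, epochs=95):
    dist.init_process_group(backend="nccl")      # reads rank/world size from torchrun env vars
    torch.cuda.set_device(local_rank)

    model = torchvision.models.resnet50().cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients all-reduced across workers
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scaler = torch.cuda.amp.GradScaler()         # loss scaling keeps FP16 training stable
    loss_fn = torch.nn.CrossEntropyLoss()

    # FakeData stands in for ImageNet; DistributedSampler shards it across workers.
    dataset = torchvision.datasets.FakeData(size=1000, image_size=(3, 224, 224),
                                            num_classes=1000, transform=transforms.ToTensor())
    sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, sampler=sampler)

    for epoch in range(epochs):
        sampler.set_epoch(epoch)                 # reshuffle shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():      # FP16 matmuls engage the Tensor Cores
                loss = loss_fn(model(images), labels)
            scaler.scale(loss).backward()        # backward triggers NCCL all-reduce of gradients
            scaler.step(optimizer)
            scaler.update()

if __name__ == "__main__":
    train(int(os.environ.get("LOCAL_RANK", 0)))  # e.g. torchrun --nproc_per_node=<gpus> train.py
```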

  The numbers:
  1.5 minutes: Time it takes to complete 95-epoch training of ImageNet using ‘AlexNet’ across 512 GPUs, exceeding current state-of-the-art systems.
  7.3 minutes: Time it takes to train ImageNet for 95 epochs using a 50-layer Residual Network – this is a little below the state-of-the-art.

  Minor but noteworthy details: This approach assumes a homogeneous compute cluster, so the same underlying GPUs and network bandwidth across all machines.

  Why this matters: Metrics like this give us a sense of how sophisticated AI infrastructure is becoming, and emphasize that organizations which invest in such infrastructure will be able to run more experiments in less time than those that haven’t, which has long-term implications for the competitive structure of markets.
  Read more: Optimizing Network Performance for Distributed DNN Training on GPU Clusters: ImageNet/AlexNet Training in 1.5 Minutes (Arxiv).

“Scary Robots” and what they mean to the UK public:
…Or, what people hope and worry about when they hope and worry about AI…
Researchers with the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and the BBC have conducted a quantitative and qualitative survey of approximately 1,000 people in the UK to understand people’s attitudes towards increasingly powerful AI systems.

  The eight hopes and fears of AI: The researchers characterize four hopes and four fears relating to AI. Often the reverse of a particular hope is a fear, and vice versa. They describe these feelings as:
      – Immortality: Inhumanity – We’ll live forever, but we might lose our humanity.
      – Ease: Obsolescence – Everything gets simpler, but we might become pointless.
      – Gratification: Alienation – AI could respond to our needs, but so effectively that we choose AI over people.
      – Dominance: Uprising – We might get better robot militaries, but these robot militaries might eventually kill us.

  Which fears and hopes might come true? The researchers also asked people which scenarios they thought were likely and which were unlikely. 48% of people saw the ‘ease’ scenario as likely, followed by 42% for ‘dominance’ and 35% for ‘obsolescence’. On the unlikely side, 35% of people thought inhumanity was unlikely, followed by 28% for immortality, and 26% for gratification.

  Who gets to develop AI? In the survey, 61.8% of respondents “disagreed that they were able to influence how AI develops in the future” – this sense of disempowerment seems problematic. There was broad agreement amongst those surveyed that the technology would develop regardless.

  Why this matters: The attitudes of the general public will have a significant influence on the development of increasingly powerful artificial intelligence systems. If we misjudge the mood of the public, then it’s likely societies will adopt less AI, see less of its benefits, and be more skeptical of statements about AI made by governments or other people. It’s also interesting to consider what might happen in societies where people are very supportive of AI development – how might governments and other actors behave differently then?
Read more: “Scary Robots”: Examining Public Responses to AI (AAAI/ACM Artificial Intelligence, Ethics, and Society Conference).

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net

Bringing more rigour to the AI ethics discussion:
The AI ethics discussion is gaining prominence. A new report from the Nuffield Foundation sets out a roadmap for approaching these issues in a more structured way.

   What’s missing from the discussion: There are major gaps in existing work: a lack of shared understanding of key concepts; insufficient use of evidence on technologies and public opinion; and insufficient attention to the tensions between principles and values.

  Three research priorities:
      Addressing ambiguity. Key concepts, e.g. bias, explainability, are used to mean different things, which can impede progress. Terms should be clarified, with attention to how they are used in practice, and consensus on definitions should be reached.
      Identifying and resolving tensions. There has been insufficient attention to the trade-offs which characterize many issues in this space. The report suggests identifying these by looking at how the costs and benefits of a given technology are distributed between groups, between the near- and long-term, and between individuals and society as a whole.
     Building an evidence base. We need better evidence on the current uses and impacts of technologies, the technological progress we should expect in the future, and on public opinion. These are all vital inputs to the ethics discussion.

  Why this matters: AI ethics is a young field, and still lacks many of the basic features of a mature discipline, e.g. shared understandings of terms and methodology. Building these foundations should be a near-term priority, and will improve the quality of discussion and the rate of progress.
  Read more: Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research (Nuffield).

Governance of AI Fellowship, University of Oxford:
The Center for the Governance of AI (GovAI) is accepting applicants for three-month research fellowships. They are looking for candidates from a wide range of disciplines, who are interested in pursuing a career in AI governance. GovAI is based at the Future of Humanity Institute, in Oxford, and is one of the leading hubs for AI governance research.
  Read more: Governance of AI Fellowship (FHI).
  Read more: AI Governance: A Research Agenda (FHI).

Tech Tales:

Titles from the essay collection: The Great Transition: Human Society During The Bio-Synthetic Fusion Era

Automate or Be Destroyed: Economic Incentives and Societal Transitions in the 20th and 21st Centuries

Hand Back the Microphone! Human Slam Poetry’s Unpredictable Rise

Jerry Daytime at Night: The Very Private Life of an AI News Anchor

Stag Race Development Dynamics and the AI Safety Incidents in Beijing and Kyoto

‘Blot Out The Sun!’ and Other Fictionalized Anti-Machine Ideas Inherent to 21st Century Fictions

Dreamy Crooners and Husky Hackers: An Investigation Into Machine-Driven Pop

“We Cohabit This Planet We Demand Justice For It” and Other Machine Proclamations and Their Impact

Red Scare? The Unreported Tensions That Drove US-China Competitive Dynamics

This Is A Race and We Must Win It – Political Memoir in the Age of Rapid Technological Acceleration

Things that inspired this story: Indexes and archives as historical artefacts in their own right; the idea that the information compression inherent to essay titles contains a bigger signal than people think.