Import AI: #103: Testing brain-like alternatives to backpropagation, why imagining goals can lead to better robots, and why navigating cities is a useful research avenue for AI

by Jack Clark

Backpropagation may not be brain-like, but at least it works:
…Researchers test more brain-like approaches to learning systems, discover that backpropagation is hard to beat…
Backpropagation is one of the fundamental tools of modern deep learning – it is the key mechanism for propagating error information through networks during training and updating their weights. Unfortunately, there’s relatively little evidence that our own human brains perform a process analogous to backpropagation (a question Geoff Hinton has wrestled with for several years in talks like ‘Can the brain do back-propagation?’). This has long concerned some researchers, who worry that though we’re seeing significant gains from systems based on backpropagation, we may need to investigate other approaches in the future. Now, researchers with Google Brain and the University of Toronto have performed an empirical analysis of a range of fundamental learning algorithms, testing approaches based on backpropagation against ones using target propagation and other variants.
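  To make the contrast concrete, here’s a minimal numpy sketch (illustrative only, not the paper’s code) of the two update rules on a tiny two-layer network; the matrix `V`, standing in for a learned approximate inverse, is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)              # input
y = rng.normal(size=2)              # regression target
W1 = 0.1 * rng.normal(size=(3, 4))  # layer 1 weights
W2 = 0.1 * rng.normal(size=(2, 3))  # layer 2 weights
V = 0.1 * rng.normal(size=(3, 2))   # stand-in for a learned approximate inverse
lr = 0.01

# Forward pass
h1 = np.tanh(W1 @ x)
y_hat = W2 @ h1
err = y_hat - y                     # grad of 0.5*||y_hat - y||^2 w.r.t. y_hat

# Backpropagation: exact gradients via the chain rule. Note the reuse of
# W2.T ("weight transport"), one reason backprop is considered
# biologically implausible.
dW2 = np.outer(err, h1)
dh1 = W2.T @ err
dW1 = np.outer(dh1 * (1 - h1 ** 2), x)

# Target propagation: instead of gradients, propagate a *target* for the
# hidden layer through the (learned) inverse, then train layer 1 locally.
h1_target = h1 - lr * (V @ err)
dW1_tp = np.outer((h1 - h1_target) * (1 - h1 ** 2), x)

W2 -= lr * dW2
W1 -= lr * dW1_tp                   # local update; no access to W2.T needed
```

  The key difference is the feedback pathway: backprop reuses the forward weights, while target propagation learns a separate one – which is precisely what makes scaling it up to ImageNet-sized networks an open question.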
  Motivation: The idea behind this research is that “there is a need for behavioural realism, in addition to physiological realism, when gathering evidence to assess the overall biological realism of a learning algorithm. Given that human beings are able to learn complex tasks that bear little relationship to their evolution, it would appear that the brain possesses a powerful, general-purpose learning algorithm for shaping behavior”.
  Results: The researchers “find that none of the tested algorithms are capable of effectively scaling up to training large networks on ImageNet”, though they record some success with MNIST and CIFAR. “Out-of-the-box application of this class of algorithms does not provide a straightforward solution to real data on even moderately large networks,” they write.
   Why it matters: Given that we know how limited and simplified our neural network systems are, it seems intellectually honest to test and ablate algorithms, particularly by comparing well-studied ‘mainstream’ approaches like backpropagation with more theoretically-grounded but less-developed algorithms from other parts of the literature.
  Read more: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures (Arxiv).

AI and Silent Bugs:
…Half-decade-old bug in ‘Aliens’ game found responsible for poor performance…
One of the more irritating things about developing AI systems is that mis-programmed AI tends to fail silently – for instance, in OpenAI’s Dota project we saw performance dramatically increase simply after fixing non-breaking bugs. Another good example of this phenomenon has turned up in news about Aliens: Colonial Marines, a poorly reviewed half-decade-old game. It turns out some of the reasons for those poor reviews were likely due to a bug – subsequent investigation found that the original game mis-named one variable, which led to entire chunks of the game’s enemy AI systems not functioning.
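  As a hypothetical illustration of this kind of silent failure (the key names below are invented, not the game’s actual variables): a config lookup with a silent default means a one-letter typo can disable a subsystem without any crash or error message:

```python
# Hypothetical sketch of a silently-failing config typo (key names are
# invented for this example, not the game's actual variables).
enemy_config = {"TetherClass": "XenoSoldierAI"}

# The consuming code asks for a misspelled key; dict.get() returns None
# instead of raising, so the behavior is simply never attached.
ai_class = enemy_config.get("TeatherClass")  # typo: 'Teather' vs 'Tether'
if ai_class is not None:
    print(f"spawning enemy with {ai_class}")
else:
    pass  # no crash, no log -- the enemy AI quietly loses this behavior
```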
  Read more: A years-old, one-letter typo led to Aliens: Colonial Marines’ weird AI (Ars Technica).

Berkeley researchers teach machines to dream imaginary goals and solutions for better RL:
…If you want to change the world, first imagine yourself changing it…
Berkeley researchers have developed a way for machines to build richer representations of the world around them and use these to solve tasks. The method they use to achieve this is a technique called ‘reinforcement learning with imagined goals’ (RIG). RIG works like this: an AI system interacts with an environment; data from these observations is used to train (and finetune) a variational autoencoder (VAE) latent variable model; the system then uses the representation learned by the VAE to imagine goals for itself and train against them. This type of approach is becoming increasingly popular as AI researchers try to increase the capabilities of algorithms by getting them to use and learn from more data.
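  Here’s a toy sketch of RIG’s core reward idea (illustrative only – `ToyEncoder` is a stand-in for the trained VAE encoder, and the dimensions are arbitrary): goals are ‘imagined’ by sampling from the latent prior, and reward is negative distance in latent space:

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyEncoder:
    """Stand-in for RIG's trained VAE encoder: a fixed random projection."""
    def __init__(self, obs_dim=8, latent_dim=4):
        self.W = rng.normal(size=(latent_dim, obs_dim)) / np.sqrt(obs_dim)
    def encode(self, obs):
        return self.W @ obs

encoder = ToyEncoder()

# "Imagine" a goal by sampling from the latent prior -- no hand-specified
# goal image is needed, which is what lets the agent set its own tasks.
z_goal = rng.normal(size=4)

def reward(obs, z_goal):
    # Negative latent-space distance: a denser training signal than a
    # sparse "goal reached" indicator computed in pixel space.
    return -np.linalg.norm(encoder.encode(obs) - z_goal)

obs = rng.normal(size=8)   # placeholder for an environment observation
print(reward(obs, z_goal))
```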
  Results: Their approach does well at tasks requiring reaching objects and pushing objects to a goal, beating baselines including algorithms like Hindsight Experience Replay (HER).
  Why it matters: After spending several years training algorithms to master an environment, we’re now trying to train algorithms that can represent their environment, then use that representation as an input to the algorithm to help it solve a new task. This is part of a general push toward greater representative capacity within trained models.
  Read more: Visual Reinforcement Learning with Imagined Goals (Arxiv).

Facebook thinks the path to smarter AI involves guiding other AIs through cities:
…’Talk The Walk’ task challenges AIs to navigate each other through cities, working as a team…
Have you ever tried giving directions to someone over the phone? It can be quite difficult, and usually involves a series of dialogues between you and the other person as you try to figure out where in the city they are in relation to where they need to get to. Now, researchers with Facebook and the Montreal Institute for Learning Algorithms (MILA) have set out to develop and test AIs that can solve this task, so as to further improve the generalization capabilities of AI agents. “For artificial agents to solve this challenging problem, some fundamental architecture designs are missing,” the researchers say.
  The challenge: The new “Talk The Walk” task frames the problem as a discussion between a ‘guide’ and a ‘tourist’ agent. The guide agent has access to a map of the city area that the tourist is in, as well as a location the tourist wants to get to, and the tourist has access to an annotated image of their current location along with the ability to turn left, turn right, or move forward.
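  A hedged sketch of the information asymmetry between the two agents (the class and field names below are placeholders, not Facebook’s actual data format):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class GuideView:
    # Landmark labels per intersection corner; the guide never sees the
    # tourist's true location, only what the tourist says in dialogue.
    map_landmarks: Dict[Tuple[int, int], List[str]]
    target_location: Tuple[int, int]   # where the tourist must end up

@dataclass
class TouristView:
    # The tourist has no map and no coordinates -- only the annotations
    # of the current 360-degree view, plus three movement actions.
    visible_landmarks: List[str]
    actions = ("turn_left", "turn_right", "move_forward")

guide = GuideView(
    map_landmarks={(0, 0): ["bank"], (0, 1): ["coffee shop", "bar"]},
    target_location=(0, 1),
)
tourist = TouristView(visible_landmarks=["coffee shop"])
```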
  The dataset: The researchers created the testing environment by obtaining 360-degree photographic views of neighborhoods in New York City, including Hell’s Kitchen, the East Village, Williamsburg, the Financial District, and the Upper East Side. They then annotated each image of each corner of each street intersection with a set of landmarks drawn from the following categories: bar, bank, shop, coffee shop, theater, playfield, hotel, subway, and restaurant. Finally, they had more than six hundred users of Mechanical Turk play a human version of the game, generating 10,000 successful dialogues from which AI systems can be trained (with over 2,000 successful dialogues available for each neighborhood of New York the researchers gathered data for).
  Results: The researchers tested how well their systems can localize themselves – that is, develop a notion of where they are in the city. The results are encouraging, with localization models developed by the researchers achieving a higher localization score than humans. (Though humans take about half the number of steps to effectively localize themselves, showing that human sample efficiency remains substantially better than that of machines.)
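  As a much-simplified illustration of the localization sub-task, here’s a sketch that scores candidate intersections by landmark overlap; the paper’s models learn localization from dialogue, so the hand-written overlap score here is purely a stand-in:

```python
# Simplified illustration of landmark-based localization (the paper's
# models learn this from dialogue; the overlap score is a stand-in).
map_landmarks = {
    (0, 0): {"bank", "coffee shop"},
    (0, 1): {"bar"},
    (1, 0): {"theater", "restaurant"},
    (1, 1): {"subway", "coffee shop"},
}
observed = {"coffee shop", "subway"}   # what the tourist reports seeing

scores = {loc: len(lm & observed) for loc, lm in map_landmarks.items()}
print(max(scores, key=scores.get))     # -> (1, 1), the best-matching corner
```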
  Why it matters: Following a half decade of successful development and commercialization of basic AI capabilities like image and audio processing, researchers are trying to come up with the next major tasks and datasets they can use to test contemporary algorithms and develop them further. Evaluation methods like those devised here can help us develop AI systems that need to interact with larger amounts of real-world data, potentially making it easier to evaluate how ‘intelligent’ these systems are becoming, since they are being tested directly on problems that humans solve every day and for which we have good intuitions and evidence about difficulty. It’s worth noting, though, that the current version of the task is fairly limited: it involves simple intersections (predominantly four-way, straight-road intersections), and the agents aren’t tested on very large areas or required to navigate particularly long distances.
  Read more: Talk the Walk: Navigating New York City through Grounded Dialogue (Arxiv).

Microsoft calls for government-led regulation of artificial intelligence technology:
…Company’s chief legal officer Brad Smith says government should study and regulate the technology…
Microsoft says the US government should appoint an independent commission to investigate the uses and applications of facial recognition technology. It is calling for this because it thinks the technology is of such utility and generality that it’s better for the government to think about regulation in a general sense than for specific companies like Microsoft to think through these questions on their own. The recommendation follows a series of increasingly fraught run-ins between the government, civil rights groups, and companies over the use of AI: first, Google dealt with employees protesting its ‘Maven’ AI deal with the DoD, then Amazon came under fire from the ACLU for selling law enforcement authorities facial recognition systems based on its ‘Rekognition’ API.
  Specific questions: Some of the specific question areas Smith thinks the government should spend time on include: should law enforcement use of facial recognition be subject to human oversight and control? Is it possible to ensure civilian oversight of this technology? Should retailers post a sign indicating that facial recognition systems are being used in conjunction with surveillance infrastructure?
  Why it matters: Governments will likely be the largest users of AI-based systems for surveillance, facial recognition, and more – but in many countries the government needs the private sector to develop and sell it products with these capabilities, which requires a private sector that is keen to help the government. If that’s not the case, then it puts the government into an awkward position. Governments can clarify some of these relationships in specific areas by, as Microsoft suggests here, appointing an external panel of experts to study an issue and make recommendations.
  A “don’t get too excited” interpretation: Another motivation a company like Microsoft might have for calling for such analysis and regulation is that large companies like Microsoft have the resources to be able to ensure compliance with any such regulations, whereas startups can find this challenging.
  Read more: Facial recognition technology: The need for public regulation and corporate responsibility (Microsoft).

Google opens a Seedbank for wannabe AI gardeners:
…Seedbank provides access to a dynamic, online code encyclopedia for AI systems…
Google has launched Seedbank, a living encyclopedia of AI programming and research. Seedbank is a website that contains a collection of machine learning examples which can be interacted with via a live programming interface in Google ‘Colab’. You can browse ‘seeds’, which are major AI topic areas like ‘Recurrent Nets’ or ‘Text & Language’, then click into them for specific examples; for instance, when browsing ‘Recurrent Nets’ you can learn about Neural Translation with Attention and can open a live notebook that walks you through the steps involved in creating a language translation system.
  “For now we are only tracking notebooks published by Google, though we may index user-created content in the future. We will do our best to update Seedbank regularly, though also be sure to check TensorFlow.org for new content,” writes Michael Tyka in a blog post announcing Seedbank.
  Why it matters: AI research and development is heavily based around repeated cycles of empirical experimentation, so being able to interact with and tweak live programming examples of applied AI systems is a good way to develop better intuitions about the technology.
  Read more: Seedbank – discover machine learning examples (TensorFlow Medium blog).
  Read more: Seedbank official website.

AI Policy with Matthew van der Merwe:
…Reader Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net…

Cross-border collaboration, openness, and dual-use:
…A new report urges better oversight of international partnerships on AI, to ensure that collaborations are not being exploited for military uses…
The Australian Strategic Policy Institute has published a report by Elsa Kania outlining some of the dual-use challenges inherent to today’s scalable, generic AI techniques.
  Dual-use as a strategy: China’s military-civil fusion strategy relies on using the dual-use characteristics of AI to ensure new civil developments can be applied in the military domain, and vice versa. There are many cases of private labs and universities working on military tech, e.g. the collaboration between Baidu and CETC (state-owned defence conglomerate). This blurring of the line between state/military and civilian research introduces a complication into partnerships between (e.g.) US companies and their Chinese counterparts.
  Policy recommendations: Organizations should assess the risks and possible externalities from existing partnerships in strategic technologies, establish systems of best practice for partnerships, and monitor individuals and organizations with clear links to foreign governments and militaries.
  Why this matters: Collaboration and openness are a key driver of innovation in science. In the case of AI, international cooperation will be critical in ensuring that we manage the risks and realize the opportunities of this technology. Nevertheless, it seems wise to develop systems to ensure that collaboration is done responsibly and with an awareness of risks.
  Read more: Technological entanglement.

Around the world in 23 AI strategies:
Tim Dutton has summarized the various national AI strategies governments have put forward in the past two years.
  Observations:
– AlphaGo really was a Sputnik moment in Asia. Two days after AlphaGo defeated Lee Sedol in 2016, South Korea’s president announced ₩1 trillion ($880m) in funding for AI research, adding “Korean society is ironically lucky, that thanks to the ‘AlphaGo shock’, we have learned the importance of AI before it is too late.”
– Canada’s strategy is the most heavily focused on investing in AI research and talent. Unlike other countries, their plan doesn’t include the usual policies on strategic industries, workforce development, and privacy issues.
– India is unique in putting social goals at the forefront of their strategy, and focusing on the sectors which would see the biggest social benefits from AI applications. Their ambition is to then scale these solutions to other developing countries.
   Why this matters: 2018 has seen a surge of countries putting forward national AI strategies, and this looks set to continue. The range of approaches is striking, even between fairly similar countries, and it will be interesting to see how these compare as they are refined and implemented in the coming years. The US is notably absent in terms of having a national strategy.
   Read more: Overview of National AI Strategies.

Risks and regulation in medical AI:
Healthcare is an area where cutting-edge AI tools such as deep learning are already having a real positive impact. There is some tension, though, between the cultures of “do no harm” and “move fast and break things”.
  We are at a tipping point: We have reached a ‘tipping point’ in medical AI, with systems already on the market that are making decisions about patients’ treatment. This is not worrying in itself, provided these systems are safe. What is worrying is that there are already examples of autonomous systems making potentially dangerous mistakes. The UK is using an AI-powered triage app, which recommends whether patients should go to hospital based on their symptoms. Doctors have noticed serious flaws, with the app appearing to recommend staying at home for classic symptoms of heart attacks, meningitis and strokes.
  Regulation is slow to adapt: Regulatory bodies are not taking seriously the specific risks from autonomous decision-making in medicine. By treating these systems like medical devices, they are allowing them to be used on patients without a thorough assessment of their risks and benefits. Regulators need to move fast, yet give proper oversight to these technologies.
  Why this matters: Improving healthcare is one of the most exciting, and potentially transformative applications of AI. Nonetheless, it is critical that the deployment of AI in healthcare is done responsibly, using the established mechanisms for testing and regulating new medical treatments. Serious accidents can prompt powerful public backlashes against technologies (e.g. nuclear phase-outs in Japan and Europe post-Fukushima). If we are optimistic about the potential healthcare applications of AI, ensuring that this technology is developed and applied safely is critical in ensuring that these benefits can be realized.
  Read more: Medical AI Safety: We have a problem.

OpenAI & ImportAI Bits & Pieces:

Better generative models with Glow:
We’ve released Glow, a generative model that uses a 1×1 reversible convolution to give it a richer representative capacity. Check out the online visualization tool to experiment with a pre-trained Glow model yourself, applying it to images you upload.
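   As a rough sketch of the core building block (illustrative numpy, not OpenAI’s implementation): a learned c×c matrix applied at every spatial position is exactly invertible, and its log-determinant contribution to the flow’s log-likelihood is cheap to compute:

```python
import numpy as np

rng = np.random.default_rng(0)
c, h, w = 3, 4, 4
x = rng.normal(size=(c, h, w))
W = np.linalg.qr(rng.normal(size=(c, c)))[0]  # init as a rotation (invertible)

def conv1x1(x, W):
    # Mix channels with the same c x c matrix at every spatial position.
    return np.einsum("ij,jhw->ihw", W, x)

z = conv1x1(x, W)

# The change-of-variables log-likelihood term is just h*w*log|det W|.
logdet = h * w * np.log(abs(np.linalg.det(W)))

# Invertibility: recover x exactly by applying W^{-1}.
x_rec = conv1x1(z, np.linalg.inv(W))
assert np.allclose(x, x_rec)
print(logdet)   # ~0 at the rotation init
```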
   Read more: Glow: Better Reversible Generative Models (OpenAI Blog).

AI, misuse, and DensePose:
IEEE Spectrum has written up some comments from here in Import AI about Facebook’s ‘DensePose’ system and the challenges it presents for how AI systems can potentially be misused and abused. As I’ve said in a few forums, I think the AI community isn’t really working hard on this problem and is creating unnecessary problems (see also: voice cloning via Lyrebird, faking politicians via ‘Deep Video Portraits’, surveilling crowds with drones, etc).
  Read more: Facebook’s DensePose Tech Raises Concerns About Potential Misuse (IEEE Spectrum).

Tech Tales:

Ad Agency Contracts for a Superintelligence:

Subject: Seeking agency for AI Superintelligence contract.
Creative Brief: Company REDACTED has successfully created the first “AI Superintelligence” and is planning a global, multi-channel, PR campaign to introduce the “AI Superintelligence” (henceforth known as ‘the AI’) to a global audience. We’re looking for pitches from experienced agencies with unconventional ideas about how to tell this story. This will become the best-known media campaign in history.

We’re looking for agencies that can help us create brand awareness equivalent to other major events, such as: the second coming of Jesus Christ, the industrial revolution, the declaration of World War 1 and World War 2, the announcement of the Hiroshima bomb, and more.

Re: Subject: Seeking agency for AI Superintelligence contract.
Three words: Global. Cancer. Cure. Let’s start using the AI to cure cancer around the world. We’ll originally present these cures as random miracles and over the course of several weeks will build narrative momentum and impetus until ‘the big reveal’. Alongside revealing the AI we’ll also release a fully timetabled plan for a global rollout of cures for all cancers for all people. We’re confident this will establish the AI as a positive force for humanity while creating the requisite excitement and ‘curiosity gap’ necessary for a good launch.

Re: Subject: Seeking agency for AI Superintelligence contract.
Vote Everything. Here’s how it works: We’ll start an online poll asking people to vote on a simple question of global import, like, which would you rather do: Make all aeroplanes ten percent more fuel efficient, or reduce methane emissions by all cattle? We’ll make the AI fulfill the winning vote. If we do enough of these polls in enough areas then people will start to correlate the results of the polls with larger changes in the world. As this happens, online media will start to speculate more about the AI system in question. We’ll be able to use this interest to drive attention to further polls to have it do further things. The final vote before we reveal it will be asking people what date they want to find out who is ‘the force behind the polls’.

Re: Subject: Seeking agency for AI Superintelligence contract.
Destroy Pluto. Stay with us. Destroy Pluto AND use the mass of Pluto to construct a set of space stations, solar panels, and water extractors throughout the solar system. We can use the AI to develop new propulsion methods and materials which can then be used to create an expedition to destroy the planet. Initially it will be noticed by astronomers. We expect early media narratives to assume that Pluto has been destroyed by aliens who will then harvest the planet and use it to build strange machines to bring havoc to the solar system. Shortly before martial law is declared we can make an announcement via the UN that we used the intelligence to destroy Pluto, at which point every person on Earth will be given a ‘space bond’ which entitles them to a percentage of future earnings of the space-based infrastructure developed by the AI.

Things that inspired this story: Advertising agencies, the somewhat un-discussed question of “what do we do if superintelligence arrives”, historical moments of great significance.