Import AI: #100: Turning 2D people into 3D puppets with DensePose, researchers trawl for bias in language AI systems, and Baidu engineers a self-building AI system

by Jack Clark

Researchers examine bias in trained language systems:
…Further analysis shows further bias (what else did you expect?)…
When we’re talking about bias within language AI systems, what do we mean? Typically, we’re describing how an AI system has developed a conceptual representation that is somehow problematic.
For instance, trained language models can frequently express different meanings when pairing a specific gender with an (ideally neutral) term like a type of work. This leads to situations where systems display coupled associations, like man:profession :: woman:homemaker.
Another example is where systems trained on biased datasets display offensive quirks, like language models trained on tabloids associating names of people of color with “refugee” and “criminal” disproportionately relative to other names.
These biases tend to emerge from the data the machine is trained on, so if you train a language model exclusively on tabloid news articles it is fairly likely the model will display the biases of that particular editorial position (a US example might be ending up associating anything related to the concept of an immigrant with negative terms).
De-biasing trained models:
Researchers have recently developed techniques to “de-bias” trained AI systems, removing some of the problematic associations according to the perspective of the operator (for instance: a government trying to ensure fair and equitable access to a publicly deployed AI service).
Further analysis: The problems run deep: 
Researchers with the University of Bristol have now further analyzed the relationships between words and biases in trained systems by introducing a new, large dataset of words and attribute words that describe them and examining this for bias with a finer-toothed comb.
  Results: A study of European-American and African-American names for bias showed that “European-American names are more associated with positive emotions than their African-American counterparts”, and noted that when analyzing school subjects the researchers detect a stronger association between the male “he” and subjects like math and science. They performed the same study on occupations and found a high correlation between the male gender and occupations like ‘coach, executive, surveyor’, while for females top occupations included ‘therapist, bartender, psychologist’. They also show how to use algorithms to reduce bias, by figuring out the projection in space that is linked to bias and devising reformulations that reduce this bias by altering the projection of the AI embedding.
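The projection-based debiasing described here can be sketched in a few lines: estimate a bias direction from definitional word pairs, then subtract each vector’s component along that direction. The toy vectors and word list below are illustrative stand-ins, not the paper’s dataset or method in full.

```python
import numpy as np

# Toy word vectors (random, purely for illustration).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8) for w in ["he", "she", "doctor", "nurse"]}

def bias_direction(pairs, vecs):
    # Estimate a bias axis as the mean difference of definitional pairs.
    diffs = [vecs[a] - vecs[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(vec, direction):
    # Remove the component of `vec` lying along the bias direction.
    return vec - np.dot(vec, direction) * direction

g = bias_direction([("he", "she")], vocab)
debiased = debias(vocab["doctor"], g)
# After projection, the debiased vector is orthogonal to the bias axis.
```

In practice the bias direction is estimated from many definitional pairs (he/she, man/woman, etc.), and only ideally-neutral words are projected.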
Read more: Biased Embeddings from Wild Data: Measuring, Understanding and Removing (Arxiv).

Cartesian Genetic Programming VS Reinforcement Learning:
…Another datapoint to help us understand how RL compares to other methods…
One of the weirder trends in recent AI research has been the discovery, via experimentation, of how many techniques can obtain performance competitive with deep learning-based approaches. This has already happened in areas like image analysis (where evolved image classifiers narrowly beat the capabilities of ones discovered through traditional reinforcement learning, Import AI #81), and in RL (where work by OpenAI showed that Evolution Strategies work on par with deep RL approaches), among other cases.
Now researchers with the University of Toulouse and University of York have shown that techniques derived from Cartesian Genetic Programming (CGP) can obtain results roughly on par with other state-of-the-art deep RL techniques.
  Strange strategies: CGP programs work by interfacing with an environment and evolving repeated successions of different combinations of programs, tweaking themselves as they go to try to ‘evolve’ towards obtaining higher scores. This means, like most AI systems, they can develop strange behaviors that solve the task while seeming imbued with a kind of literal/inhuman logic. In Kung-Fu Master, for example, CGP finds an optimal sequence of moves to use to obtain high scores, and in the case of a game called Centipede early CGP programs sometimes evolve a desire to just stay in the bottom left of the screen (as there are fewer enemies there).
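A minimal version of the CGP mutate-and-select loop can be sketched as follows. The function set, genome layout, and toy regression target are my own illustrative choices; the paper evolves graph programs over Atari pixel inputs, which is far more involved.

```python
import random
import operator

# Minimal Cartesian Genetic Programming sketch (illustrative only).
FUNCS = [operator.add, operator.sub, operator.mul, lambda a, b: a]
N_INPUTS, N_NODES = 2, 6

def random_genome():
    # Each node: (function index, input a, input b); a node may read the
    # program inputs or the output of any earlier node.
    return [(random.randrange(len(FUNCS)),
             random.randrange(N_INPUTS + i),
             random.randrange(N_INPUTS + i)) for i in range(N_NODES)]

def evaluate(genome, inputs):
    values = list(inputs)
    for f, a, b in genome:
        values.append(FUNCS[f](values[a], values[b]))
    return values[-1]  # last node is the program output

def fitness(genome):
    # Toy target: f(x, y) = x*x + y. Higher (less negative) is better.
    cases = [(x, y) for x in range(-2, 3) for y in range(-2, 3)]
    return -sum(abs(evaluate(genome, (x, y)) - (x * x + y)) for x, y in cases)

def mutate(genome):
    # Rewrite one node at random.
    child = list(genome)
    i = random.randrange(N_NODES)
    child[i] = (random.randrange(len(FUNCS)),
                random.randrange(N_INPUTS + i),
                random.randrange(N_INPUTS + i))
    return child

# (1 + 4) evolution strategy: keep the best of parent and offspring.
random.seed(0)
parent = random_genome()
f0 = fitness(parent)
for _ in range(500):
    children = [mutate(parent) for _ in range(4)]
    parent = max([parent] + children, key=fitness)
```

Because the parent always competes against its own offspring, fitness can never decrease over generations, which is what makes the strange-but-effective strategies described above stick around once discovered.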
  Results: CGP methods obtain competitive scores on Atari, when compared to methods based around other evolutionary approaches like HyperNEAT, as well as deep learning techniques like A3C, Dueling Networks, and Prioritized Experience Replay. But I wouldn’t condition too heavily on these baselines – we don’t see comparisons with newer, more successful methods like Rainbow or PPO, and the programs display some unfortunate tendencies.
  Read more: Evolving simple programs for playing Atari games (Arxiv).

Ever wanted to turn 2D images of people into 3D puppets? Enter DensePose!
…Large-scale dataset and pre-trained model have significant potential for utility (and also for abuse)…
Facebook has released DensePose, a system the company built that extracts a 3D mesh model of a human body from 2D RGB images. The company is also releasing the underlying dataset it trained DensePose on, called DensePose-COCO. This dataset provides image-to-surface correspondences annotated on 50,000 persons from the COCO dataset.
  Omni-use: DensePose, like many of the AI systems currently being developed and released by the private sector, has the potential for progressive and abusive uses. I could imagine, for instance, aid groups like Unicef or Doctors without Borders using it to better map and understand patterns of conflict from imagery. But I could also imagine it being re-purposed for invasive digital surveillance purposes (as I wrote in Import AI #80). It would be nice to see Facebook discuss the potential abuses of this technology as well as areas where it can be used fruitfully, and try to tackle some of its more obvious implications in a public manner.
  Read more: Facebook open sources DensePose (Facebook Research blog).
  Get the code: DensePose (GitHub).

Researchers add a little structure to build physics-driven prediction systems:
…Another step in the age-old quest to get computers to learn that “what goes up must come down”…
Stanford and MIT researchers have tried to solve one long-standing problem in AI – making accurate physics-driven predictions about the world merely by observing it. Their approach involves the creation of a “Hierarchical Relation Network” which works by decomposing inputs, like images of scenes, into a hand-designed toy physics model in which individual objects are broken down into particles of varying sizes and resolutions. These particles are then represented in a graph structure so that it’s possible to learn to perform physics calculations on them and use this to make better predictions.
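As an intuition pump, here’s a tiny particle-graph physics step in the same spirit: objects become particles, and a pairwise relation function updates each particle’s state before positions are integrated. In the paper that relation function is learned; here it’s a hand-written spring force, and all constants and shapes are invented for illustration.

```python
import numpy as np

def step(pos, vel, dt=0.01, rest_len=1.0, stiffness=10.0, gravity=-9.8):
    """One Euler step over a fully connected particle graph."""
    n = len(pos)
    acc = np.zeros_like(pos)
    acc[:, 1] += gravity  # constant downward force on every particle
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            # Spring-like pairwise relation pulls particles toward rest length.
            acc[i] += stiffness * (dist - rest_len) * d / (dist + 1e-8)
    vel = vel + dt * acc
    return pos + dt * vel, vel

# Three particles forming a toy "object", falling under gravity.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
vel = np.zeros_like(pos)
for _ in range(10):
    pos, vel = step(pos, vel)
```

Because the pairwise forces are equal and opposite, the object’s center of mass falls under gravity while the springs preserve its rough shape – the “preservation” property the paper measures.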
  Results: The researchers test their approach by evaluating its effectiveness at predicting how different objects will bounce and move around a high-fidelity physics simulation written in FleX within Unity. Their approach achieves the lowest position error when tested against other systems, and only a slightly higher preservation error.
  Why it matters: Being able to model the underlying physics of the world is an important milestone in AI research and we’re currently living in an era where researchers are exploring hybrid methods, trying to fuse as much learning machinery as possible with structured representations, like structuring problems as graphs to be computed over. This research also aligns with recent work from DeepMind (Import AI: #98) which explores the use of graph-encodings to increase the range of things learned AI systems can understand.
  Read more: Flexible Neural Representation for Physics Prediction (Arxiv).
  Watch video: Hierarchical Particle Graph-Based Physics Prediction (YouTube).
  Things that make you go hmmm: This research reminds me of the Greg Egan story ‘Crystal Nights’ in which a mercurial billionaire creates a large-scale AI system but, due to computational limits, can’t fully simulate atoms and electrons so instead implements a basic particle-driven physics substrate which he evolves creatures within. Reality is starting to converge with odd bits of fiction.
  Read the Greg Egan sci-fi short story ‘Crystal Nights’ here.

Baidu researchers design mix&match neural architecture search:
…Want to pay computers to do your AI research for you? Baidu has you covered…
Most neural architecture search approaches tend to be very expensive in terms of the amount of compute needed, which has made it difficult for researchers with fewer resources to use the technology. That has been changing in the past year via research like SMASH, Efficient Neural Architecture Search (ENAS), and other techniques.
   Now researchers with Baidu have published details about the “Resource-Efficient Neural Architect” (RENA), a system they use to design custom neural network architectures which can be modified to optimize for different constraints, like the size of the neural network model, its computational complexity, or the compute intensity.
  How it works: RENA consists of a policy network to generate actions which define the neural network architecture, and an environment in which to evaluate and assess the created neural network. The policy network modifies an existing network by altering its parameters or by inserting or removing network layers. “Rather than building the target network from scratch, modifications via these operations allow more sample-efficient search with a simpler architecture. The search can start with any baseline models, a well-designed or even a rudimentary one.” RENA performs a variety of different search functions at different levels of abstraction, ranging from searching for specific modules to create and stack to compose a network, down to individual layers which can be tweaked.
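A toy version of this modify-rather-than-rebuild search loop might look like the following. The action set (insert/remove/scale a layer), the proxy score, and the parameter budget are all invented for illustration – RENA uses a learned policy network rather than random mutation, and real validation accuracy rather than a proxy.

```python
import random

def param_count(layers, n_in=32, n_out=10):
    # Parameters of a dense network described by a list of hidden widths.
    sizes = [n_in] + layers + [n_out]
    return sum(a * b for a, b in zip(sizes, sizes[1:]))

def proxy_score(layers):
    # Stand-in for validation accuracy: rewards width up to a cap,
    # with a mild penalty for depth.
    return sum(min(w, 128) for w in layers) / (1 + len(layers))

def mutate(layers):
    # One modification action: insert, remove, or rescale a layer.
    action = random.choice(["insert", "remove", "scale"])
    child = list(layers)
    if action == "insert":
        child.insert(random.randrange(len(child) + 1), random.choice([32, 64, 128]))
    elif action == "remove" and len(child) > 1:
        child.pop(random.randrange(len(child)))
    else:
        i = random.randrange(len(child))
        child[i] = max(8, child[i] * random.choice([1, 2]) // random.choice([1, 2]))
    return child

random.seed(0)
budget = 50_000  # max parameter count allowed
best = [64]      # rudimentary baseline model, as the paper suggests is enough
for _ in range(200):
    cand = mutate(best)
    if param_count(cand) <= budget and proxy_score(cand) > proxy_score(best):
        best = cand
```

The key idea this preserves is that the search edits a working baseline under an explicit resource constraint, so every intermediate candidate is a valid, budget-respecting network.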
  Results: The researchers show that RENA can iteratively improve the performance of an existing network on challenging image datasets like CIFAR. In one case, an initial network with accuracy of roughly 91% is improved by RENA to an accuracy of 95%. In another case, RENA is shown to be able to create well-performing models that satisfy other compute resource constraints. They further demonstrate the generality of the approach by evaluating it on a keyword spotting (KWS) task, where it performs reasonably well but with less convincing results than on CIFAR.
  Why it matters: In the future, many AI researchers are going to seek to automate larger and larger chunks of their jobs; today that involves offloading the tedious job of hyperparameter checking to large-scale grid-search sweeps, and tomorrow it will likely be about automating and optimizing the construction of networks to solve specific tasks, while researchers work on inventing new fundamental components.
 Read more: Resource-Efficient Neural Architect (Arxiv).

AI Nationalism:
…Why AI is the ultimate strategic lever of the 21st century…
The generality, broad utility, and omni-use nature of today’s AI techniques means “machine learning will drive the emergence of a new kind of geopolitics”, according to Ian Hogarth, co-founder of Songkick.
  Why it matters: I think it’s notable that we’re starting to have these sorts of discussions and ideas bubble up within the broad AI community. It suggests to me that the type of discourse we’re having about AI is set to change as people become more aware of the intrinsically political components and effects of the technology. My expectation is many governments are going to embark on some form of ‘AI nationalism’.
Read more: AI Nationalism (Ian Hogarth’s website).

AI Policy with Matthew van der Merwe:
…Reader Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: …

Tech giants under pressure from employees, shareholders over collaboration with government agencies:
In the midst of the uproar around US immigration policies, Amazon and Microsoft have come under fire from stakeholder groups raising concerns over AI-enabled face recognition software being sold to immigration and law enforcement agencies.
  Microsoft employees protest ICE collaboration – Over 100 employees signed a letter protesting the company’s collaboration with ICE. Microsoft announced the $19.4m contract for its Azure cloud platform in January, which would “utilize deep learning to accelerate face recognition and identification” for the agency.
  What they want – The letter demands that the company halt any involvement with ICE, draft a policy guaranteeing they will not work with clients who violate international human rights law, and commit to review of any contracts with government agencies.
  Amazon under pressure from shareholders… – Following the ACLU’s concerns over the deployment of Amazon’s face recognition software by US law enforcement, a group of shareholders has delivered a letter to the company. The letter asks that the company “immediately halt the expansion, further development, and marketing of Rekognition, and any other surveillance technologies, to all government agencies” until appropriate guidelines and policies are put in place.
…and employees – A letter from employees reiterates shareholders’ concerns, and goes further, demanding that Amazon cease providing cloud-based services to any partners working with ICE, specifically naming the data analytics company Palantir.
  Why this matters: Following the apparent success of employee-led action at Google over Project Maven, stakeholder groups are mobilizing more readily around ethical issues. While this latest bout of activity was catalysed by the immigration scandal, the letters make broader demands about the way the companies develop and sell surveillance technologies. If Amazon, Microsoft follow Google’s example in drawing up clear ethical guidelines for AI, employee campaigns will have played a leading role in changing the industry in just a few months.
  Read more: Microsoft employee letter.
  Read more:  Amazon shareholder letter.
  Read more: Amazon employee letter (via Gizmodo).

South Korean university at center of ‘killer robots’ controversy launches AI ethics committee:
KAIST launched a new ethics committee this week. This comes after controversy earlier this year over the university’s joint project with arms manufacturer Hanwha. The collaboration raised fears the university was contributing to research on autonomous weapons, and prompted anger from academics, culminating in a letter signed by 50 AI experts from 30 countries calling for a boycott of the university. The boycott was subsequently called off following assurances the university would not engage in development of lethal autonomous weapons, and a pledge that it would not conduct any research “counter to human dignity”. The academic who organized the boycott, Prof Toby Walsh, gave the keynote speech at the launch event for the committee.
  Why this matters: This represents another win for grassroots mobilization against lethal autonomous weapons, after Google’s response to the Project Maven controversy. In this case, KAIST has gone further than simply withdrawing from AI weapons research, and is actively engaging in the debate around these technologies, and AI more broadly.
  Read more: The original boycott letter.
  Read more: KAIST launches ethics subcommittee on AI.

OpenAI Bits&Pieces:

Results of the OpenAI Retro Contest:
We’ve released the results of our Retro Contest, which had contestants compete to create an algorithm that could learn quickly and generalize well enough to master Sonic levels held out of the training set. One notable thing about the results is that most of the top submissions use variants of Rainbow or PPO, two recent RL algorithms from DeepMind and OpenAI respectively. Additionally, two of the three winners are Chinese teams, with the top team hailing from the search department of Alibaba (congratulations to all winners!).
  Read more: Retro Contest: Results (OpenAI Blog).

Generative Adversarially-deployed Politicians (GAPs) – how serious is the threat?
When will someone use AI techniques to create a fake video of a real politician saying something political? That’s the gist of a bet I’ve made, with a cocktail as the wager. You can read more about it in IEEE Spectrum. My belief is that at some point people are going to make fake AI images in the same way they make memes today, and at that point the whole online information space might change/corrode in a noticeable and (when viewed over the course of years) quite abrupt manner.
  Read more: Experts bet on first Deepfakes political scandal (IEEE Spectrum).

Tech Tales:

The Castle of the Curious.

There was a lonely, old person who lived in a castle. Their partner had died when they were 60 years old and since then they had mostly been alone. They had entered into the not-quite-dead twilight of billionaires from that era and, at 90, had a relatively young metabolism in an old body with a brain weighed down by memory. But they had plans.

They had food delivered to them by drone. They pumped water from a large, privately owned aquifer. For exercise, they walked around the expansive, bare grounds of the estate, keeping far from the perimeter, which was staffed with guards and rarely visited by anyone aside from dignitaries from various countries; the person sometimes entertained these visitors and other times turned them away. The person had a foundation which directed their vast wealth and they were able to operate it remotely.

No one can tolerate loneliness, even if they have willed it upon themselves. So one year the person attached numerous microphones to their property and acquired the technology to develop a personal AI system. The next year, they fused the two, letting them walk through a castle and estate that they could talk to. For a time they became less lonely, able to schedule meetings with a convincing voice interface, and able to play verbal games with the AI, like riddles, or debates, or competitions at who could tell certain stories in certain literary styles. They’d walk into a library and ask what the last book they read was, and even if the date had been a decade prior the AI knew, and could help them pick up where they left off. But after a few years they tired of these interactions, finding that the AI could never become more than an extraordinarily chatty but utterly dedicated butler.

The person spent several months walking around their castle, lingering in offices and libraries; they scrawled notes and then built basic computer models and then devised spreadsheets and when they had a clear enough idea they handed the information to their foundation, which made their dreams come true. Now, the house was fragmented into several different AI systems. Each system had access to a subset of the sensors available in the house. To become more efficient at accomplishing their tasks, each system would need to periodically access the sensory inputs of other AI systems in the house. The person made it possible for the AIs to trade with each other, but with a couple of conditions: they had to make their cases for accessing another AI’s sensory input via verbal debate, which the person could listen to, and the person would play the role of judge, ultimately choosing to authorize or deny a request based on their perception of the strength of the arguments. For a while, this entertained the person as well, and they grew more fascinated with the AIs the longer they judged their increasingly obscure debates. Eventually, though, they tired of this, finding a sense of purposelessness about the exercise.

So they relaxed some constraints and changed the game. Now, the AIs could negotiate with each other and have their outputs judged by another AI, which was meant to mimic the preferences of the person but also had a self-learning capability of its own. The two debaters would need to agree to nominate a single AI as judge, and this process itself was a debate judged by a jury of three other AIs, selected for having gone the longest without interacting with the AIs in the debate. And all these ornate, interlocking debates were mandated to be conducted verbally, so the person could listen in. This entertained them for many years as they listened to the AIs negotiate and bribe and jibe with each other, their numbers always growing as new systems were added or existing ones fractured into constituent parts.

Now, the estate is full of noise, and the gates are busy: the person has found a new role in life, which comes down to selecting new inputs for the AIs to loudly argue over. As they walk their estate they gaze at specific trees or rocks and wonder: how might the AIs bargain with each other for a camera feed from this tree at this hour of dappled sunlight? Or how might the AIs argue over who gets the prize of accessing the earthquake sensor, so they can listen to and learn the movements of the earth? In this way the person found a new purpose in life: a demi-god among argumentative machines, forever kept close to the world by the knowledge that they could use it to have other creatures jostle with each other.

Things that inspired this story: The Debate Game, Google Home/Siri/Cortana, NLP, unsupervised learning.