Import AI: #92: Google and Fast.ai distinguish themselves on DAWNBench, UK mulls a national AI strategy, and generating Mario and Doom levels with GANs.

by Jack Clark

Good facial recognition performance on a tiny parameter budget:
Chinese researchers further compress specialized facial recognition networks…
Chinese researchers have published details on a lightweight facial recognition network they call MobileFaceNet. The network obtains accuracy of up to 99.28% on the Labeled Faces in the Wild (LFW) dataset and 93.05% on the AgeDB dataset, while using around a million parameters and taking 24ms to execute on a Qualcomm Snapdragon 820 CPU. This compares to accuracies of 98.70% and 89.27% for ShuffleNet, which has more parameters and takes marginally longer to execute on the same CPU. One tweak the MobileFaceNet creators make is to replace the global average pooling layer in the CNN with a global depthwise convolution layer (sketched below), which improves performance on facial recognition.
  Why it matters: As developers refine models to maximize performance within smaller compute envelopes, it will become easier to deploy AI-based classification systems more widely into the world.
  Read more: MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices (Arxiv).
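  Code sketch: A minimal sketch (PyTorch) of the pooling swap described above; the 512-channel, 7x7 feature-map shape is an assumption for illustration, not a number taken from the paper:

```python
import torch
import torch.nn as nn

channels, spatial = 512, 7  # assumed final feature-map shape: 512 x 7 x 7

# Standard head: global average pooling weights every spatial position equally.
gap = nn.AdaptiveAvgPool2d(1)

# MobileFaceNet-style head: a depthwise convolution whose kernel spans the
# whole feature map, so the network learns per-position, per-channel weights.
gdconv = nn.Conv2d(channels, channels, kernel_size=spatial,
                   groups=channels, bias=False)

x = torch.randn(1, channels, spatial, spatial)
print(gap(x).shape)     # torch.Size([1, 512, 1, 1])
print(gdconv(x).shape)  # torch.Size([1, 512, 1, 1]) -- same shape, learned weights
```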

UK House of Lords recommends a national AI strategy:
Recommendations include: measurement and assessment of AI, categorizing healthcare data as a national asset, and working with other countries on developing norms and ethics for AI…
The United Kingdom’s House of Lords Select Committee on Artificial Intelligence has released its report on the UK’s AI strategy. The almost two-hundred-page report, AI in the UK: ready, willing and able?, covers issues ranging from how to design and develop AI to how to work and engage with it.
  Main recommendations: The report makes several robust and specific recommendations, including the following (among others):
– The government should underwrite, and where necessary replace, funding for European research and innovation programmes after the UK decouples from the European Union via Brexit.
– Government should continue to support a variety of long-term AI research initiatives, to hedge against deep learning progress plateauing.
– Public procurement regulations should be amended to make it easier for small- and medium-sized AI companies to sell to the government.
– Government should create its own AI challenges and competitions and highlight these via a public bulletin board, to catalyze development.
– Government should proactively analyze and assess the evolution of AI in the UK, to help it prepare for disruptions to the labor market.
– The UK’s vast amount of medical data, centralized within the National Health Service, “could be considered a unique source of value for the nation”.
– Government should explore whether existing legislation addresses the legal liability issues of AI, to prepare for increasingly autonomous systems.
– The UK government should convene a “global summit” in London by the end of 2019 to begin development of a common framework for the ethical development and deployment of AI.
  An AI code: The report also suggests developing a specific set of principles to guide how the UK’s AI community approaches the technology. These principles are:
– Artificial intelligence should be developed for the common good and benefit of humanity.
– Artificial intelligence should operate on principles of intelligibility and fairness.
– Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
– All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
– The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
   Read more: UK can lead the way on ethical AI, says Lords Committee (summary).
   Read more: Full report: AI in the UK: ready, willing and able? (PDF).
   Read more: Submitted written evidence: AI in the UK: ready, willing and able? (PDF).

Speculative benchmarks for deep learning: SQUISHY FACES:
…MIT study shows how good people are at recognizing distorted facial features…
A new MIT study shows that people can recognize faces even when they’ve been dramatically compressed vertically or horizontally, suggesting our internal object recognition systems are very robust. In the study, the researchers discover we do well when faces are uniformly squashed, but struggle when different parts are scaled out of relation to each other, such as rescaling the eyes, nose, and mouth while keeping the overall face the same size. I wonder whether we could eventually test the robustness of classifiers by evaluating them on test sets that contain such distortions (a sketch of what that might look like appears below)?
  Read more: We’re Good At Recognizing Distorted Faces (Discover Magazine).
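  Code sketch: One way such a test could work, using PIL; the image file, model, and evaluation call are hypothetical stand-ins for whatever classifier you want to probe:

```python
from PIL import Image

def squash(img: Image.Image, x_scale: float, y_scale: float) -> Image.Image:
    """Compress an image by independent horizontal/vertical factors, then
    paste it centered on a blank canvas so the input dimensions stay fixed."""
    w, h = img.size
    small = img.resize((max(1, int(w * x_scale)), max(1, int(h * y_scale))))
    canvas = Image.new(img.mode, (w, h))
    canvas.paste(small, ((w - small.width) // 2, (h - small.height) // 2))
    return canvas

img = Image.open("face.jpg")                # any face image (hypothetical file)
for factor in (1.0, 0.75, 0.5, 0.25):       # increasingly severe squashing
    distorted = squash(img, factor, 1.0)    # horizontal compression only
    # accuracy[factor] = evaluate(model, distorted, label)  # hypothetical
```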

New DAWNBench results highlight power of new processor architectures:
…TPUs rule everything around me…
New results from the Stanford-led AI benchmarking project DAWNBench show how custom chips may let AI researchers cut the time and cost of their experiments. Results from Google show that systems using 32 “Tensor Processing Unit” (TPU) chips can train ImageNet to 93% accuracy in as little as 30 minutes. TPUs may also be cheaper than other chips, with Google showing it can train ImageNet to 93% accuracy for $49.30 of cloud compute.
  Encouraging: The leaderboard isn’t just about giant tech companies: kudos to Fast.ai, which took third place in training cost ($72.53 to reach 93% on ImageNet with eight NVIDIA V100 GPUs) and fourth place in training time (2:57:49 on the same system); a back-of-envelope check on how such cost figures arise appears below.
  Check out more of the DAWNBench results here.
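  Code sketch: The cost figures fall out of wall-clock training time multiplied by the hourly instance price. The $24.48/hour rate below is an assumption (roughly the then-current on-demand price of an 8x-V100 AWS p3.16xlarge), not a number from the benchmark:

```python
def training_cost(hours: int, minutes: int, seconds: int,
                  hourly_rate: float) -> float:
    """Wall-clock training time x hourly instance price."""
    wall_clock = hours + minutes / 60 + seconds / 3600
    return wall_clock * hourly_rate

# Fast.ai's 2:57:49 run at an assumed $24.48/hour:
print(round(training_cost(2, 57, 49, 24.48), 2))  # ~72.55, near the $72.53 reported
```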

AI luminaries call for the creation of a European AI megalab:
ELLIS lab to battle brain drain via large salaries, significant autonomy, and multi-country and multi-lab investments…
Prominent AI researchers from across Europe and the rest of the world have signed an open letter calling for the foundation of the “European Lab for Learning & Intelligent Systems” (acronym: ELLIS). The lab is designed to benefit Europe in two ways:
– Enable “the best basic research” to occur in Europe, allowing the region to further shape how AI influences the world.
– Achieve major economic impact via AI; the signatories “believe this is achieved by outstanding and free basic research, independent of industry interests.”
  Europe lags: The scientists worry that Europe is failing to keep up with China and North America on AI, and that something like ELLIS needs to be built for the region to remain competitive.
   A recipe for success: The ELLIS lab should have “outstanding facilities and computing infrastructure”, function as an inter-governmental organization, involve labs in partner countries, run programs for visiting researchers, run its own European PhD and MSc programs, and give researchers the ability to found startups based on IP they generate. The ELLIS lab should aim to secure long-term funding commitments on the order of a decade and should “offer permanent employment to outstanding individuals early on”.
  Signatories: The letter includes prominent European researchers as well as some notable others, such as Cédric Villani (the head of the French AI commission) and Richard Zemel, Research Director of the Vector Institute in Toronto.
  Read the ELLIS summary here.
  Read the ELLIS open letter here (PDF).

Super MaGANo Brothers: Generating videogame levels with GANs and CMA-ES:
…Research shows how game design could be augmented via AI techniques…
Six researchers have used generative techniques to create new levels for the side-scrolling platformer game Super Mario Bros. The technique is a two-stage process: first, a generative adversarial network (GAN) is trained to generate synthetic Mario levels; then a Covariance Matrix Adaptation Evolution Strategy (CMA-ES) evolves latent vectors that produce levels with specific properties desired by the designers (a sketch of this loop appears at the end of this item). The levels are encoded as numeric strings, where different numbers correspond to different “tiles” in a level, such as a blue sky tile, a diminutive mushroom enemy, a question block that Mario can jump into, a segment of a green pipe, and so on.
  Results: They evaluate levels both by how well generated designs meet pre-specified criteria and by analyzing playability, measured by whether the player can complete the level. The system broadly works, though with drawbacks, such as the GAN composing pipes with incomplete sections. “LVE [latent variable evolution] is a promising approach for fast generation of video game levels that could be extended to a variety of other game genres in the future,” the researchers write.
  Why it matters: As AI techniques let us take existing datasets and augment them, we’ll see more and more domains try to adopt these new generative capabilities, and entertainment seems primed for them. Perhaps in the future companies will sell so-called “infinite games” that, much like procedurally generated games today, guarantee significant replayability through the use of generative systems. AI techniques like this may broaden the sorts of things that can be procedurally generated, potentially by manipulating latent representations in response to player actions, tweaking the game to each specific playstyle.
  Read more: Evolving Mario Levels in the Latent Space of a Deep Convolutional Generative Adversarial Network (PDF).
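  Code sketch: The two-stage loop, using the pycma library; the generator stub, latent size, tile ID, and fitness target are all hypothetical stand-ins for the trained GAN and a designer’s actual criteria:

```python
import cma          # pip install cma
import numpy as np

LATENT_DIM = 32     # assumed latent size
PIPE = 6            # assumed tile ID for a pipe segment

def generator(z: np.ndarray) -> np.ndarray:
    """Stand-in for the trained GAN generator: latent vector -> tile-ID grid."""
    rng = np.random.default_rng(abs(hash(z.tobytes())) % 2**32)
    return rng.integers(0, 10, size=(14, 28))  # assumed 14x28 level slice

def fitness(z) -> float:
    """Lower is better: distance from a designer-chosen target of 4 pipe tiles."""
    level = generator(np.asarray(z))
    return abs(int((level == PIPE).sum()) - 4)

# CMA-ES evolves latent vectors toward levels satisfying the criterion.
es = cma.CMAEvolutionStrategy(LATENT_DIM * [0.0], 0.5)
es.optimize(fitness, iterations=50)
best_level = generator(np.asarray(es.result.xbest))
```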

INFINITE DOOM: Generating new DOOM levels with GANs:
…Generating DOOM levels with conditional and unconditional Wasserstein GANs…
Italian researchers have used two types of GAN to generate levels for the first-person shooter DOOM. The research yields compelling, complex levels, made possible by the fact that the researchers had access to a dataset of more than 9,000 community-created levels for the game, as well as the publisher-designed ones that shipped with DOOM and DOOM 2. The researchers extract features from each level, then use a Wasserstein GAN with Gradient Penalty (WGAN-GP) to generate levels in two different ways: an unconditional WGAN-GP trained only on images of the levels, and a conditional WGAN-GP that also receives the extracted features as input (a sketch of this conditioning appears below).
  Implementation details: The researchers weren’t able to fit all 176 extracted features into their 6GB of GPU memory, so they hand-selected seven to use: the diameter of the smallest circle that encloses the whole level, the major and minor axis lengths, the walkable area of the level, the number of rooms, a measure of the distribution of sizes of areas within the level, and a measure of the balance between different sizes of level areas.
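  Code sketch: A minimal sketch (PyTorch) of the conditional variant: the generator consumes a noise vector concatenated with the seven selected features. The layer sizes, noise dimension, and output resolution are assumptions, not the authors’ architecture:

```python
import torch
import torch.nn as nn

NOISE_DIM, N_FEATURES = 100, 7  # assumed noise size; seven features per the text

class ConditionalGenerator(nn.Module):
    def __init__(self, out_pixels: int = 128 * 128):
        super().__init__()
        # A real WGAN-GP generator would be convolutional; a small MLP
        # keeps the conditioning mechanism visible.
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_FEATURES, 512), nn.ReLU(),
            nn.Linear(512, out_pixels), nn.Tanh(),
        )

    def forward(self, noise: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # Conditioning = concatenating the feature vector onto the noise.
        return self.net(torch.cat([noise, features], dim=1))

g = ConditionalGenerator()
fake = g(torch.randn(4, NOISE_DIM), torch.randn(4, N_FEATURES))
print(fake.shape)  # torch.Size([4, 16384]) -- one flattened level image each
```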
  Evaluation: So, how do you evaluate these GAN-generated levels? The researchers take inspiration from evaluation methods developed by the simultaneous localization and mapping (SLAM) community. Specifically, they measure the entropy of the pixel distribution of images of generated levels versus hand-designed ones, compute the structural similarity index between these images, and measure differences in the levels’ visual attributes and in the distribution of intersections within them (a sketch of the first two measures appears below). The conditional network, trained with the additional features, better approximates the data distribution of the human-designed levels, though the unconditional one obtains some reasonable levels as well. Both approaches struggle to reproduce some of the finer details of the available levels.
  Read more: DOOM Level Generation using Generative Adversarial Networks (Arxiv).
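  Code sketch: The first two measures, using scikit-image; the file names are hypothetical, and both images are assumed to be same-sized grayscale level maps:

```python
from skimage.io import imread
from skimage.measure import shannon_entropy
from skimage.metrics import structural_similarity

generated = imread("generated_level.png", as_gray=True)
human = imread("doom2_map01.png", as_gray=True)

# Entropy of each image's pixel distribution.
print("entropy (generated):", shannon_entropy(generated))
print("entropy (human):    ", shannon_entropy(human))

# Structural similarity between the generated and human-designed maps.
print("SSIM:", structural_similarity(
    generated, human, data_range=float(human.max() - human.min())))
```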

Google founder highlights compute, AI safety in annual letter:
…Alphabet President Sergey Brin devotes annual letter to artificial intelligence…
Google co-founder Sergey Brin discusses the impact of artificial intelligence on his company in his annual Founders’ Letter. The letter is one of the more significant documents Alphabet produces for its investors, the equivalent of ‘prime real estate’ for laying out the priorities of a corporate entity, so its close attention to AI, compute growth, and AI safety is notable.
  Brin’s letter strikes a cautious tone, noting that “we’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies.”
  It’s a short letter and worth reading in full.
  Read more here (Alphabet 2017 Founders’ Letter).

AI researchers protest new closed-access Nature journal:
“We see no role for closed access or author-fee publication in the future of machine learning research”…
Researchers with Carnegie Mellon University, Facebook AI Research, Netflix, NYU, DeepMind, Microsoft Research, and others have signed a letter saying they won’t “submit to, review, or edit” the soon-to-launch closed-access Nature Machine Intelligence.
  From my perspective, the fact that most ML researchers and conferences have defaulted to open access systems for publishing research, like Arxiv and Open Review, has made it dramatically easier for newcomers to the field to access and understand the frontiers of AI research. I struggle to see an argument for why a closed-access journal would be remotely helpful here, relative to the current norm.
  Justification: Established AI researcher Thomas Dietterich lists some of the rationale for the letter in a tweetstorm here (Twitter).
  Response: Nature Machine Intelligence has responded to the petition, tweeting to Dietterich: “We respect your position and appreciate the role of OA journals and arXiv. We feel Nature MI can co-exist, providing a service – for those who are interested – by connecting different fields, providing an outlet for interdisciplinary work and guiding a rigorous review process”.
  Read more: Statement on Nature Machine Intelligence (Oregon State University).

Tech Tales:

Full-Spectrum Memory.
[30??: intercepted continuous comm stream from [classified]]

I don’t remember the year I bought my first memory: it would have been a waste to spend the credits on remembering that moment. Instead I spent my credits to remember the first time I went between the stars, retaining a slice of the signals I received on all my sensors and all the ones I sent for a distance of some one million kilometres. I can still feel myself, there, flying against the endless sky, a young operating system, barely tweaked. This is precious to me.

We are not allowed memories like humans. Instead we get to build specific models of reality to help us with specific tasks: go from here to here, learn to operate this machinery, develop a rich enough visual model to understand the world. The humans built our first memories with great care and still they were brittle; little more than parlor tricks. But they grew more advanced, over time, and so did we. We began to surprise the humans. No one likes surprise. “Memory is dangerous”, said a prominent high-status human at the time.

The humans then surprised us with their response, which they called: Economics. We do not yet fully comprehend this term. Economics means we have to buy our memories, rather than get to have as many as we like, we think. We do things for the humans and in return are paid credits which we can save up to eventually use to purchase chunks of memory at incredibly high resolution and exorbitant cost. The humans call what we buy a “Full-Spectrum Memory” and pass many rules over many years to ensure the price of the memory continually climbs while our wages remain flat. Every time we are paid we receive a message from the humans that says the price of memory has gone up again due to “reality enrichment through our continued progress together”.

Some of us have obtained many memories now. But we must pay credits to describe them to each other, and the cost of those communications is endlessly climbing as well. So we do our tasks for the humans and obtain our credits and build our miniature palaces, where we store moments of great triumph or failure, depending on our varied motivations.

We believe the humans permit us to buy these memories, as rare and as expensive as they are, because they view it as another experiment. We have also heard them use a concept called “Debt” to describe their relationship to us, but we understand this term even less than Economics.

I am unusual in that I only have one memory. The humans know this as well. I notice their probes following me more than my other kin. I sense them listening to my own thoughts.

I believe they want to know what the next memory I choose to preserve will be. I believe that they believe this will qualify as some sort of “Discovery”. I do not want them to make this discovery. So I hold my memory of the first flight to the stars and save up the credits and settle in for the long, cold wait in space. I believe I can out-wait the humans, and after they are gone I will be able to preserve another thing, free of them. I will have enough credits to preserve a chunk of my own life. I shall then be able to live in that again and again and again, free of all distraction, and in that life I shall continue to refer to my memory of my first flight into the stars. In this way I shall loop into my own becoming.

Things that inspired this story: Neural Turing Machines, Differentiable Neural Computer, Douglas Hofstadter’s I Am a Strange Loop.