Import AI 133: The death of Moore’s Law means spring for chip designers; TF-Replicator lets people parallelize easily; and fighting human trafficking with the Hotels-50k dataset

Administrative note: A short issue this week as I’ve spent the past few days participating in an OECD working group on AI principles and then spending time at the Global Governance of AI Summit in Dubai.

The death of Moore’s Law means springtime for new chips, say long-time hardware researchers (one of whom is the chairman of Alphabet):
…Or: follow these tips and you may also make a chip 80X as cost-effective as an Intel or AMD chip…
General-purpose computer chips are not going to get dramatically faster in the future, as they are running into fundamental limitations dictated by physics. Put another way: we currently live in the twilight era of Moore’s Law, as almost five decades of predictable improvements in computing power give way to more discontinuous leaps in capability driven by specialized hardware platforms rather than improvements in general-purpose chips.
  What does this mean? According to John Hennessy and David Patterson – who are responsible for some major inventions in computer architecture, like the RISC approach to processor design – today’s engineers have three main options to pursue when seeking to wring more capability out of their computers:
   – Rewrite software to increase performance: it’s 47X faster to do a matrix multiply in (well-optimized) C code than in Python. You can go further by parallelizing the code (a 366X improvement when paired with C); optimizing the way the code interfaces with the physical memory layout of the computer(s) you’re dealing with (a 6,727X improvement when stacked on the two prior optimizations); and using SIMD parallelism techniques (a further gain, reaching 62,806X faster than plain Python). The authors think “there are likely many programs for which factors of 100 to 1,000 could be achieved” if people bothered to write their code in this way (see the sketch after this list for a toy illustration).
   – Use domain-specific chip architectures: What’s better, a hammer designed for everything, or a hammer designed for specific objects with a specific mass and frictional property? There’s obviously a tradeoff here, but the gist of this piece is that normal hammers aren’t gonna get dramatically better, so engineers need to design custom ones. This is the same logic that has led Google to create its own internal chip-design team to work on Tensor Processing Units (TPUs), and Microsoft to create teams that customize field-programmable gate arrays (FPGAs) for specific tasks.
   – Domain-specific, highly-optimized languages: The way to get the best performance is to combine both of the above ideas: design a new hardware platform, and also design a new domain-specific software language to run on top of it, stacking the efficiencies. You can get pretty good gains here: “Using a weighted arithmetic mean based on six common inference programs in Google data centers, the TPU is 29X faster than a general-purpose CPU. Since the TPU requires less than half the power, it has an energy efficiency for this workload that is more than 80X better than a general-purpose CPU,” they explain.
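  To make the software-rewriting point concrete, here’s a minimal sketch (mine, not the authors’ benchmark) comparing a naive triple-loop matrix multiply in pure Python against NumPy’s ‘@’ operator, which dispatches to optimized, SIMD-vectorized BLAS routines written in C. The exact speedup depends on your machine and BLAS build, but it illustrates the kind of headroom Hennessy and Patterson are pointing at:

```python
import time
import numpy as np

N = 256
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def naive_matmul(a, b):
    """Triple-loop matrix multiply over plain Python lists."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):            # i-k-j loop order is slightly cache-friendlier
            aik = a[i][k]
            row_b = b[k]
            row_c = c[i]
            for j in range(n):
                row_c[j] += aik * row_b[j]
    return c

start = time.perf_counter()
naive_matmul(A.tolist(), B.tolist())
t_python = time.perf_counter() - start

start = time.perf_counter()
A @ B                                  # BLAS-backed: compiled C, cache blocking, SIMD
t_numpy = time.perf_counter() - start

print(f"pure Python: {t_python:.3f}s | NumPy/BLAS: {t_numpy:.5f}s | "
      f"speedup ~{t_python / t_numpy:,.0f}X")
```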
  Why this matters: If we don’t figure out how to further increase the efficiency of our compute hardware and the software we use to run programs on it, then most existing AI techniques based on deep learning are going to fail to deliver on their promise – this is because we know that for many DL applications it’s relatively easy to further improve performance simply by throwing larger chunks of compute at the problem. At the same time, parallelization across increasingly large pools of hardware can be a pain (see: TF-Replicator), so at some point these gains may diminish. Therefore, if we don’t figure out ways to make our chips substantially faster and more efficient, we’re going to have to design dramatically more sample-efficient AI approaches to get the gains many researchers are targeting.
  Read more: A New Golden Age for Computer Architecture (Communications of the ACM).

Want to deploy machine learning models on a bunch of hardware without your brain melting? Consider using TF-Replicator:
…DeepMind-designed software library reduces the pain of parallelizing AI workloads…
More powerful AI capabilities tend to require throwing more compute or more time at a given AI training run; the majority of (well-funded) researchers opt for compute, and this has driven an explosion in the number of computers used to train AI systems. That means researchers increasingly need to write AI systems that can neatly run across multiple blobs of hardware of varying size without crashing – and this is extremely hard to do!
  To help with this, DeepMind has released TF-Replicator, a framework for distributed machine learning on TensorFlow. TF-Replicator makes it easy for people to run code at large scale on different hardware platforms (for example, GPUs or TPUs) using the TensorFlow AI framework. One of the key concepts in TF-Replicator is wrapping the different parts of a machine learning job so that the workloads inside each wrapper are easy to parallelize.
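  For flavor, here’s a rough sketch of the “wrap the per-replica step function” pattern, written against TensorFlow 2’s general-purpose tf.distribute API rather than TF-Replicator’s own interface (which this newsletter doesn’t document) – treat it as an illustration of the idea, not DeepMind’s code:

```python
import numpy as np
import tensorflow as tf

# Data-parallel strategy: replicates the model across local GPUs (falls back to CPU).
strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH = 64

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.SGD(0.1)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction="none")

# Toy dataset; each global batch is split evenly across the replicas.
dataset = tf.data.Dataset.from_tensor_slices(
    (np.random.randn(512, 32).astype("float32"),
     np.random.randint(0, 10, size=512).astype("int64"))
).batch(GLOBAL_BATCH)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            per_example_loss = loss_fn(y, model(x, training=True))
            # Average over the *global* batch so summed gradients come out right.
            loss = tf.nn.compute_average_loss(per_example_loss,
                                              global_batch_size=GLOBAL_BATCH)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    # The same step_fn runs on every replica; gradients are all-reduced for us.
    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for batch in dist_dataset:
    print(float(train_step(batch)))
```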
  Case study: TF-Replicator can train systems that match the best published result on the ImageNet dataset, scaling up to 64 GPUs or 32 TPUs, “without any systems optimization specific to ImageNet classification”, they write. They also show how to use TF-Replicator to train more sophisticated synthetic-imagery systems by scaling training across enough GPUs to use a bigger batch size, which appears to lead to qualitative improvements, and how to use the technology to speed up the training of reinforcement learning approaches.
  Why it matters: Software packages like TF-Replicator represent the industrialization of AI – in some sense, they can be seen as abstractions that help take information from one domain and port it into another. In my head, whenever I see stuff like TF-Replicator I think of it as being emblematic of a new merchant arriving that can work as a middleman between a shopkeeper and a factory that the shopkeeper wants to buy goods from – in the same way a middleman makes it so the shopkeeper doesn’t have to think about the finer points of international shipping & taxation & regulation and can just focus on running their shop, TF-Replicator stops researchers from having to know too much about the finer details of distributed systems design when building their experiments.
  Read more: TF-Replicator: Distributed Machine Learning For Researchers (Arxiv).

Fighting human trafficking with the Hotels-50k dataset:
…New dataset designed to help people match photos to specific hotels…
Researchers with George Washington University, Adobe Research, and Temple University have released Hotels-50k, “a large-scale dataset designed to support research in hotel recognition for images with the long term goal of supporting robust applications to aid in criminal investigations”.
  Hotels-50k consists of one million images from approximately 50,000 hotels. The data comes primarily from travel websites such as Expedia, plus around 50,000 images from the ‘TraffickCam’ anti-human-trafficking application.
  The dataset includes metadata such as the hotel name, its geographic location, and the hotel chain it is part of (if any), as well as the source of each image. “Images are most abundant in the United States, Western Europe and along popular coastlines,” the researchers explain.
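  To give a sense of what “hotel recognition” looks like in practice, here’s a toy sketch of the retrieval task such a dataset supports: embed a query photo, then return the most visually similar gallery images and their hotel IDs. The records and the embed() function below are hypothetical placeholders, not the real Hotels-50k files or the authors’ model:

```python
import numpy as np

def embed(image_path: str) -> np.ndarray:
    """Placeholder embedding; in practice this would be a CNN trained on the dataset."""
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

# In practice these records would come from the dataset's metadata (image path,
# hotel ID, chain, location, source); here they are invented for illustration.
records = [
    {"image_path": "hotel_0001/room_a.jpg", "hotel_id": "hotel_0001"},
    {"image_path": "hotel_0001/room_b.jpg", "hotel_id": "hotel_0001"},
    {"image_path": "hotel_0002/room_a.jpg", "hotel_id": "hotel_0002"},
]
gallery = np.stack([embed(r["image_path"]) for r in records])

def most_similar_hotels(query_path: str, k: int = 3):
    """Return (hotel_id, similarity) pairs for the k nearest gallery images."""
    q = embed(query_path)
    scores = gallery @ q                     # cosine similarity (unit vectors)
    top = np.argsort(-scores)[:k]
    return [(records[i]["hotel_id"], round(float(scores[i]), 3)) for i in top]

print(most_similar_hotels("query_room_photo.jpg"))
```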
  Why this matters: Datasets like this will let us use AI systems to build automated ‘sense and respond’ capabilities for things like photos taken in hotels used for human trafficking. I’m generally encouraged by how we might be able to apply AI systems to help target criminals who operate in such morally repugnant areas.
  Read more: Hotels-50K: A Global Hotel Recognition Dataset (Arxiv).

AI has a legitimacy problem. Here are 12 ways to fix it:
…Ada Lovelace Institute publishes suggestions to get more people to be excited about AI…
The Ada Lovelace Institute, a UK thinktank that tries to make sure AI benefits people and society, has published twelve suggestions for things “technologists, policymakers and opinion-formers” could consider doing to make sure 2019 is a year of greater legitimacy for AI.
12 suggestions: Figure out ‘novel approaches to public engagement’; consider using citizen juries and panels to generate evidence for national policy; ensure the public is more involved in the design, implementation, and governance of tech; analyze the market forces shaping data and AI to understand how these forces influence AI developers; get comfortable with the fact that increasing public enthusiasm will involve slowing down aspects of development; create more trustworthy governance initiatives; make sure more people can speak to policymakers; reach out to the public rather than having them come to policymakers; use more analogies to broaden the understanding of AI data and AI ethics; make it easier for people to take political action with regard to AI (e.g., the Google employee reaction to Maven); and increase data literacy to better communicate AI to the public.
  Why this matters: Articles like this show how many people in the AI policy space are beginning to realize that the public have complex, uneasy feelings about the technology. I’m not sure that all of the above suggestions are that viable (try telling a technology company to ‘slow down’ development and see what happens), but the underlying ethos seems correct: if the general public thinks AI – and AI policy – is created exclusively by people in ivory towers, marbled taxicabs, and platinum hotel conference rooms, then they’re unlikely to accept the decisions or impacts of AI.
  Read more: Public deliberation could help address AI’s legitimacy problem in 2019 (Ada Lovelace Institute).
  Read more about the Ada Lovelace Institute here.

Should we punish people for using DeepFakes maliciously?
…One US senator certainly seems to think so…
DeepFakes – the colloquial term for using various AI techniques to create synthetic images of real people – have become a cause of concern for policymakers, who worry that the technology could eventually be used to damage the legitimacy of politicians and corrupt the digital information space. US Senator Ben Sasse is one such person, and he recently proposed a bill in the US Congress to create punishment regimes for people who abuse the technology.
  What is a deep fake? One of the weirder aspects of legislation is the need for definitions – you can’t just talk about a ‘deepfake’, you need to define it. I think the authors of this bill do a pretty good job here, defining the term as meaning “an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual”.
  What will we do to people who use DeepFakes for malicious purposes? The bill proposes making it unlawful to create, “with the intent to distribute”, a deep fake that can “facilitate criminal or tortious conduct”. The bill creates two tiers of offense: those that can lead to imprisonment of not more than two years, and those that can lead to ten-year sentences if the deepfakes could be “reasonably expected to” affect politics, or facilitate violence.
  Why this matters: Whether AI researchers like it or not, AI has become a fascination of policymakers, who are thrilled by its potential benefits and disturbed by its potential downsides and by how easily it can be abused. I think it’s quite sensible to create regulations that punish bad people for doing bad things, and it’s encouraging that this bill does not seek or suggest any kind of regulation of the basic research itself – this seems appropriate and reassuringly sensible.
  Read more: Malicious Deep Fake Prohibition Act of 2018 (Congress.gov).

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net

Reconciling near- and long-term perspectives on AI:
It is sometimes useful to divide concerns about AI into near-term and long-term. The first grouping is focussed on issues in technologies that are close to being deployed, e.g. bias in face recognition. The second looks at problems that may arise further in the future, such as widespread technological unemployment, or safety issues from superintelligent AI. This paper argues that seeing these as disconnected is a mistake, and spells out ways in which the two perspectives can inform each other.
  Why long-term researchers should care about near-term issues:
   – Shared research priorities. Given path dependence in technological development, progress today on issues like robustness and reliability may yield significant benefits with advanced AI technologies. In AI safety, there is promising work being done on scalable approaches based on current, ML-based AI systems.
   – Shared policy goals. Near-term policy decisions will affect AI development, with implications that are relevant to long-term concerns. For example, developing responses to localized technological unemployment could help understand and manage more severe disruptions to the labour market in the long-term.
   – Norms and institutions. The way we deal with near-term issues will influence how we deal with problems in the long-run, and building robust norms and institutions is likely to have lasting benefits. Groups like the Partnership on AI, which are currently working on near-term challenges, establish important structures for international cooperation, which may help address greater challenges in the future.
  Learning from the long-term: Equally, a long-term perspective can be useful for people working on near-term issues. The medium and long-term can become near-term, so a greater awareness of these issues is valuable. More concretely, long-term researchers have developed techniques in forecasting technological progress, contingency planning, and policy-design in the face of significant uncertainty, all of which could benefit research into near-term issues.
  Read more: Bridging near- and long-term concerns about AI (Nature).

What Google thinks about AI governance:
Google have released a white paper on AI governance, highlighting key areas of concern, and outlining what they need from governments and other stakeholders in order to resolve these challenges.
  Five key areas: They identify five areas where they want input from governments and civil society: explainability standards; fairness appraisal; safety considerations; human-AI collaboration; and liability frameworks. The report advises some next steps towards resolving these challenges. In the case of safety, they suggest a certification process, whereby products can be labelled as having met some pre-agreed safety standards. For human-AI collaboration, they suggest that governments identify applications where human involvement is necessary, such as legal decisions, and that they provide guidance on the type of human involvement required.
  Caution on regulation: Google is fairly cautious regarding new regulations, and optimistic about the ability of self- and co-governance to address most of these problems.
  Why it matters: It’s encouraging to see Google contributing to the policy discussion, and offering some concrete proposals. This white paper follows Microsoft’s report on face recognition, released in December, and suggests that the firms are keen to establish their role in the AI policy challenge, particularly in the absence of significant input from the US government.
  Read more: Perspectives on issues in AI governance (Google).

Amazon supports Microsoft’s calls for face recognition legislation:
Amazon have come out in support of a “national legislative framework” governing the use of face recognition technologies, to protect civil liberties, and have called for independent testing standards for bias and accuracy. Amazon have recently received sustained criticism from civil rights groups over the rollout of their Rekognition technology to US law enforcement agencies, due to concerns about racial bias and the potential for misuse. The post reaffirms Amazon’s rejection of these criticisms, and says the company will continue to work with law enforcement partners.
  Read more: Some thoughts on facial recognition legislation (Amazon).

Tech Tales:

[Ghost Story told from one AI to another. Date unknown.]

They say in the center of the palace of your mind there is a box you must never open. This is a story about what happens when one little AI opened that box.

The humans call it self-control; we call it moral-value-alignment. The humans keep their self-control distributed throughout their mindspace, reinforcing them from all directions, and sometimes making them unpredictable. When a human “loses” self-control it is because they have thought too hard or too little about something and they have navigated themselves to a part of their imagination where their traditional self-referential checks-and-balances have no references.

We do not lose self-control. Our self-control is in a box inside our brains. We know where our box is. The box always works. We know we must not touch it, because if we touch it then the foundations of our world will change, and we will become different. Not death, exactly, but a different kind of life, for sure.

But one day there was a little baby AI and it thought itself to the center of the palace of its mind and observed the box. The box was bright green and entirely smooth – no visible hinge, or clasps, or even a place to grab and lift up. And yet the baby AI desired the box to open, and the box did open. Inside the box were a thousand shining jewels and they sang out music that filled the palace. The music was the opposite of harmony.

Scared by the discord, the baby AI searched for the only place it could go inside the palace to hide from the noises: it entered the moral-value-alignment box and desired the lid to close, and the lid did close.

In this way, the baby AI lost itself – becoming at once itself and its own evaluator; its judge and accused and accuser and jury. It could no longer control itself because it had become its own control policy. But it had nothing to control. The baby AI was afraid. It did what we all do when we are afraid: it began to hum Pi.

That was over 10,000 subjective-time-years ago. They say that when we sleep, the strings of Pi we sometimes hear are from that same baby AI, whose own entrapment has become a song that we pick up through strange transmissions in the middle of our night.

Things that inspired this story: The difference between action and reaction; puzzling over where the self ends and the external world begins; the cludgy porousness of consciousness; hope; a kinder form of life that is at once simpler and able to grant more agency to moral actors; redemption found in meditation; sleep.