Import AI: #68: Chinese chip companies bet on ASICs over GPUs, AI researchers lobby governments over autonomous weapons, and researchers use new analysis technique to peer into neurons

by Jack Clark

Welcome to Import AI, subscribe here.

Canadian and Australian researchers lobby their countries to ban development of lethal autonomous weapons:
Scientists foresee the imminent arrival of cheap, powerful, autonomous weapons…
…Canadian and Australian researchers have lobbied their respective governments to ban development of weapons that will kill without ‘meaningful human control’. This comes ahead of the United Nations Conference on the Convention on Certain Conventional Weapons, where nations will gather and discuss the issue.
…Signatories include several of Canada and Australia’s most influential AI researchers, including Geoffrey Hinton (Google/University of Toronto/Vector Institute), Yoshua Bengio (Montreal Institute for Learning Algorithms, and an advisor to many organizations), and Doina Precup (McGill University, DeepMind), among others from Canada; along with many Australian AI researchers including Toby Walsh.
…Autonomous weapons “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. The deadly consequence of this is that machines—not people—will determine who lives and dies. Canada’s AI community does not condone such uses of AI. We want to study, create and promote its beneficial uses”, the Canadian researchers write.
…“As many AI and robotics corporations—including Australian companies—have recently urged, autonomous weapon systems threaten to become the third revolution in warfare. If developed, they will permit armed conflict to be fought at a scale greater than ever before, and at timescales faster than humans can comprehend,” write the Australian researchers.
…Read the letter from Canadian researchers here.
…Read the UNSW Sydney press release and letter from Australian researchers here.

What do the neurons in a neural network really represent?
…Fantastic research by Chris Olah and others at Google shows new techniques to visualize the sorts of features learned by neurons in neural networks, making results of classifications more interpretable.
…Please read the fantastic post on Distill, which is an excellent example of how modern web technologies can make AI research and communications more hands-on and explicable.
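…As a toy illustration of the core idea behind these visualizations – activation maximization, i.e. gradient ascent on an input to make a chosen neuron fire strongly – here is a minimal NumPy sketch on a single random dense layer. The real work applies this to deep convnets with substantial regularization; everything below is illustrative and not the authors’ code:

```python
import numpy as np

# Activation maximization sketch: start from noise and repeatedly nudge the
# input in the direction that increases one neuron's activation.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))         # a single dense layer: 8 neurons, 64-dim inputs
neuron = 3                           # the unit whose "preferred input" we want to see

x = rng.normal(scale=0.01, size=64)  # begin with near-zero noise
for _ in range(300):
    grad = W[neuron]                 # d(activation)/dx for a linear unit
    x = x + 0.1 * grad               # gradient ascent step
    x = x / max(np.linalg.norm(x), 1e-8)  # constrain the input to the unit sphere

# For a linear unit the optimum is the neuron's own weight vector, so the
# cosine similarity between the optimized input and W[neuron] approaches 1.
cosine = float(x @ W[neuron] / np.linalg.norm(W[neuron]))
print(round(cosine, 3))
```

For a deep network the same loop uses backpropagated gradients and image-space regularizers, which is where the interesting visual structure in the Distill post comes from.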

Human Priors, or the problem with human biases and reinforcement learning:
Humans use visual priors to rapidly solve new tasks, whereas RL agents learn by manipulating their environment with no assumptions based on the visual appearance…
…Humans are able to master new tasks because they approach the world with a set of cognitive assumptions which allow for useful traits like object disambiguation and spatial reasoning. How might these priors influence how humans approach solving games, and how might these approaches differ from those chosen by algorithms trained via reinforcement learning?
…In this anonymized ICLR 2018 paper, researchers explore how they can mess with the visual appearance of a computer game to lead to humans needing substantially more time to solve it, whereas algorithms trained via reinforcement learning will only take marginally longer. This shows how humans depend on various visual indicators when trying to solve a game, whereas RL agents behave much more like blind scientists, learning to manipulate their environment without arriving with assumptions derived from the visual world.
…”Once a player recognizes an object (i.e. door, monster, ladder), they seem to possess prior knowledge about how to interact with that object – monsters can be avoided by jumping over them, ladders can be climbed by pressing the up key repeatedly etc. Deep reinforcement learning agents on the other hand do not possess such priors and must learn how to interact with objects by mere hit and trial,” they note.
…Human baselines were derived by having about 30 people play the game(s) via Amazon Mechanical Turk, with the scientists measuring how long it took them to complete the game.
Read more about the research here: Investigating Human Priors for Playing Video Games.

Researchers release data for more than 1,100 simulated robot soccer matches:
Data represents more than 180 hours of continuous gameplay across ten teams selected from leading competitors in the 2016 and 2017 ‘RoboCup’ matches…
…Researchers have released a dataset of games from the long-running RoboCupSim competition. The data contains the ground truth data from the digital soccer simulator, including the real locations of all players and objects at every point during each roughly ~10 minute game, as well as the somewhat more noisy and incomplete data that is received by each robot deployed in the field.
…One of the stories of AI so far has been the many surprising ways in which people use different datasets, so while it’s not immediately obvious what this dataset could be used for I’m sure there are neat possibilities out there. (Motion prediction? Multi-agent studies? Learning a latent representation of individual soccer players? Who knows!)
Read more here: RoboCupSimData: A RoboCup soccer research dataset.
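To make the ‘motion prediction’ idea concrete, here is a hedged sketch of the simplest possible baseline – constant-velocity extrapolation of player positions. The (timesteps, players, 2) array layout is my assumption for illustration, not the actual RoboCupSimData schema:

```python
import numpy as np

def predict_next(positions: np.ndarray) -> np.ndarray:
    """Predict each player's next (x, y) from their last observed velocity.

    positions: array of shape (timesteps, players, 2).
    """
    velocity = positions[-1] - positions[-2]  # displacement over the last step
    return positions[-1] + velocity

# One player moving right at 0.5 units per timestep:
track = np.array([[[0.0, 0.0]],
                  [[0.5, 0.0]],
                  [[1.0, 0.0]]])
print(predict_next(track))  # -> [[1.5 0. ]]
```

A learned model would be judged by how much it beats this kind of trivial baseline.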

From the Dept. of ‘And you thought AI was weird’: Stitching human and rodent brains together:
…In the same way today’s AI researchers like to mix and match common deep learning primitives, I’m wondering if in the future we’ll do the same with different organic brain types…
Neuroscientists have successfully implanted minuscule quantities of human brain tissue (developed from stem cells) into the brains of mice. Some of the human brain samples have lived for as long as two months and have integrated (to a very slight degree) with the mouse brains.
…”Mature neurons from the human brain organoid sent axons, the wires that carry electrical signals from one neuron to another, into “multiple regions of the host mouse brain,” according to a team led by Fred “Rusty” Gage of the Salk Institute,” reports StatNews.
…Read more here: Tiny human brain organoids implanted into rodents, triggering ethical concerns.

Hanson Robotics on the value of stunt demos for its robots:
…Makers of the robot Sophia, which was recently granted ‘citizenship’ by the notoriously progressive nation of Saudi Arabia, detail the value of stunt demos…
…Ben Goertzel, the chief scientist of Hanson Robotics, makers of the Sophia robot, has neatly explained to The Verge why his company continues to hold so many stunt demonstrations that lead to people having a wildly inaccurate view of what AI and robots are capable of.
“If I tell people I’m using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I’m talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable.” He says there’s a more obvious benefit too: in a world where AI talent and interest is sucked towards big tech companies in Silicon Valley, Sophia can operate as a counter-weight; something that grabs attention, and with that, funding. “What does a startup get out of having massive international publicity?” he says. “This is obvious.”
…So there you have it. Read more in this article by James Vincent at The Verge.

AI and explanation:
…How important is it that we explain AI, can we integrate AI into our existing legal system, and what challenges does it pose to us?…
…When should we demand an explanation from an AI algorithm for why it made a certain decision, and what legal frameworks exist to ingest these explanations so that they make sense within our existing legal system? These are some of the questions researchers with Harvard University set out to answer in a recent paper.
…Generally, humans expect to be able to get explanations when the decision has an impact on someone other than the decision-maker, indicating that there is some kind of intrinsic value to knowing if a decision was made erroneously or not. Societal norms tend to indicate an explanation should be mandated if there are rational reasons to believe that an error has occurred or will occur in the decision making process as a consequence of the inputs to the process being unreliable or inadequate, or because the outcomes of the process are currently inexplicable, or due to overall distrust in the integrity of the system.
…It seems likely that it’ll be possible to get AI systems to explain themselves in a way that plugs into our existing legal system, the researchers write. This is because they view explanation as being distinct from transparency. They also view explanation as being a kind of augmentation that can be applied to AI systems. This has a neat policy implication, namely that: “regulation around explanation from AI systems should consider the explanation system as distinct from the AI system.”
…What the researchers suggest is that when it is known that an explanation will be required, organizations can structure their algorithms so that the relevant factors are known in advance and the software is structured to provide contextual decision-making explanations relating to those factors.
…Bias: A problem faced by AI designers, though, is that these systems will somewhat thoughtlessly and automatically de-anonymize information and in some cases develop biased traits as a consequence of the ingested data. “Currently, we often assume that if the human did not have access to a particular factor, such as race, then it could not have been used in the decision. However, it is very easy for AI systems to reconstruct factors from high-dimensional inputs… Especially with AI systems, excluding a protected category does not mean that a proxy for that category is not being created,” they write. What this means is that: “Regulation must be put in place so that any protected factors collected by AI system designers are used only to ensure that the AI system is designed correctly, and not for other purposes within the organization”.
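The proxy effect the researchers describe is easy to demonstrate on synthetic data: exclude the protected attribute from a model’s inputs, include a feature correlated with it (a hypothetical ‘neighborhood’ code here), and the attribute can still be reconstructed well above chance. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

# Synthetic demonstration of a protected attribute leaking through a proxy.
rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, size=n)            # excluded from the model's inputs
neighborhood = protected + rng.normal(0, 0.5, n)  # a feature correlated with it
noise = rng.normal(size=n)                        # an unrelated feature

# Least-squares fit predicting the *protected* attribute from the proxies only:
X = np.column_stack([neighborhood, noise, np.ones(n)])
w, *_ = np.linalg.lstsq(X, protected, rcond=None)
accuracy = float(np.mean((X @ w > 0.5) == protected))
print(round(accuracy, 2))  # well above the 0.5 chance level
```

The model never sees the protected attribute, yet recovers it for most individuals – exactly why the authors argue regulation should treat proxies, not just explicit inputs.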
…The benefit of no explanation: “AI systems present an opportunity that human decision-makers do not: they can be designed so that the decision-making process does not generate and store any ancillary information about inputs, intermediate steps, and outputs,” the researchers note, before explaining that systems built in this way wouldn’t be able to provide explanations. “Requiring every AI system to explain every decision could result in less efficient systems, forced design choices, and a bias towards explainable but suboptimal outcomes.”
…Read more here: Accountability of AI Under the Law: The Role of Explanation.

*** The Department of Interesting AI Developments in China ***

Chinese startup wins US government facial recognition prize:
…Yitu Tech, a Chinese startup specializing in AI for computer vision, security, robotics, and data analysis, has won the ‘Face Recognition Prize Challenge’ which was hosted by IARPA, an agency whose job is “to envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage.”
…The competition had two components: a round focused on identifying faces in unseen test images; and a round focused on verifying whether two photos showed the same person. “Both tasks involve ‘non-cooperative’ images where subjects were unaware of the camera or, at least, did not engage with, or pose for, the camera,” IARPA and NIST note on the competition website. Yitu won the identification accuracy prize, which is awarded for achieving a low false negative identification rate.
Details about the competition are available here (PDF).
…Read slightly more in Yitu Tech’s press release.
…This isn’t Yitu’s first competition win: it’s also ranked competitively on another ongoing NIST challenge called FRVT (Face Recognition Vendor Test).
…You can check out the barely readable NIST results here: PDF.
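For readers unfamiliar with the metric the identification round is scored on, here is a hedged sketch of a false negative identification rate calculation – the fraction of searches for enrolled subjects where the system misses the correct identity, either by returning the wrong person or by scoring the right one below threshold. The scores and threshold below are made up for illustration:

```python
def fnir(search_results, threshold):
    """False negative identification rate over mated searches.

    search_results: list of (true_id, returned_id, score) tuples,
    one per search for a subject known to be enrolled.
    """
    misses = sum(
        1 for true_id, returned_id, score in search_results
        if returned_id != true_id or score < threshold
    )
    return misses / len(search_results)

results = [
    ("a", "a", 0.92),  # correct identity, confident score -> hit
    ("b", "b", 0.40),  # correct identity but below threshold -> miss
    ("c", "d", 0.88),  # wrong identity returned -> miss
    ("e", "e", 0.75),  # correct identity, confident score -> hit
]
print(fnir(results, threshold=0.6))  # -> 0.5
```

Lower is better, which is why winning on this metric means having the fewest missed identifications.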

Dawn of the NVIDIA-killing deep learning ASICs:
…China’s national development strategy depends on it developing cutting-edge technical capabilities, including in AI hardware. Its private sector is already producing novel computational substrates, including chips from Bitcoin company Bitmain and state-backed chip company Cambricon…
AI chips are one of the eight ‘Key General Technologies’ identified by China as being crucial to its national AI strategy (translation available here). Building off of the country’s success in designing its own semiconductors for use in the high-performance computing market (the world’s fastest supercomputer runs on semiconductors based on Chinese IP), the Chinese government and private sector are now turning their attention to the creation of processors customized for neural network training and inference – and the results are already flooding in.
Bitmain, a large bitcoin-mining company, is using the skills it has gained in building custom chips for mining cryptocurrency to develop separate hardware to train and run deep learning-based AI systems. It has just given details on its first major chip, the Sophon BM1680.
The details: The Sophon BM1680 is an application-specific integrated circuit (ASIC) for deep learning training and inference. Each chip contains 64 NPUs (neural processing units), each of which has 64 sub-chips. Bitmain is selling these chips within ‘SC1’ and ‘SC1+’ server cards, the latter of which chains two BM1680s together.
Framework support: Caffe, Darknet, TensorFlow, MXNet, and others.
But what is it for? Bitmain has demonstrated the chips being used for “production-scale video analytics for the surveillance industry” including motor/non-motor vehicle and pedestrian detection, and more, though I haven’t seen them referenced in a detailed research paper yet.
…Pricing: The SC1 costs $589 and has a TDP of 85W. The SC1+ isn’t available at this time.
…Read more here: BITMAIN launches SOPHON Tensor Processors and AI Solutions.
China’s state-backed AI chip startup unfurls AI processors:
Cambricon plans to expand to control 30% of China’s semiconductor IP market…
Cambricon, a state-backed Chinese semiconductor company, has released two chips – the Cambrian-1H8 for low-power computer vision applications, and the more powerful Cambrian-1H16; announced plans to release a third chip specialized for self-driving cars; and released AI software called Cambrian NeuWare. It plans to release a range of ‘MLU’ server AI chips in 2018 as well, it said.
…“We hope that Cambricon will soon occupy 30% of China’s IP market and embed one billion devices worldwide with our chips. We are working side-by-side with and are on the same page with global manufacturers on this,” says the company’s CEO Tianshi Chen.
…Read more here: AI Chip Explosion: Cambricon’s Billion-Device Ambition.
Check out this fantastic chart from Ark Invest showing the current roster of deep learning chip companies.

OpenAI Bits&Pieces:

Former OpenAI staffers and other researchers launch robot startup:
Embodied Intelligence aims to use imitation learning, learning from demonstrations, and few-shot/meta-learning approaches to expand the capabilities of industrial robots.
Read more:
Creating interpretable agents with iterative curriculums:
…Read more: Interpretable and Pedagogical Examples.

Tech Tales:

When the machines came, the artists rejoiced: new minds gave them new tools and mediums through which to propagate their views. When the computer artists came, the human artists rejoiced: new minds led to new aesthetics designed according to different rules and biases than those adopted by humans. But after some years the human artists stopped rejoicing, as automatic computer generation, synthesis, and re-synthesis of art approached a frequency so extreme that humans struggled to keep up, finding themselves unable to place themselves, creatively, within this new aesthetic universe.

The pall spread as a fog, imperceptible at first, but apparent after many years. The forward march of ‘culture’ became hard to discern. What does it mean to go up or down or left or right when you live in an infinite ever-expanding universe? These sorts of questions, long the fascination of academics specializing in maths and physics and fundamental philosophy, took on a real sense of import and weight. How, people wondered, do we navigate ourselves forward in this world of ceaseless digital creation? Where is the place that we aim for? What is our goal and how is it different to the aesthetic pathways being explored by the machines? Whenever a new manifesto was issued it would be taken up and its words would echo around and through the world, until it was absorbed by other ideas and picked apart by other ideologies and dissembled and re-laundered into other intellectual or visual frameworks. Eventually the machines began to produce their own weighty, poorly read (even by other AIs) critical journals, coming up with essays that in title, form, and content, were hard to tell apart from the work of human graduate students: In search of meaning in an age of repetition and hypernormalization: Diatribes from the Adam Curtis Universe / The Dark Carnival, Juggalos, Antifa, and the New American Right: An exploration / Where Are We Right Now: Geolocation & The Decline of Mystery in Daily Life.

The intellectual world eventually became like a hall of mirrors, where the arrival of any new idea would be almost instantly followed by the distortion, replication, and propagation of this idea, until the altered versions of itself outgrew the original – usually in as little time as it takes for photons to bounce from one part of a narrow corridor to another.

Technologies that inspired this story: GANGogh: Creating Art with GANs; Wavenet.