Import AI: Issue 26: Low-wages for robots, AI optometry, and RL agents that tell you what they’re thinking

by Jack Clark

Deep learning needs discipline: AI researchers need to do a better job of making their experiments comparable with one another by publishing more details about the underlying infrastructure and the specific hyperparameter recipes they use, says Google's Denny Britz. "The difficulty of building upon other's work is a major factor in determining what research is being done," he writes. "It's easiest, from an experimental perspective, to build upon one's own work… It also leads to less competition". Researchers can prevent groupthink and enhance replicability by publishing code to go along with their papers and by providing all the details needed to aid replication.
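The practice Britz argues for can be as simple as publishing one machine-readable record per run. A hedged sketch (all names here are illustrative, not a real API):

```python
import json
import random

# Hypothetical sketch of recording the details a reader needs to
# replicate a run: hyperparameters and the random seed, bundled into
# one JSON blob to publish alongside the paper and code.
def experiment_record(hyperparams, seed):
    random.seed(seed)  # fix randomness before any stochastic step
    record = {
        "hyperparams": hyperparams,
        "seed": seed,
        # A real setup would also capture framework and CUDA versions,
        # dataset hashes, and hardware details.
    }
    return json.dumps(record, indent=2, sort_keys=True)

blob = experiment_record({"lr": 0.001, "batch_size": 64}, seed=1234)
print(blob)
```
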

Self-driving cars save lives: Tesla cars with Autopilot installed have a 40% lower crash rate than those lacking the software, according to data the company shared with the National Highway Traffic Safety Administration. Finally, a figure that proves the residents of Duckietown are safer than your average rubber duck…
…but self-driving tech may also magnify our selfishness: today, downtown urban driving is frequently fouled up by people who stop their cars and hop into a cafe to grab a drink while their vehicle idles outside, and by the incorrigible optimists who endlessly circle a street waiting for a parking spot to open up. Roboticist Rodney Brooks suspects that when really smart autonomous cars arrive, people will tend towards even more of this selfish behavior, hopping out of their AV to get a latte and telling the car to hover nearby, or to autonomously circle for a parking spot. I can see a kind of intermediary future where urban traffic is more unpleasant due to hordes of dutiful vehicles unwittingly enabling their owners' selfishness.

AI creates endlessly replicating cultural artifacts: given enough data, neural networks can learn to generate anything. That points to a future where certain visual classes of object, ranging from comic book characters, to landscape shots, to others, will be partially generated and refined by AI. This blog about using recurrent neural networks to generate Egyptian-esque hieroglyphics is a nice example of that phenomenon in action.

Mutating AI programming languages: Facebook and others have released PyTorch. The AI programming framework implements a technique called 'reverse-mode automatic differentiation' to make it easier to modify neural networks created with it. "While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research," the project writes. It's open source, naturally.
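The core idea is simpler than the name suggests. This toy sketch is not PyTorch's implementation, just an illustration of the technique: record each operation during the forward pass, then walk the records backwards applying the chain rule.

```python
# Minimal sketch of reverse-mode automatic differentiation.
# Each Var remembers its parents and the local gradient of its value
# with respect to each parent; backward() propagates the chain rule.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then push gradients to parents.
        # (A real system topologically sorts the graph to visit each
        # node once; naive recursion is fine for a toy example.)
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x = Var(3.0)
y = Var(4.0)
z = x * y + x      # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
z.backward()
```

Because the graph is rebuilt on every forward pass, you can change the network's structure between iterations — the dynamism PyTorch's "define-by-run" approach offers.
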

Good morning, HAL, the AI cyclops-optometrist will see you now: Jeff Dean from Google likes to say that computers have recently 'begun to open their eyes'. That's in reference to the powerful image recognition algorithms we've developed in the past half decade. But what we're lacking for these computers is an optometrist – researchers don't have a good understanding of the characteristics of computer vision, and much of our research is driven as much by trial-and-error as by theory…
…Now, academics from the University of Toronto are trying to change this with a paper that analyzes the structure of the effective receptive field in neural networks. Their work finds interesting parallels between how receptive fields behave in convolutional neural networks versus in mammalian visual systems, and provides clues as to ways to increase the efficiency of future networks. Techniques like this, paired with ones like the spatially adaptive computation time paper, promise a future where our computers can see more efficiently, and we can work out how to tune them based on a more rigorous theoretical understanding of their unblinking ‘eyes’.
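The paper's central finding can be reproduced in a few lines. A hedged 1-D sketch (not the authors' code): for a stack of n layers with uniform 3-tap kernels, the gradient of the center output with respect to the input is the kernel convolved with itself n times. The theoretical receptive field grows linearly in n, but the gradient mass concentrates in a Gaussian-like core whose width grows only like the square root of n — the "effective" receptive field.

```python
import numpy as np

# Repeatedly convolve a delta with a uniform 3-tap kernel: the result
# is the gradient of the center output w.r.t. each input position for
# an n-layer stack of uniform convolutions.
def effective_receptive_field(n_layers, kernel=(1/3, 1/3, 1/3)):
    grad = np.array([1.0])
    for _ in range(n_layers):
        grad = np.convolve(grad, np.array(kernel))
    return grad  # length 2*n_layers + 1 for a 3-tap kernel

erf = effective_receptive_field(20)
theoretical_width = len(erf)  # 41 input positions are touched at all

# Count how few positions hold 95% of the gradient mass.
mass, used = 0.0, 0
for g in np.sort(erf)[::-1]:
    mass += g
    used += 1
    if mass >= 0.95:
        break
print(theoretical_width, used)  # the effective field is much narrower
```
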

AI and automation: "The technology is not the problem. The problem is a political system that doesn't ensure the benefits accrue to everyone," says Geoff Hinton. In potentially related news, regulator-flouting self-driving car startup comma ai wants to 'build the largest AI data collection machine in history'.

Cost per hour for a typical industrial robot, according to Kuka: 5 Euros
Cost per hour for worker to do similar job:
…Germany: 50 Euros
…China: 10 Euros
…"It took 50 years for the world to install the first million industrial robots. The next million will take only eight," reports Bloomberg.

Mysterious hippocampal signals: scientists have conducted a study of the firing of hippocampal place cells in mice. (Place cells tend to fire in response to the living entity being in a specific location, hence the name). The experiment suggests that place cells may encode some other type of information, along with geographical markers. Further analysis here will lead to more clues about how the brain represents information. We already know that London taxi drivers store a mental map of the city in the hippocampus (which appears to have an enlarged volume as a consequence) — perhaps the place cells could also function as a geographically-indexed store of bawdy jokes?

22nd Century Children’s Books: the Entertainment Intelligence Lab at Georgia Tech has trained a reinforcement learning agent to shout about its thoughts and plans as it plays classic game Frogger. “Looking forward to a hopping spot to jump to catch my breath,” it says. Good luck, Froggo!
…This kind of work could help solve the interpretability issues of AI, by making the thought processes of AI agents easier for people to diagnose and analyze…
… I can also imagine building a new form of children’s entertainment with this technology, where the characters are RL agents and they shout about their goals and ideas as they proceed through dynamically generated worlds.
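The interface idea — every action arrives paired with a human-readable reason — can be illustrated in miniature. This is purely a toy sketch, not the Georgia Tech system, which learns to generate rationales with a neural model rather than templates:

```python
# Toy agent on a 1-D "road": it greedily picks the safe move closest
# to the goal, and emits a templated explanation of its choice.
def choose_with_rationale(position, goal, hazards):
    moves = {"left": position - 1, "stay": position, "right": position + 1}
    # Discard moves that land on a hazard, then go greedily toward goal.
    safe = {m: p for m, p in moves.items() if p not in hazards}
    action = min(safe, key=lambda m: abs(safe[m] - goal))
    rationale = (f"I chose to {action}: it keeps me clear of squares "
                 f"{sorted(hazards)} while moving me toward square {goal}.")
    return action, rationale

action, why = choose_with_rationale(position=2, goal=5, hazards={3})
print(why)
```

Here the agent wants square 5 but square 3 is hazardous, so "right" is off the table and it explains why it stays put — a crude version of Frogger announcing that it's waiting for a safe hopping spot.
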

Megacorps as powerful as countries: “I was recently together with the Prime Minister of quite an important country who told me there are three or four powers left in the world. One is US, one is China, and the other is Alphabet,” Klaus Schwab told Alphabet co-founder Sergey Brin, during a conversation in Davos. (Because you can’t be right all the time bonus: Brin said he mostly ignored AI at Google in the early days, only later realizing its huge importance.)

AI system gets FDA approval: Arterys has gained clearance from the US Food and Drug Administration to market Cardio DL, software that uses AI to automatically segment images taken from cardiac MRIs. Another reminder that AI technology moves very rapidly from research into production.

After the apocalypse, the data centers shall continue: this fluffy PR video from Amazon Web Services reminds me of the tremendous investments that Amazon, Google, Microsoft, Facebook, and others have made into renewable energy infrastructure; from AWS's fleets of solar panels, to Google's stake in the Ivanpah solar power facility, to Facebook's air-cooled arctic circle enclave, a new baroque landscape is taking shape, in service of the neo-feudal empires of the digital world…
…And should some calamity strike, we can imagine that the computers in these football field-sized computer cathedrals will be the last to turn off. However, the inefficient, closed-circuit environments of legacy data centers will probably be the last to house human life, as depicted in this short story by Cory Doctorow called ‘When Sysadmins Ruled the Earth’. Quick, befriend a sysadmin at a non-tech company!

OpenAI bits&pieces:

OpenAI’s Tom Brown will be speaking at the AI By the Bay conference in San Francisco in March. Readers can get 20% off tickets for the conference by heading over to this link and using the promo code ‘OPENAI20’.

Tech Tales:

[2035, Moonbase Alpha, the Moon]

Two astronauts sit in front of a 6-foot-wide, 3-foot-tall screen. The main lights are out, and their faces are lit by the red strobing of the emergency system.

“How long has it been like this?” says one of the astronauts.
“About two hours,” says the other. “The executable came in through the comm relay. They encoded it in transmission intervals on some of the automated logistics channels. Which means-”
“-which means that they’d already bugged the software when it was installed, so it could receive the payload.”
“Yup.”
Both astronauts lean back and stare at the screen. One of them places their hand across their face and squints through their fingers at the images rolling across the monitor.

ERROR. DEATH INEVITABLE! scrolls across the screen. The text blinks out, replaced by a fuzzy image of an astronaut wearing a priest’s ID patch and no helmet standing in an airlock. The screen shimmers and, next to the priest, appears a teenage girl, also lacking a helmet. Now a helmet materializes in the air, hovering between them. Green circles flash over their faces, flickering as the AI tries to pick who to save. The text appears again: ERROR. DEATH INEVITABLE!

“We’ve gotta burn it,” says one of the astronauts. “Go full analogue and rebuild the base from the ground up.”
“But that’ll take weeks!”
“We don’t have a choice. The longer we wait the worse the damage is going to be. It’s already started shunting oxygen into different airlocks. Next, it might start opening some of the doors.”

Class note: These kinds of ‘trolley problem’ viruses proliferated during the late 2020s and early 2030s, before the UN mandated AI systems be installed with their own moral heuristics, codename: ETHICS WARDENS.