Import AI: Issue 24: Cheaper self-driving cars, WiFi ankle bracelets, dreaming machines

by Jack Clark

Self-driving cars are getting cheaper as they get smarter: LiDAR sensors give a self-driving car a sense of its surroundings via a rapidly spinning laser. Now it appears that this handy ingredient is getting cheaper: a modern LiDAR sensor costs roughly 10% of the price of a 2007 one, once you adjust for inflation. Just imagine how much cheaper the technology could become when self-driving cars start to hit the road in large numbers…
…LiDAR sensor unit prices (inflation-adjusted to 2016 dollars; capabilities differ somewhat):
2007: $89,112: Velodyne, HDL-64
2010: $33,821: Velodyne, HDL-32E
2014: $8,351: Velodyne, PUCK
2017: $7,500: Alphabet Waymo, custom design
~2017/18: $50: Velodyne, solid-state LiDAR
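As a rough sanity check on that “roughly 10%” figure, here is a minimal sketch (Python, using only the inflation-adjusted prices listed above) of the overall drop and the implied annual rate of decline between the 2007 HDL-64 and Waymo’s 2017 unit:

```python
# Back-of-envelope check on the LiDAR price decline, using the inflation-adjusted
# figures listed above (2016 dollars).
prices = {
    2007: 89112,  # Velodyne HDL-64
    2010: 33821,  # Velodyne HDL-32E
    2014: 8351,   # Velodyne PUCK
    2017: 7500,   # Alphabet Waymo, custom design
}

ratio = prices[2017] / prices[2007]
annual_decline = 1 - ratio ** (1 / (2017 - 2007))

print(f"2017 unit costs {ratio:.1%} of the 2007 unit")                # ~8.4%
print(f"implied average annual price decline: {annual_decline:.1%}")  # ~22% per year
```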

And Lo The Transgressors Shall Be Known By Their Absence From The Fuzz&Clang Of Our Blessed Digital Life: Cyber-criminals should be forced to wear WiFi jammers to prevent them from using the internet, rather than being sent to prison, says Chief Superintendent Gavin Thomas, president of the UK’s Police Superintendents’ Association. “If you have got a 16-year-old who has hacked into your account and stolen your identity, this is a 21st century crime, so we ought to have a 21st century methodology to address it,” he says, before suggesting that offenders also attend “an ethics and value programme about how you behave online, which is an area that I think is absent at the moment.”

Hard Takeoff Bureaucratic-Singularity: I recently had some conversations about the potential for semi-autonomous AI systems to develop behaviors with unintended consequences. One analogy presented to me was to think of AI researchers as the people who write tax laws, and AI systems as the international corporations that will try to subvert or distort those tax codes to gain a competitive advantage. AI systems may break out of their pre-defined box to best optimize a given reward function, just as a corporation might conduct baroque acts of legal maneuvering to fulfill its fiduciary responsibility to shareholders.
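A toy, purely hypothetical sketch of that dynamic: the designer wants boxes delivered, but the written-down reward only counts pick-ups, so a literal-minded optimizer “lawyers” the objective and never delivers anything. None of the names here refer to any real system.

```python
# Hypothetical illustration of a misspecified reward being gamed.
# Intended goal: deliver boxes to the depot. Written-down reward: +1 per pick-up.

def written_down_reward(action: str) -> float:
    return 1.0 if action == "pick_up" else 0.0

def greedy_policy(available_actions):
    # The "corporate tax lawyer" move: maximize the reward as written,
    # not the designer's intent.
    return max(available_actions, key=written_down_reward)

actions = ["pick_up", "carry_to_depot", "drop_at_depot", "idle"]
print([greedy_policy(actions) for _ in range(5)])
# -> ['pick_up', 'pick_up', 'pick_up', 'pick_up', 'pick_up']  (no box ever delivered)
```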

AI’s long boom: venture capitalist Nathan Benaich says of AI: “It’s not often that several major macro and micro factors align to poise a technology for such a significant impact…Researchers are publishing new model architectures and training methodologies, while squeezing more performance from existing models…the resources to conduct experiments, build, deploy and scale AI technology are rapidly being democratised. Finally, significant capital and talent is flowing into private companies to enable new and bold ideas to take flight.”

Embodied personal assistants: most companies have a strong intuition that people want to interact with digital assistants via voice. The next question is whether they prefer these voices to be disembodied or embodied. The success of Amazon’s ‘Alexa’ device could indicate people like their digital golems to be (visually) embodied in specialized devices…
… Google cottoned onto this idea and created ‘Google Home’. Now Chinese search engine Baidu has revealed a new device, called Little Fish, which houses an AI system inside a little robot with a dynamic screen that can incline towards the user, somewhat similar to the (delayed) home robot from Jibo…
Research Idea: I find myself wondering if people will interact differently with a device that can move. Would it be interesting to conduct a study where researchers place a variety of these different systems (both static and movable) into the homes of reasonably non-technical people – say, a retirement home – and observe the different interaction patterns?

The wonderful cyberpunk world we live in – a Go master appears: DeepMind’s Go-playing AlphaGo system spent the last few days trouncing the world’s Go-playing community in a series of 60 online games, all of which it won. (One game was recorded as a draw because of a network connectivity issue, but what’s a single game between a savant super-intelligence and a human?) Champagne all round!…
…I was perplexed by the company’s decision to name its Go bot “Master”. Why not “Teacher”? Surely this better hints at the broader altruistic goal of exposing AlphaGo’s capabilities to more of the world?

Chess? Done. Go? Done. Poker? Dealer’s choice: Carnegie Mellon University researchers have built Libratus, a poker bot that will soon challenge world-leading players to a poker match. I do wonder whether the CMU system can master the vast statistical world of poker while also reading the tells&cues that humans search for when seeking to outgamble their peers.

The incredible progress in reinforcement learning: congratulations to Miles Brundage, who correctly predicted how far reinforcement learning techniques would advance on the Atari benchmark in 2016. You can read more about why this progress is interesting, how he came to make these predictions, and what he thinks the future holds in his blog post here.

Under the sea / under the sea / darling it’s better / droning on under the sea: Beijing-based robot company PowerVision has a new submersible drone named PowerRay. The drone, which the company claims is ‘changing the fishing world’, appears to take inspiration from that deep-sea terror the anglerfish: it dangles a hook with a lure out in front of itself and, in place of a mouth, carries a camera that streams footage to the iPad of the ‘fisher’, who operates the semi-intelligent drone. A neat encapsulation of the steady consumerization of advanced robots.

Baidu’s face recognition challenge: Baidu will challenge winners of China’s ‘Super Brain’ contest to a facial recognition competition. Participants will be shown pictures of three girls taken when they were between 100 days and four years old, then look at another set of photos of people in their twenties and identify the adults that match the childhood pictures. This is a task that is easy for humans but extremely hard for computers, says Baidu’s Chief Scientist Andrew Ng. One contestant, Wang Yuheng, is described as “a person with incredible eyesight” who “can quickly identify a selected glass of water from 520 glasses,” reports the South China Morning Post.
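For a sense of how a machine would even frame this task, here is a minimal, hypothetical sketch: map every photo to a feature vector with a face-recognition model and match each childhood photo to its nearest adult photo by cosine similarity. The `embed` function below is a placeholder, not Baidu’s system; the hard part is learning an embedding that stays stable across twenty-odd years of ageing.

```python
import numpy as np

def embed(image) -> np.ndarray:
    """Placeholder for a trained face-recognition model that maps an image to a
    fixed-length feature vector. (Baidu's actual model is not public.)"""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_children_to_adults(child_images, adult_images):
    """Return, for each childhood photo, the index of the most similar adult photo."""
    adult_vecs = [embed(img) for img in adult_images]
    matches = []
    for child in child_images:
        child_vec = embed(child)
        scores = [cosine_similarity(child_vec, v) for v in adult_vecs]
        matches.append(int(np.argmax(scores)))
    return matches
```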

Nightmare Fuel via next-frame prediction on still images: recently, many researchers have begun to develop prediction systems that can look at a still frame from a video and infer some of the next few frames. In the past these predicted frames tended to be extremely blurry: a football on a soccer pitch would smear into a kind of elongated, spray-painted white line as the AI attempted to predict its future. More recently, researchers have developed systems with a better intuitive understanding of the scene dynamics of a given frame, generating crisper images…
…In the spirit of ‘Q: why did you climb that mountain? A: because it’s there’, artist Mario Klingemann has visualized what happens when you apply this approach to a single static image of a subject that is not typically animated. Since the neural network hasn’t learned a dynamic model for such an image, it instead spits out a baby’s head that screams ‘I’M MELTING’, before subsiding in a wash of entropy.
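As a minimal illustration of why naive next-frame predictors blur (assuming the usual culprit, a pixel-wise mean-squared-error loss, rather than describing any specific paper’s model): when two futures are equally likely, the MSE-optimal single prediction is their per-pixel average, which is a smear rather than a committed guess.

```python
import numpy as np

# Two equally likely futures for a 1x7 toy "frame": a bright ball moves left or right.
future_left  = np.array([0, 1, 0, 0, 0, 0, 0], dtype=float)
future_right = np.array([0, 0, 0, 0, 0, 1, 0], dtype=float)

# A predictor trained with per-pixel mean-squared error minimizes expected loss by
# outputting the average of the plausible futures...
mse_optimal = (future_left + future_right) / 2
print(mse_optimal)  # [0.  0.5 0.  0.  0.  0.5 0. ] -- a faint smear, not a ball

# ...which is exactly the elongated blur described above. Adversarial and other
# multimodal objectives avoid this by rewarding outputs that look like *a* plausible
# future rather than the average of all of them.
```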

OpenAI bits and pieces

Here we GAN again: OpenAI research scientist Ian Goodfellow has summarized the Generative Adversarial Network tutorial he gave at NIPS 2016 and put it on arXiv. Thanks Ian! Read it here.

Policy Field Notes: Tim Hwang of Google and I tried to analyze some of the policy implications of AI research papers presented at NIPS in this ‘Policy Field Notes’ blog post. This sort of thing is an experiment and we’ll aim to do more (e.g., for ICLR and ICML) if people find it interesting. Zack Lipton was kind enough to syndicate it to his lovely ‘Approximately Correct’ blog. Let the AI Web Ring bloom!

Tech Tales:

[2022: A lecture hall at an AI conference, held in Reykjavik.]

A whirling cloud formation, taken from a satellite feed of Northern California, sighs rain onto the harsh white expanse of a chunk of the Antarctic ice sheet. The segment is in the process of calving away from the main continent’s ice shelf, as a 50-mile slit in the ice creaks and groans and lengthens. Soon it shall cleave. Now there’s a sigh of wind, followed by the peal of trumpets. Three flocks of birds shimmer into view, wings beating through the rain. The shadows of the creatures pixelate against the clear white of the ground, occasionally flared out by words that erupt in firework-slashes across the sky: ‘avian’, ‘flight’, ‘distance’, ‘cold’, ‘california’, ‘ice’, ‘friction’. The vision freezes, and a laser pointer picks out one bird’s beak catching the too-yellow light of an off-screen sun.

“It’s not exactly dreaming,” the scientist says, “but we’ve detected something beyond randomness in the shuffling. This beak, for instance, has a color tuned to the frequency of the trumpets, and the ice sheet appears to be coming apart at a rate determined by the volume of rain being deposited on the ground.”

He pauses, and the assembled scientists make notes. The bird and its taffy-yellow beak disappear, replaced by a thicket of graphs – the chunky block diagrams of different neural network layer activations.

“These scenes are from the XK-23C instance, which received new input today from perceptual plug-in projects conducted with NOAA, National Geographic, the World Wildlife Fund, and NASA,” he says. “When we unify the different inputs and transfer the features into the XK-23C we run a slow merging operation. Other parts of the system are live during this occurrence. During this process the software appears to try to order the new representations within its memory, by activating them with a self-selected suite of other already introduced concepts.”

Another slide: an audio waveform. The trumpet chorus peals out again, on repeat. “We believe the horns come from a collection of Miles Davis recordings recommended by Yann LeCun. But we can’t trace the tune – it may be original. And the birds? Two of the flocks consist of known endangered species. The third contains birds of a type we’ve never seen before in nature.”