Import AI: #76: Why government needs technologists to work on AI policy, training machines to learn through touch as well as vision with SenseNet, and using mazes to test smart agents.

by Jack Clark

Facebook releases free speech recognition toolkit, wav2letter:
…Also pays out computational dividend in the form of a pre-trained Librispeech model for inference…
Facebook has released wav2letter, open source automatic speech recognition software. The technology has been previously described – but not released as code – in two Facebook AI Research papers: Wav2Letter: an End-to-End ConvNet-based Speech Recognition System, and Letter-Based Speech Recognition with Gated ConvNets.
  The release includes pre-trained models. I view these kinds of things as a ‘research & compute dividend’ that Facebook is paying out to the broader AI community (though it’d be nice if academic labs were able to access similarly vast resources to aid their own research and ensuing releases).
– Read more: wav2letter (GitHub).

When are we getting ‘Prime Air’ for the military and why is it taking so long to develop?
…How commercial innovations in drone delivery could influence the military, which is fundamentally a large logistics organization with some pointy bits at the perimeter…
You might expect that by now the military would be using drones throughout its supply chains, given the rapid pace of development of the technology. So why hasn’t that happened? In this essay on The Strategy Bridge, Air Force officer Jobie Turner discusses how the US could use this technology, why it’s going to take a long time to deploy (“any new technology on the commercial market for logistics, will not have wartime survival as a precondition for employment”), and how rapid logistics capabilities could influence the military.
  Turner also notes that “speed and capacity have more often than not been a hindrance to U.S. logistics rather than a boon. Too much too soon has been a far worse a problem than too little too late. For example, in the campaign for Guadalcanal, U.S. Marines deposited tons and tons of food and equipment on the beaches upon landing, only to discover that they lacked the labor and machines to move the cargo off the beaches. As a result, several weeks’ worth of food washed out with the tide—exacerbating a tenuous supply situation. In a more recent example, during Operation Desert Storm, so much cargo was brought in by air and sea that ‘iron mountains’ were created with the materiel, much of it never reaching its destination.”
– Read more: The Temptations of the Brown Box (The Strategy Bridge).

Reach out and learn shapes with the new ‘SenseNet’ simulator:
…Simulator and benchmark aim to motivate progress in reinforcement learning beyond the typical visual paradigm…
Today, most AI research revolves around classifying audio or visual inputs. What about touch? That’s the inspiration behind ‘SenseNet’, a new 3D environment simulator and dataset released last week by Jason Toy.
  Close your eyes and imagine picking up a ceramic mug or touching the nearest object to you. Most people will find that imagined touch gives them an internal mental impression of the object that’s distinct from the one produced by imagining its appearance. It’s this sensorimotor sensation that inspired SenseNet, which challenges us to design algorithms that let machines classify objects via non-visual 3D sensing, potentially augmented by vision as well.
  SenseNet gives researchers access to a simulated ‘MPL’ robotic hand with a touch sensor integrated into one of its (simulated) fingers. This lets people experiment with algorithms that learn to classify objects by touch alone rather than by visual appearance. It uses an API loosely modelled on OpenAI Gym, so it should feel somewhat familiar to developers.
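  By way of illustration, here’s a minimal sketch of the kind of Gym-style reset/step loop such an environment encourages. Note that TouchEnv is a toy stand-in written for this newsletter, not SenseNet’s actual API – the action count, episode length, and one-dimensional pressure observation are all assumptions:

```python
# A minimal sketch of a Gym-style tactile-exploration loop.
# NOTE: TouchEnv is a toy stand-in, NOT SenseNet's documented API; the
# action count, episode length, and 1-D pressure reading are assumptions.
import random

class TouchEnv:
    """Toy tactile environment with a Gym-like reset/step interface."""
    NUM_ACTIONS = 6          # assumed: translate/rotate the hand along each axis
    EPISODE_LENGTH = 200     # assumed: fixed exploration budget per object

    def reset(self):
        self.steps = 0
        return self._touch_reading()

    def step(self, action):
        self.steps += 1
        obs = self._touch_reading()      # pressure felt at the fingertip
        reward = 0.0                     # a real env might reward correct classification
        done = self.steps >= self.EPISODE_LENGTH
        return obs, reward, done, {}

    def _touch_reading(self):
        return [random.random()]         # placeholder 1-D pressure value

env = TouchEnv()
obs, done, history = env.reset(), False, []
while not done:
    action = random.randrange(TouchEnv.NUM_ACTIONS)   # random exploratory motion
    obs, reward, done, info = env.step(action)
    history.append(obs)
# A learned agent would replace the random policy and feed `history`
# (the accumulated touch readings) into a classifier to guess the object.
```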
– Read more: SenseNet: 3D Objects Database and Tactile Simulator (Arxiv).

New environments: 2D Mazes for better AI:
…Open source project lets you train agents to solve a variety of different mazes, some of which have been used in cognitive neuroscience experiments…
Haven’t you ever wanted to be trapped in an infinite set of procedurally generated mazes, testing your intelligence by negotiating them and finding your way to the end? If so (or perhaps if you’re just a researcher interested in training AI agents), then gym-maze might be for you. The project, made by Xingdong Zuo, provides a customizable set of maze environments that can be used as OpenAI Gym environments. It ships with a maze generator, a nicely documented interface, and a Jupyter notebook that implements and visualizes a bunch of different maze types.
  Budding AI-neuroscience types might like the fact that it comes with a Morris water maze – a type of environment frequently used to test spatial learning and memory in rodents. (DeepMind and others have validated certain agents on Morris water maze tasks.)
  Bonus: It ships with a nicely documented A* search implementation, used to validate that the procedurally generated mazes are actually solvable (a generic sketch of the idea appears below).
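  For the curious, here’s what that validation idea looks like in miniature: a standard, textbook A* over a grid maze with a Manhattan-distance heuristic, checking whether any path exists from start to goal. This is a generic sketch, not gym-maze’s own implementation:

```python
# Generic A* search over a grid maze (0 = open cell, 1 = wall), used here
# to check that a maze is solvable. A textbook implementation with a
# Manhattan-distance heuristic, not gym-maze's own code.
import heapq

def astar_solvable(grid, start, goal):
    """Return True if A* finds a path from start to goal."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    best_g = {start: 0}                 # cheapest known cost to reach each cell
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return True
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return False

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
assert astar_solvable(maze, (0, 0), (0, 2))  # every generated maze should pass
```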
– Get the code for gym-maze here (GitHub).

AI music video of the week:
…New Justin Timberlake video features robots, academics, futuristic beats, deep learning, etc. Reports of ‘jumping sharks’ said to be erroneous…
2017: AI researchers became fashion models, via a glossy Yves Saint Laurent campaign.
2018: Justin Timberlake releases a new music video called ‘Filthy’, set in 2028 at the ‘Pan-Asian Deep Learning Conference’ in Kuala Lumpur, Malaysia. Come for the setting and stay for the dancing robot (though I think the type of hardware they’re showing off in this fictional 2028 is a bit optimistic – robots will likely still be pretty crappy at that point). Warning: the video comes with some fairly unpleasant sexism and objectification, which (sadly) may be a reasonable prediction.
– Check out the video here: Justin Timberlake, Filthy (YouTube, mildly NSFW).

Why technologists need to run, not walk, into government to work on AI policy:
…Op-ed from security expert Bruce Schneier says current rules insufficient for a robot-fueled future…
Governments are unprepared to tackle the policy challenges posed by an increasingly digitized world, says Schneier in an op-ed in New York Magazine.
  “Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy,” he says.
  Schneier suggests the government create a new agency to study this vast topic: the ‘Department of Technology Policy’, a somewhat souped-up and expanded version of Ryan Calo’s proposal for a ‘Federal Robotics Commission’. How exactly that would differ from the White House’s Office of Science and Technology Policy isn’t made clear in the article. (Currently, the OSTP is thinly staffed relative to the previous administration, and hasn’t produced many public materials concerned with AI, nor made any significant statements or policy pronouncements in that area – a significant contrast to other nations.)
  Somewhat gloomy policy comment: It’s much easier to ask various parts of government to account for AI in existing legislation or via existing legislative bodies than to spin up an entirely new agency, especially in a climate that generally treats government expenditure on science with suspicion.
– Read more: Click Here to Kill Everyone (NYMag – Select/All).
Read more: The case for a federal robotics commission, Ryan Calo (Brookings).

Predicting the unpredictable: Miles Brundage’s AI forecasts:
…Arxiv paper tracker & AI policy research fellow scores his own 2017 predictions…
Miles Brundage has scored his 2017 AI forecasts in a detailed blog post. Predicting AI progress is hard, and it’s instructive to see how Miles was careful at the outset to make his forecasts specific, yet found in 2018 that a couple of them were still open to interpretation – suggesting a need to calibrate the questions themselves further. I find this sort of meta-analysis particularly helpful in framing my own thinking about AI, so thanks to Miles and his collaborators for that.
  Highlights: Miles’s Atari predictions were on the money once you factor in compute requirements, but he was fuzzier on specific applications (StarCraft and speech recognition) and on more open-ended research areas, like transfer learning.
– Read more: Miles Brundage’s Review of his 2017 AI forecasts.
Read more: Miles’s original 2017 forecasts.

OpenAI/Misc Bits & Pieces:

Policy notes from NIPS 2017:
  How does the sort of cutting-edge research being discussed at AI conferences potentially influence policy? Tim Hwang (formerly Google, currently leading the Harvard-MIT Ethics and Governance of AI Initiative) tried to read the proverbial tea leaves in current research. Read on for his thoughts on robots, bias, and adversarial tanks.
  Read more here: NIPS 2017 Policy Field Notes (Medium).

Tech Tales:

[2XXX, a pet shop on a post-AGI Earth]

So after it happened a lot of wacky stuff went on – flying cars, partial Dyson spheres, postcard-sized supercomputers fired off at various distant suns, occasional dispensations of targeted and intricate punishments, machine-written laws, machine voting, the emergence of higher-dimensional AI beings who only interfaced with us ‘three-dee-errs’ via mathematical conjectures put forth in discussions with other AIs, chickens that laid eggs whose shells always came apart into two neat halves, and so on.

But the really weird stuff continues to be found in the more mundane places, like this pet shop. Obviously these days we can simulate any pet you’d like and most kids grow up with a few talking cats and dogs to hang out with. In fact, most of us spend most of our time in simulations rather than physical reality. But for some of us there’s still a lot of importance placed on ‘real stuff’. Real animals. Real people. Real sex. Real wine. You get the picture. Sure, we can simulate it so it’s all indistinguishable, but something about knowing that you’ve bought or consumed mass that is the end-product of literally billions of years of stellar interactions… I don’t know. People get a kick out of this. Even me.

So now I’m talking to the robot that runs the shop and I’m holding the cryobox containing my dead cat in one hand and my little digital device in the other and I guess we’re negotiating? It gets some of my data for the next few years and in return it’ll scan the cat, reconstruct its mind, then download that into a young kitten. Hey presto, Mr Tabby just became Mr Tabby Junior and gets to grow up all over again but this time he’ll be a little smarter and have more of his personality at the beginning of his life. Seems good to me – I get to hang out with Mr Tabby a while longer and he gets a young man’s body all over again. We should all be so lucky.

There’s a catch, though. Because when I said Mr Tabby was dead I wasn’t being totally accurate. Technically, he’s alive. I guess the correct phrase is ‘about to be dead’. My vet AI said this morning that Mr Tabby, at the age of 19 years and zero days, has anywhere from ‘0 to 200 days to live with unknown probability distribution across this temporal period’. That’s AI code for: Frankly I’m surprised the cat isn’t dead right now.

So I did what some people do, these days. I put him into stasis, chucked him in the cryobox, and got a flyer over to the pet store. Now once I’ve finished the negotiation with the robot it’s going to just become a small matter of opening the box, bringing Mr Tabby out of hibernation, then doing the thing that the robot and I are simply referring to as ‘the procedure’, because the actual name – Rapid And Complete Exfiltration of Determination via Euthanasia And Digitization (RACEDEAD) – gives me the creeps. (There’s an unsubstantiated rumor that the AIs compete with each other to come up with acronyms that creep humans out. Don’t ask what DENTALFLY means.)

So now I just need to let go of the cryobox’s lid so that it can automatically open and we can get going, but I’m finding it difficult. Like I said, for some of us there’s a lot of importance placed on real stuff, even real death when it’s not ‘real’ or ‘death’ (though death does occur). So now Mr Tabby is in the box and I guess he’s technically dead and alive until I figure out what I’m going to do. Ha!

Things that inspired this story: Brain emulation, The Age of Em by Robin Hanson, Schrodinger’s paradox, a cat I know that has taken to sleeping on my arm when I visit their owner.