Import AI: Issue 11: Robots learn through fighting & friendship, gloopy DNA storage, and an AI acronym crime

by Jack Clark

AI – hyped, or underhyped? AI is receiving an extraordinary amount of attention these days. Is this justified? Talk to technologists and the answer is ‘yes, with some qualifications’. People tend to feel that press coverage of AI glosses over the numerous flaws, dead ends, and implementation costs of the technology. But at the same time, most people are convinced that the commercial potential of AI is vast and mostly unexplored. That’s why Jeremy Howard, CEO of fast.ai, says in this video interview that ‘the potential for deep learning is greater than the potential of the internet in the early 90s’.

What is yellow, expensive, and marginally smarter than a rock? Fanuc’s robots! The company has begun to apply AI techniques like reinforcement learning to its robots so that they can be taught to do industrial tasks faster. Now Fanuc and NVIDIA have announced plans to stuff more GPU-based computing power into the yellow machines.

1 + 1 = SWARM INTELLIGENCE: Rapyuta Robotics, a spin-off from ETH Zurich, recently got $10 million in Series A funding to help it commercialize technology that lets different robots learn from each other. Seems like the right time to do so, given this research from Google, which shows that training multiple robots on the same task lets them pool their experience and learn more efficiently than a single machine training alone. But collaboration might be altogether too boring. Just wait until the robots start to teach each other to solve tasks by attempting to outfox one another, as outlined in this tantalizing paper from CMU and Google. ‘Having robots in adversarial setting might be a better learning strategy as compared to having collaborative multiple robots,’ they write in Supervision via Competition: Robot Adversaries for Learning Tasks.
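
To get a feel for the pooled-experience idea, here is a minimal sketch. It is not Google’s or CMU’s actual setup: the task, the numbers, and the tabular Q-learner are all invented for illustration. Several simulated ‘robots’ work on the same toy 1-D reaching problem and dump their transitions into one shared buffer, which a single learner trains on.

```python
# Toy illustration of pooled robot learning (not the actual Google/CMU systems):
# several simulated "robots" gather experience on the same made-up 1-D reaching
# task and push it into one shared buffer, so a single learner benefits from
# everyone's data.
import random

ACTIONS = [-1, 0, 1]   # move left, stay, move right
GOAL = 5               # target position on a line from 0 to 10
q_table = {}           # shared value estimates, keyed by (state, action)
shared_buffer = []     # experience pooled across all robots

def act(state, eps=0.2):
    """Epsilon-greedy action selection against the shared Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def robot_episode(steps=20):
    """One robot interacts with the toy task and logs its transitions."""
    state = random.randint(0, 10)
    for _ in range(steps):
        action = act(state)
        next_state = min(10, max(0, state + action))
        reward = 1.0 if next_state == GOAL else 0.0
        shared_buffer.append((state, action, reward, next_state))
        state = next_state

def learn(lr=0.1, gamma=0.9):
    """One Q-learning pass over the experience pooled from all robots."""
    for s, a, r, s2 in shared_buffer:
        best_next = max(q_table.get((s2, a2), 0.0) for a2 in ACTIONS)
        old = q_table.get((s, a), 0.0)
        q_table[(s, a)] = old + lr * (r + gamma * best_next - old)

for _ in range(200):
    for _robot in range(4):   # four robots collect data each round
        robot_episode()
    learn()

# The greedy action at position 3 should point toward the goal (i.e. +1).
print(max(ACTIONS, key=lambda a: q_table.get((3, a), 0.0)))
```

The adversarial variant in the CMU/Google paper changes what the second robot optimizes for (disrupting the first robot’s grasp) rather than this plumbing, but the shared-experience loop is the part the Rapyuta-style cloud robotics pitch is about.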

Samsung buys Viv: Samsung has acquired Viv, an AI startup founded by the people who helped create Siri at Apple and before that worked on SRI’s CALO project. Viv generated a lot of press and made frequent cryptic references to work on program synthesis, but did not publish any meaningful technical details about its approach. Now that it has been acquired, I hope the company will publish a paper so the AI community can assess its work and share in any insights the team has had.

No AIs in the classroom, or else! As if phones weren’t bad enough, the Allen Institute for AI has released a live demo of Euclid, a tool that solves SAT-style math questions. It’s got some weird tendencies, for example: ‘Question: What is the smallest number? Euclid: -120.0’. Well, that settles that then…

Even more free data: Self-driving startup Comma.ai released 80GB of driving data a couple of months ago. ‘Pah! That’s nothing,’ I imagine Udacity’s Oliver Cameron saying, as he presses the big red button to release 223GB of Mountain View driving data. Sooner or later we’re going to have trouble storing all of this information, so keep an eye on the burgeoning field of DNA storage for future solutions to density, redundancy, and resiliency problems. “We stored an entire computer operating system, a movie, a gift card, and other computer files with a total of 2.14*10^6 bytes in DNA oligos. We were able to fully retrieve the information without a single error even with a sequencing throughput on the scale of a single tile of an Illumina sequencing flow cell,” write some gloopy researchers in the abstract of their paper ‘Capacity-approaching DNA storage’.
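
As a toy illustration of the basic idea (nowhere near the paper’s actual scheme, which layers fountain codes and error screening on top), here is how binary data maps onto the four DNA bases at two bits per nucleotide:

```python
# Naive sketch of DNA data storage: pack 2 bits into each nucleotide and back.
# The 'Capacity-approaching DNA storage' work adds fountain codes and error
# screening on top of a mapping like this; none of that is modeled here.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-style string of A/C/G/T (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(oligo: str) -> bytes:
    """Invert encode(): read the bases back into bytes."""
    bits = "".join(BITS_FOR_BASE[base] for base in oligo)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Import AI"
oligo = encode(message)
assert decode(oligo) == message
print(len(message), "bytes ->", len(oligo), "bases")  # 9 bytes -> 36 bases
```

At two bits per base, the 2.14 * 10^6 bytes in the quote works out to roughly 8.6 million nucleotides before any redundancy is added.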

Import AI + OpenAI: I’m planning a regular blogpost/newsletter for OpenAI in which I’ll try to analyze the monthly trends in AI from both a research and an industry perspective. Is there anything in particular you think I should focus on? Get in touch, please! It’ll be a bit longer than an Import AI issue and a bit more technical, I think.

New research: GAWWN$%^@? Call the acronym police, a crime has been committed! A new paper proposes the Generative Adversarial What-Where Network (GAWWN). Like other GANs it can create plausible-looking synthetic images; unlike other approaches, it can be told both what to draw and where to draw it, and it uses those instructions to produce better, more realistic images than before. ‘This is pretty bonkers,’ says Miles Brundage. We agree!
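
For a sense of the mechanics, here is a heavily simplified ‘what-where’ conditional GAN sketch in PyTorch. It is not the GAWWN architecture: everything below, from the tiny MLPs to the random stand-in data, is invented purely to show how a ‘what’ signal and a ‘where’ signal get fed to both the generator and the discriminator.

```python
# Hypothetical minimal sketch of a "what-where" conditional GAN.
# The real GAWWN conditions convolutional networks on text embeddings plus
# keypoints/bounding boxes over real images; here "what" is a random vector
# standing in for a text embedding, "where" is a 4-d box, and images are
# flat feature vectors, so the whole thing stays tiny and runnable.
import torch
import torch.nn as nn

Z_DIM, WHAT_DIM, WHERE_DIM, IMG_DIM = 16, 8, 4, 64  # toy sizes, not the paper's

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + WHAT_DIM + WHERE_DIM, 128), nn.ReLU(),
            nn.Linear(128, IMG_DIM), nn.Tanh())
    def forward(self, z, what, where):
        # Noise plus the "what" and "where" instructions go in together.
        return self.net(torch.cat([z, what, where], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + WHAT_DIM + WHERE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, img, what, where):
        # The critic judges (image, what, where) triples, not images alone.
        return self.net(torch.cat([img, what, where], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    batch = 32
    what = torch.randn(batch, WHAT_DIM)   # stand-in for a text embedding
    where = torch.rand(batch, WHERE_DIM)  # stand-in for a bounding box
    real = torch.randn(batch, IMG_DIM)    # stand-in for real image features
    z = torch.randn(batch, Z_DIM)
    fake = G(z, what, where)

    # Discriminator: push real conditioned pairs toward 1, generated toward 0.
    d_loss = bce(D(real, what, where), torch.ones(batch, 1)) + \
             bce(D(fake.detach(), what, where), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator given the same "what" and "where".
    g_loss = bce(D(fake, what, where), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the discriminator scores (image, what, where) triples rather than images alone, the generator gets punished both for unrealistic pictures and for ignoring its instructions, which is the basic trick that lets this family of models be steered.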