Import AI: Issue 14: A Chinese robot boom, 1,000 miles of driving data, and a culture clash at SoftBank

China’s 13th Five-Year Economic Plan promises big things for robots & AI: China plans to make substantial investments in robotics, according to the country’s 13th five-year economic plan. The funding boost will support the Made in China 2025 initiative, which aims to improve the quality of Chinese-made goods while suffusing the country’s factories with smart machines. The robotics push may have significant consequences; the 12th plan made the development of a domestic semiconductor industry a priority, and now the world’s fastest supercomputer, the Sunway TaihuLight, runs on Chinese-designed chips rather than the Intel processors used in the vast majority of other top machines. If the various robotic investments triggered by the plan pay off, there could be big consequences for artificial intelligence. Modern AI techniques have already been used by robot makers like Fanuc to substantially increase the speed at which a factory robot can be taught to excel at a particular task. This has led to related investments in areas like reinforcement learning (see: Fanuc’s partnership with Japanese AI startup Preferred Networks, or ABB’s relationship with Vicarious), which has fed into mainstream AI development. A flood of money from China, paired with built-in end-customers in the form of large Chinese government labs, means robot-related AI development could speed up. Take a look at this gallery of a robot expo hosted by the Chinese army and imagine what kinds of advances will be possible by piggybacking on the manufacturing robot push.

If ‘Pepper’: then AI == False: SoftBank’s tablet-clutching, Michelin Man-esque robot Pepper has not done nearly as well as the company hoped, disappointing customers. AI (or the lack of it) is partly to blame. The company had planned to apply modern AI techniques using neural networks to create a more advanced brain for the robot, but slowdowns and culture clashes between Aldebaran Robotics, a French company bought by SoftBank in 2012, and SoftBank’s engineering teams and managers in Japan led to problems and delays. For example, instead of using deep learning techniques to learn to recognize emotions and react appropriately, the programmers had to dictate the responses by writing a laborious set of if-then statements. This limited Pepper’s abilities, and many companies wound up using the robot more as a cute tablet-toting marionette than as an interactive AI assistant. It’s a lesson in the importance of communication and what happens when that goes awry.
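To illustrate the difference, here's a minimal, hypothetical sketch of the hand-written rule approach described above. The emotion labels and canned replies are invented for illustration (Pepper's actual code is not public):

```python
def respond(emotion: str) -> str:
    """Map a detected emotion to a canned reply via hard-coded if-then rules.

    Every case must be anticipated and written out by hand; a learned model
    would instead generalize from labeled examples.
    """
    if emotion == "happy":
        return "You seem cheerful today!"
    elif emotion == "sad":
        return "I'm sorry you're feeling down."
    elif emotion == "angry":
        return "Let's take a deep breath together."
    else:
        # Anything the programmers didn't anticipate falls through to a default.
        return "Hello! How can I help?"
```

Scaling this to the full range of human emotion, phrasing, and context is exactly the kind of laborious enumeration the deep learning approach was meant to avoid.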

More gratuitous GTAV self-driving bot footage: In Issue 12 I wrote about research showing that hit game GTAV is a reasonable simulator in which to train self-driving cars. This week I discovered that Craig Quiter, a technical contractor at OpenAI, has a personal project called DeepDrive where he uses GTAV to develop his own self-driving systems. Watch videos of a car navigating traffic and keeping to the center of a lane, and marvel at the moment a snowstorm causes its visual brain to break down. Videos & more information here.

I would drive 500 miles, and I would self-drive 500 more, just to be the (self-driving) car that brings data to your door: the proliferation of free datasets for self-driving continues. The Oxford RobotCar dataset comes from researchers spending a year traversing the city of Oxford twice a week in a self-driving car. The 20-terabyte release covers around 100 journeys over the same route and includes camera data as well as LIDAR and GPS. The repetition of the route is important because the resulting dataset will let people teach AI systems to make sense of the many permutations of the world along a route, like shifting pedestrians, different weather patterns, rain, and so on. This follows similar data releases from Udacity and Comma.ai. We’re lucky to live in a time when companies and universities generate so much free data.

Comma catastrophe: self-driving car startup Comma.ai has cancelled its main product, the Comma One – a $999 kit to retrofit modern cars into self-driving robo-chauffeurs. Founder George Hotz said he cut the product after receiving a letter from the US regulator NHTSA. “comma.ai will be exploring other products and markets. Hello from Shenzhen, China. -GH”, Hotz writes.

Building up Canada’s AI ecosystem: In the same way PayPal spawned a generation of entrepreneurs who came to influence the tech industry, Canada created its own gang of hugely influential AI researchers through CIFAR, a funding body that supported people like Geoff Hinton (now at Google), Yann LeCun (now at Facebook), and Yoshua Bengio (now at UMontreal). Their contributions form much of the bedrock of today’s popular AI techniques, ranging from systems for image recognition and machine translation at Google to state-of-the-art memory systems at Facebook. But Canada has struggled to retain its top talent, as professors are drawn away to companies and US-based schools massively increase investment in AI programs. To counter that, a group of researchers and entrepreneurs have launched Element AI, a Montreal-based AI incubator-slash-research lab. Influential researcher Yoshua Bengio, the Paul Erdős of AI, is one of the founders.

Welcome to the world of the self-defending autonomous AI corporation: in Charles Stross’ (excellent, free) sci-fi novel Accelerando, the world is suffused with smart, semi-autonomous AI-driven digital corporations that indulge in a constant cacophony of deal-making, legal subterfuge, and company formation and destruction. It feels like a plausible future, but requires three prerequisites: 1) a form of digital currency with various forms of metadata built into it to let computers participate in a universal economic market, 2) the ability of corporations to exchange information with each other privately and efficiently, and 3) better decision-making AI systems capable of feats of memory, transfer learning, and ideation. Software like Bitcoin and Ethereum attempts to solve problem one, new research from Google tackles problem two, and the AI research community is working on problem three. The new Google paper, Learning to Protect Communications with Adversarial Neural Cryptography, outlines what the authors call a neural cryptography system. This lets semi-independent AIs improvise a secure communication channel with each other in the presence of an adversary. Give it a few years and there’ll be a proliferation of microscopic economic agents that barter privately with each other in a market of digital information.
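The paper's setup has three neural networks: Alice encrypts a plaintext using a key she shares with Bob, Bob decrypts it with the key, and Eve tries to recover the plaintext from the ciphertext alone, with Alice and Bob trained to maximize Eve's error. As a minimal, non-neural sketch of that communication setup (the real system learns its mixing function end-to-end rather than being handed XOR), consider:

```python
import secrets

def alice_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Alice mixes the message with a key Bob also holds (XOR, for the sketch).
    return bytes(p ^ k for p, k in zip(plaintext, key))

def bob_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # Bob undoes the mixing with the shared key; Eve, lacking it, cannot.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

message = b"deal terms"
shared_key = secrets.token_bytes(len(message))  # known to Alice and Bob only

ciphertext = alice_encrypt(message, shared_key)
assert bob_decrypt(ciphertext, shared_key) == message  # Bob recovers the message
# Eve observes only the ciphertext; with a fresh random key per message, every
# plaintext of the same length is equally consistent with what she sees.
```

The interesting result in the paper is that nothing like XOR is hard-coded: Alice and Bob discover their own (neural, approximate) scheme with this property purely from adversarial training signals.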

Image recognition isn’t only for the data titans: Image recognition startup Clarifai has raised a $30 million Series B. Clarifai is led by AI researcher Matt Zeiler, and the company proved its AI chops in 2013 by winning the ImageNet competition. It sells image recognition services to customers around the world, letting them use AI to automatically organize their photos and videos, or identify specific items in images (hypothetical example: ad agencies training a Clarifai AI to recognize and spot different brand logos on clothing, then using the software to automatically patrol the web to identify photos containing the brand). “Today Clarifai not only hosts a static API that tags thousands of images and videos with human level accuracy, but now empowers anyone from a wedding photographer to a large retail company to a sail boat enthusiast (hat tip: USV’s Albert Wenger) to easily access and build products employing Google level AI with drag and drop ease of use,” writes Lux Capital, which invested in the round. Clarifai’s ongoing success is an intriguing counterpoint to the narrative that it’s difficult for AI startups to compete with the resources wielded by vast tech companies like Amazon, Google, Microsoft, and others.