Import AI: Issue 10: Data is the new (expensive) coal, Microsoft revamps research division for AI, and Google asks for better debug tools

by Jack Clark

Free data!: Either someone has made off with the keys to Google’s data vault, or the organization has made a strategic decision to give away a teeny-tiny percentage of its competitive advantage. Now, along with sharing code and programming tools (TensorFlow), the company has begun to share large amounts of data as well. Case in point: this week it announced the Open Images dataset, which consists of 9 million URLs to images that have been annotated with labels spanning over 6,000 categories. However, these labels were applied by Google’s own AI systems, so there’s a chance they may not reflect reality as accurately as human-annotated labels would. It also announced YouTube-8M, a mammoth video dataset. Salesforce MetaMind released WikiText, a large text dataset to help people produce better language models.
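Because the Open Images labels are machine-generated, anyone using the dataset will likely want to filter on annotation confidence. Here’s a minimal sketch of that idea; the column names, file format, and threshold are assumptions for illustration, not the dataset’s actual schema:

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of an Open Images-style index: each row pairs an
# image URL with a machine-generated label and a confidence score.
# The columns and values below are invented for this example.
sample = """image_url,label,confidence
https://example.com/img1.jpg,dog,0.92
https://example.com/img2.jpg,cat,0.88
https://example.com/img3.jpg,dog,0.41
"""

# Count labels, keeping only high-confidence machine annotations --
# a simple guard against the noisier automatic labels.
counts = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    if float(row["confidence"]) >= 0.5:
        counts[row["label"]] += 1

print(counts)  # Counter({'dog': 1, 'cat': 1})
```

The low-confidence "dog" row is dropped, which is the kind of cleanup step human-annotated datasets mostly spare you.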

If data is the new coal, who can use it? Coal is useful. Neil Lawrence (University of Sheffield and now a member of Amazon’s ML team) has said data is as important to machine learning as coal is to the generation of power. So releases of data by organizations and companies are a great thing. But it’s worth bearing in mind that turning coal into energy is expensive and requires a significant amount of infrastructure. “£20k buys 3 machines for AI research. Typical experiments by companies use hundreds or thousands of machines. See the gap that worries me?,” writes Nando de Freitas of Google DeepMind. “Hardware is the least of our worries. Having high quality open datasets in areas that matter is vastly more important issue,” counters Google’s Francois Chollet. Either way, it’s worth bearing in mind that developing modern AI is tremendously expensive and requires vast hardware investments; merely having access to the data isn’t enough — you need to be able to marshal the resources and technology to deploy on large pools of infrastructure.

The machine that builds the machine that builds the machine: Skip to the second half of this (excellent) lecture by Nando de Freitas to get a good overview of the latest AI research, which seeks to develop computers that can learn to design certain aspects of AI systems, which then solve tasks. Currently, good AI researchers have strong intuitions which they use to develop certain arrangements of neural network software. The next generation of software will replicate some of this intuition and design aspects of the machinery itself.
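The simplest version of this idea is a program that searches over network configurations and keeps whichever one scores best. The sketch below uses random search with an invented search space and a stand-in scoring function; real systems in this line of research use learned controllers and actually train each candidate network:

```python
import random

random.seed(0)

def evaluate(config):
    # Stand-in for the expensive step: training a network with this
    # configuration and measuring its validation accuracy. The formula
    # here is invented purely so the example runs quickly.
    depth_penalty = abs(config["layers"] - 4)
    width_penalty = abs(config["units"] - 128) / 128
    return 1.0 - 0.1 * depth_penalty - 0.2 * width_penalty

# Randomly sample architectures and keep the best-scoring one -- the
# machine choosing part of the machine's design.
best = None
for _ in range(20):
    config = {
        "layers": random.randint(1, 8),
        "units": random.choice([32, 64, 128, 256]),
    }
    score = evaluate(config)
    if best is None or score > best[0]:
        best = (score, config)

print(best)
```

Swapping the random sampler for a model that learns which configurations to propose next is, roughly, what distinguishes this research from plain hyperparameter search.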

Better cloud computers: After years of developers grumbling about its ageing fleet of GPUs, someone at Amazon has taken the wraps off of its new AI infrastructure. You can now rent computers that lash together up to 8 NVIDIA Tesla K80 accelerators from Amazon Web Services, and you can pair that with a nice software bundle developed by Amazon for running AI applications. Though companies are dabbling in other chips, ranging from FPGAs, to ASICs, to novel coprocessors, GPUs look like they’ll remain the standard workhorse for AI for years to come. In related news, Nvidia’s share price has more than doubled in the last year.

Relatively good robots: Deep learning techniques are washing into the field of robotics, speeding up progress there. That was most visible in this year’s Amazon Picking Challenge, where entrants used robotic arms to pick up objects in a (simplified) warehouse. The results were significantly better than the year before, and many of the teams had adopted deep learning techniques to make their robots better at seeing and acting in the world. Now, an MIT team which ranked highly in the competition has published a research paper, Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge (PDF), outlining the system it used in the competition. One thing worth noting is that datasets like ImageNet don’t provide the right sort of data needed to train these industrial robots, so one of the inventions the team came up with was a method for its robots to create their own large datasets of the objects they were trying to pick up.

The incredible, surprising, inevitable, metastasizing nature of artificial intelligence: Microsoft has gone through one of its periodic restructurings to create an AI research group consisting of more than 5,000 computer scientists and engineers. Why? Because the flexibility and utility of AI software has reached the point where researchers can come up with new techniques and engineers can (relatively) easily port these over to work on specific tasks within specific business divisions. Therefore, it makes sense to throw more resources into AI organizations, because the inventions usually allow you to extract some kind of short-term information arbitrage advantage over your competitors, or let you reduce the cost of carrying out some part of your business. The same attitude underlies the ‘Brain’ group at Google, and Facebook’s Applied Machine Learning group. Amazon is making similar moves to expand and build up its AI group, as is IBM. “AI is shifting the computer science research supply chain and blurring lines between research and product,” writes MSR’s AI czar Harry Shum. “End-to-end innovation in AI will not come from isolated research labs alone, but from the combination of at-scale production workloads together with deep technology advancements in algorithms, systems and experiences.”

Please, build this for me: (And by me, I mean Google). “Better debugging tools will help researchers understand why their models aren’t learning, better experimentation management will make it easier for them to run and analyze more experiments,” writes Rajat Monga, the engineering director for TensorFlow, in a Quora session.