DeepMind learns about AI through DILBERT SIMULATIONS: New research by DeepMind and University College London explores how the brain understands and analyzes social hierarchies. "The prefrontal cortex, a region that is highly developed in humans, was particularly important when participants were learning about the power of people in their own social group, as compared to that of another person. This points towards the special nature of representing information that relates to the self," says researcher Dharshan Kumaran. Pity the 30 "healthy college students" who formed the dataset for the experiment: while lying in an fMRI scanner, they were asked to study the power structure of a fictitious company, exploring social dynamics through the lens of the Taylorist cubeville culture that defines the 21st Century.
Math Spaghetti (it’s good for you): Pieter Abbeel and John Schulman’s slides (PDF) for their NIPS tutorial on reinforcement learning and policy optimization are worth your time if you love understanding the algorithms that power AI systems. If you’re not comfortable with the math, you’d do well to skip to the end and read the “current frontiers” section to understand why meta-learning, inverse RL, sim2real transfer learning, and other areas are going to be big in 2017.
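If the slides feel dense, the core policy-gradient trick they build on (REINFORCE) fits in a few lines. Here's a minimal sketch on a hypothetical two-armed bandit; the reward probabilities and hyperparameters are invented for illustration, not taken from the tutorial.

```python
import math
import random

random.seed(0)

REWARDS = [0.2, 0.8]  # assumed success probability of each arm
theta = [0.0, 0.0]    # policy logits, one per arm
ALPHA = 0.1           # learning rate

def policy(theta):
    """Softmax over the logits gives action probabilities."""
    exps = [math.exp(t) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

for step in range(2000):
    probs = policy(theta)
    # Sample an action from the current stochastic policy.
    arm = 0 if random.random() < probs[0] else 1
    # Bernoulli reward based on the chosen arm's success probability.
    reward = 1.0 if random.random() < REWARDS[arm] else 0.0
    # REINFORCE: for a softmax policy, grad log pi(arm) wrt logit i
    # is (1[i == arm] - probs[i]); scale by the observed reward.
    for i in range(2):
        indicator = 1.0 if i == arm else 0.0
        theta[i] += ALPHA * reward * (indicator - probs[i])

final_probs = policy(theta)  # should now favor the better arm (index 1)
```

The policy drifts toward the higher-reward arm because actions that pay off get their log-probability nudged up; everything beyond this toy (baselines, trust regions, and so on) is what the tutorial is actually about.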
Data doping: real-world data is the Beluga Caviar of AI – expensive and time-consuming to extract from the world. That’s driven people to look for ways to augment real-world datasets with cheaper, synthetic products. This week, European researchers contributed a new dataset called PHAV, short for Procedural Human Action Videos, which consists of 37,536 videos, with over 1,000 examples for each of 35 basic action categories. Their research suggests “that our procedurally generated videos can be used as a simple drop-in complement to small training sets of manually labeled real-world videos. Hence, we can leverage state-of-the-art supervised deep models for action recognition without modifications, yielding vast improvements over alternative unsupervised generative models of video,” they write. You can find more information in the paper: “Procedural Generation of Videos to Train Deep Action Recognition Networks,” here (PDF).
Code releases: DeepMind has released the code behind its delightfully recursive ‘learning to learn by gradient descent by gradient descent’ research, which uses machine learning to design optimization algorithms, rather than relying on the intuitions of highly-paid AI researchers. It’s written in TensorFlow, naturally. This will aid the industrialisation of deep learning by reducing the need for specialist knowledge on the part of those implementing algorithms…
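To give a flavor of the idea without the full LSTM optimizer in DeepMind's release: the simplest version of "learning the optimizer" is treating one optimizer parameter (here, just the step size) as itself learnable by gradient descent. This toy uses f(x) = x² and an analytic meta-gradient, purely as an illustration of the recursion; it is not the paper's method.

```python
# Inner problem: minimize f(x) = x^2 by gradient descent.
# Meta problem: learn the step size 'lr' so that each inner step
# reduces the post-step loss as much as possible.
lr = 0.01        # the optimizer parameter we will learn
META_LR = 0.001  # step size for the meta-optimizer (fixed, for simplicity)
x = 5.0

for step in range(500):
    # Inner update: one gradient step on f(x) = x^2, where f'(x) = 2x.
    grad = 2 * x
    x_new = x - lr * grad
    # Meta update: the post-step loss is (x - lr*2x)^2 = x^2 * (1 - 2*lr)^2,
    # so its derivative with respect to lr is -4 * x^2 * (1 - 2*lr).
    meta_grad = -4 * x * x * (1 - 2 * lr)
    lr -= META_LR * meta_grad
    x = x_new
```

The step size grows toward the value that zeroes out the next-step loss, and x collapses to the minimum far faster than with the initial hand-picked learning rate – the "by gradient descent by gradient descent" joke, in miniature.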
… additionally, Google has released transfer learning code for image recognition. The TensorFlow code lets you take a pre-trained model “and train a new top layer that can recognize other classes of images”…
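The recipe that code automates is simple: keep the pre-trained network frozen as a feature extractor and fit only a small new classifier on top. A minimal sketch, with made-up 3-d "features" standing in for what an Inception model would actually produce:

```python
import math
import random

random.seed(1)

def make_example(label):
    # Pretend the frozen network maps class-0 images near (1, 0, 0.5)
    # and class-1 images near (0, 1, 0.5), plus some noise.
    # (These embeddings are invented for the sketch.)
    base = [1.0, 0.0, 0.5] if label == 0 else [0.0, 1.0, 0.5]
    return [b + random.gauss(0, 0.1) for b in base], label

data = [make_example(i % 2) for i in range(200)]

# The new "top layer": a logistic-regression classifier over the
# frozen features, trained with plain SGD.
w = [0.0, 0.0, 0.0]
b = 0.0
LR = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid probability of class 1

for epoch in range(20):
    for x, y in data:
        p = predict(x)
        grad = p - y  # gradient of the log loss wrt the logit
        w = [wi - LR * grad * xi for wi, xi in zip(w, x)]
        b -= LR * grad

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

Because only the tiny top layer is trained, this works with far less data and compute than training the whole network – which is the appeal of the release.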
Hey, look, no clerks! Amazon’s new retail store: It’s almost Christmas, so Amazon has pulled one of its annual PR stunts designed to generate headlines, press, and sales. And I’m playing right into it. The new product announced by Amazon is a store called ‘Amazon Go’ which contains ‘walk right out’ technology to let you grab your goods and stroll out of the store. No need for cashiers or clerks – sophisticated machine learning algorithms figure out what you’ve grabbed, and bill your account appropriately. Though judging by the video, which contains innumerable individually-wrapped products, it’s likely the main tech supporting this is a bunch of RFID tags embedded in (the outside of) cupcakes.
Life as a conference-going telepresence robot: Conferences aren’t easy for everyone – the cramped, people-thronged halls of convention centers can prove challenging for some people for mental or physical reasons. So why not tap into the power of robots and telepresence to attend instead? IT consultant & writer Trevor Pott shares his poignant story of attending a tradeshow via a telepresence bot here. Please be kind to any robots you see at NIPS.
Geometric Intelligence + Uber: CarBorg company Uber has acquired don’t-call-it-deep-learning startup Geometric Intelligence to form an AI research lab. The 15-strong team will join Uber, bringing a wealth of varied AI expertise into the company, from psychologist (and noted neural net skeptic) Gary Marcus, to evolutionary algorithm chap Jeff Clune, to fMRI-for-AI researcher Jason Yosinski. Chief Science Officer Zoubin Ghahramani will be staying in Cambridge for the time being. I’d like to tell you about Geometric’s technology, but the company has been tight-lipped about its approach…
… coincidentally, Uber’s current head of machine learning, Danny Lange, is leaving the company to join game engine maker Unity.
#FakeNewsChallenge… Fake news played a role in the recent US election, as did the seeming inability of tech companies to deal with it. That prompted self-driving car expert & adjunct CMU faculty member Dean Pomerleau to start the #FakeNewsChallenge – there’s a total of $2,000 to be awarded to the top five teams, with payments in proportion to the accuracy of their ginned-up AI systems at spotting fake news.
All hail the Visual Sentinels: New research from Salesforce MetaMind, Virginia Tech, and the Georgia Institute of Technology, called “Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning,” breaks new ground in pairing vision and language techniques inside single systems, creating an image captioning system that appears to spit out more useful telemetry about what internal representations it has developed and how they map to visual elements. Bonus points for the term ‘visual sentinel’.
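The gist of the sentinel trick, as the paper describes it: at each word the model attends over image regions plus one extra learned "sentinel" vector, so it can choose to lean on its language state rather than the image for non-visual words like "the". A rough sketch with invented scores (the real model computes these from an LSTM state):

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def adaptive_attention(region_scores, sentinel_score):
    # Append the sentinel's score to the region scores and softmax over all.
    weights = softmax(region_scores + [sentinel_score])
    beta = weights[-1]             # how much the model trusts the sentinel
    region_weights = weights[:-1]  # attention over actual image regions
    return region_weights, beta

# A visual word ("dog") should put little weight on the sentinel...
_, beta_visual = adaptive_attention([2.0, 0.5, 0.1], sentinel_score=-1.0)
# ...while a function word ("the") can lean on it heavily.
_, beta_nonvisual = adaptive_attention([0.1, 0.0, 0.2], sentinel_score=3.0)
```

That per-word beta is exactly the "useful telemetry" mentioned above: it tells you when the model is looking at the image and when it's just running on language priors.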
Flying pig in ‘salmon glaze’ color spotted over Cupertino… Apple will start publishing AI papers, Russ Salakhutdinov, of CMU/Apple, said in a presentation at NIPS on Tuesday.
We are at NIPS. Schedule here.
I’m saving this for the regular, full fat ImportAI edition.
Now, remember, NIPS conference goers: follow the advice that George W Bush gave to Obama upon handing over office: “always use Purell hand sanitizer”.