Import AI: Issue 8: Starcraft as the new AI battleground, report from Bay Labs’ African expedition, generative models and platonic forms

by Jack Clark

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

Deep learning goes to Africa, helps some kids: Last week I told you about Bay Labs and some collaborators taking technology to Africa to help identify symptoms of Rheumatic Heart Disease (RHD) in Kenyan school children. The Bay Labs software uses deep learning to analyze data derived from an ultrasound and make an educated guess as to whether it’s seeing something consistent with RHD. During the trip, medical professionals scanned 1,200 children in four days and spotted 48 children with RHD or congenital heart disease, which gave them a chance to test the Bay Labs tech and see if it worked. “The feedback from our tests was overwhelmingly positive, particularly coming from Kenyans who had never used an ultrasound scanning device before. John, for instance, a clinical officer working for the Eldoret hospital, was able to acquire the right view after a few minutes of using the prototype, and to see the recommendations of the Bay Labs prototype (it was a non-pathological case here). I spent some time interviewing him after the fact and it was hard to contain his enthusiasm. He performed what usually takes a sonographer a few years of training in a few minutes!” Bay Labs’ Johan Mathe tells me. Check out some pictures from the trip here. If you or anyone you know is trying to deploy deep learning into (affordable) healthcare systems to help people, then please let me know.

Intelligent Utilities: Sci-fi author Stephen Baxter has a pet theory that one of the test-beds for really sophisticated AI systems will be planet-spanning utility systems. The idea is that if you’re tasked with managing power for a sufficiently large system then you’ll need some degree of intelligence to match inputs with outputs, distribute load effectively, and even manipulate some of your edge hardware (fields of solar panels, dams, etc.) to modify those inputs. So it’s interesting to see this postdoc position at Oxford, which seeks a researcher to apply machine learning methods to the noisy, local measurements generated by large energy storage systems.
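For a flavor of the kind of problem that posting describes, here’s a minimal, purely illustrative sketch (my own toy setup, not anything from the Oxford project): recovering a battery’s underlying daily charge cycle from noisy local measurements, using regularized least squares on a small Fourier basis.

```python
# Toy example: denoise simulated state-of-charge readings.
# The charge cycle, noise level, and basis are all assumptions
# made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 200)                          # hours in a day
true_soc = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24)    # hypothetical daily cycle
measured = true_soc + rng.normal(0, 0.08, t.size)    # noisy sensor readings

# Ridge-regularized least squares on a constant + one Fourier harmonic.
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 24),
                     np.cos(2 * np.pi * t / 24)])
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(3), X.T @ measured)
estimate = X @ w

print("mean abs error, raw vs. fitted:",
      np.abs(measured - true_soc).mean(),
      np.abs(estimate - true_soc).mean())
```

The real research problem is far messier, of course: thousands of heterogeneous devices, drifting sensors, and load-scheduling decisions layered on top of the estimation.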

The (synthetic) players of games: Starcraft, a real-time strategy game released in 1998 that is still played and watched by tens of thousands of people a month in South Korea, could well be the next ‘grand challenge’ on which companies test their artificial intelligence systems. The game pits players against one another in a battle containing numerous units that spans land and air, full of subterfuge, fast-paced play, and imperfect information, all dependent on an underlying resource-extraction economy that each player must carefully build, tend, and defend. Google DeepMind has dropped numerous hints that Starcraft is a game it’s paying attention to, and last week Facebook AI Research published a paper in which it used neural networks to learn some troop movement policies within a Starcraft game.

The self-modifying, endlessly mutating, e-commerce website: a new product from AI startup Sentient makes it possible for a website to ‘evolve’ over time to achieve higher sales. Sentient Ascend converts a web page into numerous discrete components, then shuffles through various arrangements of them, breeding and evolving its way to a page that is deemed successful, e.g. one that generates more purchases. This relies on the company’s technology, which pairs a specialism in evolutionary computation with a massive, million-CPU-plus computer farm spread out across the world. No surprise that University of Texas evolutionary algorithm professor Risto Miikkulainen has been working there since mid-2015.
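Sentient hasn’t published Ascend’s internals, but a hedged toy sketch of the general evolutionary idea looks something like this: treat a page as a genome of component variants, score each arrangement by conversion rate, and breed the winners. Every component name and fitness number below is invented for illustration; in production the fitness signal would come from live visitor traffic.

```python
# Toy genetic algorithm over web-page layouts (all values hypothetical).
import random

# Each page "genome" picks one variant per component.
COMPONENTS = {
    "headline":   ["A", "B", "C"],
    "hero_image": ["photo", "illustration"],
    "cta_button": ["green", "orange", "blue"],
}

def random_page():
    return {c: random.choice(v) for c, v in COMPONENTS.items()}

def fitness(page):
    # Stand-in for a measured conversion rate, plus measurement noise.
    score = {"A": 0.01, "B": 0.03, "C": 0.02}[page["headline"]]
    score += {"photo": 0.02, "illustration": 0.01}[page["hero_image"]]
    score += {"green": 0.01, "orange": 0.03, "blue": 0.02}[page["cta_button"]]
    return score + random.gauss(0, 0.005)

def crossover(a, b):
    return {c: random.choice([a[c], b[c]]) for c in COMPONENTS}

def mutate(page, rate=0.1):
    return {c: random.choice(COMPONENTS[c]) if random.random() < rate else v
            for c, v in page.items()}

population = [random_page() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(population, key=fitness))
```

This also hints at why the million-CPU farm matters: each candidate page has to be served to real visitors to be scored, so the more variants you can evaluate in parallel, the faster the population improves.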

Dealing with the deep learning research paper deluge: Because deep learning is currently suffused with money, interest, and postgraduate students, there’s been a corresponding rise in the number of research papers being published. Andrej Karpathy’s Arxiv Sanity has been a handy tool for navigating this. Now Stephen Merity has released another tool, called Trending Arxiv, that makes it easier to spot papers that are being widely talked about.

Studying deep learning: The Deep Learning textbook, a general primer on deep learning, is now available to purchase in hardcover, if you’re into that sort of thing. Try before you buy by reading the online version for free. Another great online (free) resource is the ‘Neural Networks and Deep Learning’ book from Michael Nielsen.

Imagination, generative models, and platonic forms: One of the truly weird things about young children is that you can show them a stylized picture of something, like a wedge of cheese wearing a dinner jacket, tell them something about it (for instance: this cheese is named Frank and works in insurance), then show them a real version of the object and they’ll figure out what it is. (In this case, the child will examine the lump of cheddar replete with miniature knitted jacket and exclaim ‘that’s Frank, he works in insurance!’) Why is this? Well, the child has developed an idea in their head of what the object is and can then generalize to other versions of it. You may know this from philosophy, where Plato is famous for talking about the ‘platonic forms’, the notion that we carry around ideas in our heads of The Perfect Dog or The Perfect Steak, and then use these rich, perfect representations to help us categorize the imperfect steaks and dogs we find in the world. Clearly, it’d be helpful to develop software that can observe the world and develop similarly rich, internal representations of it. This would make it easier to build, for example, robots that possess a general idea of what a door handle is and can therefore manipulate never-before-seen handles. Generative adversarial networks (GANs) are one promising route to coding this kind of rich representation into computers. So keep that in mind when looking at this work from UC Berkeley and Adobe that lets you generate new shoes and landscapes from simple visual tweaks, or this GAN which is able to generate videos, or this new paper from the Twitter / Magic Pony team that uses GANs to scale up low-resolution images. And there’s new research from NYU / FAIR that may make it easier to train the (notoriously unstable) GANs.
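To make the adversarial idea concrete, here’s a minimal sketch (assuming PyTorch, and not drawn from any of the papers above): a generator learns to mimic a simple one-dimensional ‘world’ (a Gaussian) while a discriminator learns to tell real samples from generated ones.

```python
# Minimal GAN on 1-D data: G maps noise to samples, D scores "realness".
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "world" data: N(4, 1.5^2)
    fake = G(torch.randn(64, 8))            # generator's current guesses

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D into scoring fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G's weights encode a crude internal model of the data:
# samples drawn through it should cluster around 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```

The trained generator is a crude analogue of the child’s internalized ‘Frank’: a compact internal model that can produce endless plausible variants of the thing it has learned about.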

Neural nets grow up: As recently as a year ago companies would view neural networks and other currently in-vogue AI techniques as little more than research projects. Now they’re actively trying to hire people with expertise in these areas for production projects around categorization and reinforcement learning. And the interest doesn’t show any signs of dimming, says Jake Klamka, CEO of Insight Data Science. To get an idea of just how many places people are finding neural nets useful, take a look at this (heroic) round-up of recent research papers by The Next Platform… Weather forecasting! Detection of deep-sea animals! Fault diagnosis in satellites! And much, much more.

What can’t AI do? Lots! The best way to describe current AI is probably the Churchillian phrase ‘the end of the beginning’. We’ve deployed smart software into the world that is capable of doing a few useful things, like saving on power consumption of data centers, performing basic classification of perceptual inputs, and helping to infer some optimal arrangements of various things. But our AI systems can’t really act independently of us in interesting ways, and are frustratingly obtuse in many others. There’s a lot to work on, as replies to this tweet show.