Import AI

Import AI: Issue 8: Starcraft as the new AI battleground, report from Bay Labs’ African expedition, generative models and platonic forms

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

Deep learning goes to Africa, helps some kids: Last week I told you about Bay Labs and some collaborators taking technology to Africa to help identify symptoms of Rheumatic Heart Disease (RHD) in Kenyan school children. The Bay Labs software uses deep learning to analyze data derived from an ultrasound and make an educated guess as to whether it’s seeing something consistent with RHD. During the trip, medical professionals scanned 1,200 children in four days and were able to spot 48 children with RHD or congenital heart disease. Along the way, they had a chance to test out the Bay Labs tech and see whether it worked. “The feedback from our tests was overwhelmingly positive, particularly coming from Kenyans who had never used an ultrasound scanning device before. John, for instance, a clinical officer working for the Eldoret hospital, was able to acquire the right view after a few minutes of using the prototype, and to see the recommendations of the Bay Labs prototype (it was a non-pathological case here). I spent some time interviewing him after the fact and it was hard to contain his enthusiasm. He performed what usually takes a sonographer a few years of training in a few minutes!” Bay Labs’ Johan Mathe tells me. Check out some pictures from the trip here. If you or anyone you know is trying to deploy deep learning into (affordable) healthcare systems to help people, then please let me know.

Intelligent Utilities: Sci-Fi author Stephen Baxter has a pet theory that one of the test-beds for really sophisticated AI systems will be planet-spanning utility systems. The idea is that if you’re tasked with managing power for a sufficiently large system then you’ll need some degree of intelligence to match the inputs with the outputs, distribute load effectively, and even manipulate some of your edge hardware (fields of solar panels, dams, etc.) to modify inputs. So it’s interesting to see this postdoc position at Oxford, which is seeking a researcher to apply machine learning methods to the noisy, local measurements generated by large energy storage systems.

The (synthetic) players of games: Starcraft, a real-time strategy game released in 1998 that is played and watched by tens of thousands of people a month in South Korea, could well be the next ‘grand challenge’ on which companies test their artificial intelligence systems. The game pits players against one another in a battle containing numerous units that spans land and air, full of subterfuge, fast-paced play, and imperfect information, all dependent on an underlying resource-extraction economy which each player must carefully build, tend, and defend. Google DeepMind has dropped numerous hints that Starcraft is a game it’s paying attention to, and last week Facebook AI Research published a paper in which it used neural networks to learn some troop-movement policies within a Starcraft game.

The self-modifying, endlessly mutating e-commerce website: A new product from AI startup Sentient makes it possible for a website to ‘evolve’ over time to achieve higher sales. Sentient Ascend will convert a web page into numerous discrete components, then shuffle through various arrangements of them, breeding and evolving its way to a page that is deemed successful, e.g. one which generates more purchases. This relies on the company’s technology, which pairs a specialism in evolutionary computation with a massive, million-CPU-plus computer farm spread out across the world. No surprise that University of Texas evolutionary algorithm professor Risto Miikkulainen has been working there since mid-2015.
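Sentient hasn’t published Ascend’s internals, but the recipe described above (treat a page as a genome of component variants, score each candidate, and breed the winners) can be sketched roughly as follows. The slot names, variants, and the stand-in conversion_rate fitness function are illustrative assumptions, not Sentient’s API:

```python
import random

# Hypothetical component variants for each page "slot" -- illustrative only.
PAGE_SLOTS = {
    "headline": ["Buy now", "Limited offer", "Free shipping today"],
    "hero_image": ["product.jpg", "lifestyle.jpg", "team.jpg"],
    "cta_color": ["red", "green", "blue"],
}

def random_page():
    """A candidate page is one variant chosen per slot (its 'genome')."""
    return {slot: random.choice(variants) for slot, variants in PAGE_SLOTS.items()}

def conversion_rate(page):
    """Stand-in fitness function: in a real system this would be the
    measured purchase rate from live traffic shown this page."""
    return random.random()  # placeholder

def crossover(a, b):
    """Breed two pages by taking each slot's variant from either parent."""
    return {slot: random.choice([a[slot], b[slot]]) for slot in PAGE_SLOTS}

def mutate(page, rate=0.1):
    """Occasionally swap a slot's variant for a random alternative."""
    for slot, variants in PAGE_SLOTS.items():
        if random.random() < rate:
            page[slot] = random.choice(variants)
    return page

population = [random_page() for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=conversion_rate, reverse=True)
    parents = scored[:10]  # keep the best-performing pages
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(population, key=conversion_rate))
```

In production, the fitness of a candidate would come from showing it to a slice of real visitors and measuring purchases, which is why this approach benefits from lots of traffic and a very large pool of machines.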

Dealing with the deep learning research paper deluge: Because deep learning is currently suffused with money, interest, and postgraduate students, there’s been a corresponding rise in the number of research papers being published. Andrej Karpathy’s Arxiv Sanity has been a handy tool for navigating this. Now Stephen Merity has released another tool, called Trending Arxiv, that makes it easier to spot papers that are being widely talked about.

Studying deep learning: The Deep Learning textbook, a general primer on deep learning, is now available to purchase in hardcover, if you’re into that sort of thing. Try before you buy by reading the online version for free. Another great (and free) online resource is the ‘Neural Networks and Deep Learning’ book from Michael Nielsen.

Imagination, generative models, and platonic forms: One of the truly weird things about young children is that you can show them a stylized picture of something, like a wedge of cheese wearing a dinner jacket, tell them something about it (for instance: this cheese is named Frank and works in insurance), then show them a real version of the object and they’ll figure out what it is. (In this case, the child will examine the lump of cheddar replete with miniature knitted jacket and exclaim ‘that’s Frank, he works in insurance!’.) Why is this? Well, the child has developed an idea in their head of what the object is and can then generalize to other versions of it. You may know this from philosophy, where Plato is famous for talking about the ‘platonic forms’: the notion that we carry around ideas in our heads of The Perfect Dog or The Perfect Steak, and then use these rich, perfect representations to help us categorize the imperfect steaks and dogs we find in the world. Clearly, it’d be helpful to develop software that can observe the world and develop similarly rich, internal representations of it. This would make it easier to build, for example, robots that possess a general idea of what a door handle is and are therefore able to manipulate never-before-seen handles. Generative adversarial networks (GANs) are one promising route to coding this kind of rich representation into computers. So keep that in mind when looking at this work from UC Berkeley and Adobe that lets you generate new shoes and landscapes from simple visual tweaks, or this GAN which is able to generate videos, or this new paper from the Twitter / Magic Pony team that uses GANs to scale up low-resolution images. And there’s new research from NYU / FAIR that may make it easier to train these (notoriously unstable) GANs.
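For a concrete picture of the adversarial setup, here is a minimal, hedged sketch in PyTorch (not the Berkeley/Adobe, video, or Twitter / Magic Pony systems above, just the bare two-player training loop): a generator learns to turn noise into samples from a toy 1-D Gaussian while a discriminator tries to tell its outputs apart from the real thing. All sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

# Generator maps 8-D noise to a single "sample"; discriminator scores real vs fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # toy "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator's attempt

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach())  # samples should drift toward the real distribution
```

In a real image model, both networks would be convolutional and the ‘real’ data would be photos of shoes, landscapes, or video frames, but the push-and-pull objective is the same.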

Neural nets grow up: As recently as a year ago, companies viewed neural networks and other currently in-vogue AI techniques as little more than research projects. Now they’re actively trying to hire people with expertise in these areas for production projects around categorization and reinforcement learning. And the interest doesn’t show any signs of dimming, says Jake Klamka, CEO of Insight Data Science. To get an idea of just how many places people are finding neural nets useful, take a look at this (heroic) round-up of recent research papers by The Next Platform… Weather forecasting! Detection of deep-sea animals! Fault diagnosis in satellites! And much, much more.

What can’t AI do? Lots! The best way to describe current AI is probably the Churchillian phrase ‘the end of the beginning’. We’ve deployed smart software into the world that is capable of doing a few useful things, like saving on power consumption of data centers, performing basic classification of perceptual inputs, and helping to infer some optimal arrangements of various things. But our AI systems can’t really act independently of us in interesting ways, and are frustratingly obtuse in many others. There’s a lot to work on, as replies to this tweet show.

Import AI: Issue 7: Intelligent ultrasound machines, Canadian megabucks, and edible boxing gloves

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

Deep learning + heart doctors in Africa: Good healthcare is punishingly expensive. It relies on vast infrastructure and, in most countries, huge amounts of government support. If you’re unlucky enough to be born in a part of the world with poor healthcare infrastructure, then your life will be shorter and your opportunities will be smaller. So it’s great to see examples of AI helping to reduce the cost of healthcare. This week, deep learning startup Bay Labs is working with the American Society of Echocardiography to help a Kenyan team scan hundreds of school children in Eldoret for signs of Rheumatic Heart Disease – the most common acquired heart disease in children, particularly those in developing countries. The company is using a prototype device that looks like a miniaturized ultrasound machine. It’s got a GPU in it, naturally. The device uses artificial intelligence to spot RHD symptoms. It does this locally, so it doesn’t need to phone home to a cloud system to work. “The probe acquires heart images and we run inference on a whole video clip of a given view or set of views of the heart (basically a sliced view of the moving heart),” says Bay Labs ‘mad scientist’ Johan Mathe. (Note to concerned parents: I’ve met Johan and he appears to be reasonably sane.)

Play it again, HAL: New research from DeepMind shows how to teach computers to generate voices, music, and other raw audio. This brings us closer to a day when our phones can talk to us with intonation and, eventually, sarcasm, like Marvin the Paranoid Android. Check out the synthetic voices on the DeepMind blog and relax to some of the ghostly neural network piano tunes. This technology will also make it easier for people to create synthetic audio clips of known individuals, so propagandists could eventually conjure up an audio clip of Barack Obama calling for universal basic income, or another world leader issuing a declaration of war. The technique’s drawback is that it generates audio one sample at a time, at 16,000 samples per second of output. This means it is – to use a technical term – bloody expensive. Optimization and hardware should change this over time.
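To see where the expense comes from, here’s a hedged sketch (not DeepMind’s actual network; predict_next_sample is a placeholder) of sample-level autoregressive generation: every output sample requires its own forward pass conditioned on the samples before it, and a single second of 16 kHz audio needs 16,000 of them.

```python
import numpy as np

SAMPLE_RATE = 16_000      # samples per second of audio
RECEPTIVE_FIELD = 1_024   # how much past context the model sees (illustrative)

def predict_next_sample(context):
    """Placeholder for a trained network that would output the next
    audio sample given the recent context; here it returns noise."""
    return np.random.uniform(-1.0, 1.0)

def generate(seconds):
    audio = [0.0] * RECEPTIVE_FIELD          # silent seed context
    for _ in range(seconds * SAMPLE_RATE):   # one model call per output sample
        audio.append(predict_next_sample(audio[-RECEPTIVE_FIELD:]))
    return np.array(audio[RECEPTIVE_FIELD:])

clip = generate(seconds=1)   # 16,000 sequential model calls for one second of sound
print(clip.shape)            # (16000,)
```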

Rise of the accelerators: Speaking of hardware… Intel is buying computer vision chip company Movidius, just weeks after snapping up the deep learning experts at Nervana. Intel’s view is that AI will require dedicated processors, probably paired with a traditional (Intel-made) CPU and modifiable FPGAs (from recent Intel acquisition Altera). Nvidia is continuing to design more deep learning-specific chips, adapting its graphical systems for AI tasks. Meanwhile, companies like Google are designing their own systems from the ground up. It’s not clear yet if Intel can win this, but it’s certainly paying to get a seat at the table. The Next Platform has a nice analysis of these trends. Nuit Blanche points out the need for radical new hardware – so, crazy IC geeks, please dive in! One reassuringly crazy idea is optical computing; see the website of startup LightOn.

Montréal Megabucks: The Université de Montréal, Polytechnique Montréal and HEC Montréal have been awarded $93,562,000 Canadian dollars to carry out research into deep learning, machine learning, and operations research. So I think this means UMontreal AI expert Yoshua Bengio can pick up the bill next time he goes out to dinner with his fellow researchers? It’s fantastic to see the Canadian government shovel money into a field it helped start; long may the funding continue.

Is math mandatory?: How much math do you need to know to understand deep learning? There’s some debate. The proliferation of new frameworks makes it relatively easy to get started, but you’ll likely need to understand some of the technical components to diagnose complex bugs and to develop entirely new algorithms. That may require a greater understanding of the math involved. “ML has deep pitfalls, and mitigating them requires a foundational understanding of the mechanisms that make ML work,” writes Anton Troynikov. “Math is a tool, a language of sorts. Having a math background does not magically allow to “understand” anything, and in particular not ML,” writes Francois Chollet. “Math & CS can be used to model chess, but you don’t need to understand this formalism in order to play chess. Not even at the highest level. The same is true of the relationship between math & ML. Doing ML relies on intuitions which come from the practice of ML, not from math.” (Personally, I think learning more math can help you conceptualize aspects of deep learning.)

Neural network diagrams: Here’s a Google primer on some modern aspects of neural network development that pairs accurate, easy-to-grasp descriptions with some very powerful visualizations.

Too good to be true: Recently the AI research community was astir with the great and surprising results contained in a new paper, called Stacked Approximated Regression Machine, that was published on Arxiv. The paper has now been withdrawn. One of the authors says they left out key evidence in the paper. “In the future, I will release a software package for public verification, along with a more detailed technical report,” they write. Good! The best way to attain trust in the AI community is to give people the code to replicate your results.

Oh dear. No, no, no, that’s not right at all, is it? Deep learning perception systems do not work like human perception systems. University of Toronto AI researcher and ‘neural network technician’ Jamie Ryan Kiros has been exploring the faults inherent in one of these software systems and publishing the bloopers on Twitter. Check out Usain Bolt’s secret frisbee habit and the marvels of this edible boxing glove!

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf