Import AI: Issue 7: Intelligent ultrasound machines, Canadian megabucks, and edible boxing gloves
by Jack Clark
Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.
Deep learning + heart doctors in Africa: Good healthcare is punishingly expensive. It relies on vast infrastructure and, in most countries, huge amounts of government support. If you’re unlucky enough to be born in a part of the world with poor healthcare infrastructure then your life will be shorter and your opportunities will be smaller. So it’s great to see examples of AI helping to reduce the cost of healthcare. This week, deep learning startup BayLabs is working with the American Society of Echocardiography to help a Kenyan team scan hundreds of schoolchildren in Eldoret, Kenya, for signs of Rheumatic Heart Disease (RHD) – the most common acquired heart disease in children, particularly those in developing countries. The company is using a prototype device that looks like a miniaturized ultrasound machine. It’s got a GPU in it, naturally. The device uses artificial intelligence to spot RHD symptoms, and it does this locally, so it doesn’t need to phone home to a cloud system to work. “The probe acquires heart images and we run inference on a whole video clip of a given view or set of views of the heart (basically a sliced view of the moving heart),” says BayLabs ‘mad scientist’ Johan Mathe. (Note to concerned parents: I’ve met Johan and he appears to be reasonably sane.)
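To make that quote concrete, here’s a minimal sketch of what clip-level inference can look like: a classifier scores each frame of an ultrasound clip, and the per-frame scores are averaged into a single verdict. Every name here is hypothetical – BayLabs hasn’t published its actual pipeline – and the “classifier” is a placeholder, not a trained network.

```python
# Hypothetical sketch of clip-level inference (all names invented for
# illustration; this is not BayLabs' published code). A per-frame
# classifier scores every frame of an ultrasound clip, and the scores
# are averaged into one clip-level RHD probability.
import numpy as np

def frame_classifier(frame):
    # Stand-in for a trained CNN: maps one 2-D image to a probability.
    return 1.0 / (1.0 + np.exp(-frame.mean()))

def classify_clip(clip):
    # clip: array of shape (num_frames, height, width)
    frame_scores = np.array([frame_classifier(f) for f in clip])
    return frame_scores.mean()  # aggregate frame scores into one verdict

fake_clip = np.random.randn(32, 64, 64)  # 32 frames of 64x64 "ultrasound"
print(f"clip-level score: {classify_clip(fake_clip):.3f}")
```

Running this sort of model on the device itself, rather than in the cloud, matters in places with patchy connectivity – presumably why the GPU lives in the probe.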
Play it again, HAL: New research from DeepMind, called WaveNet, shows how to teach computers to generate raw audio – voices, music, and more. This brings us closer to a day when our phones can talk to us with intonation and, eventually, sarcasm, like Marvin the Paranoid Android. Check out the synthetic voices on the DeepMind blog and relax to some of the ghostly neural network piano tunes. This technology will also make it easier for people to create synthetic audio clips of known individuals, so propagandists could eventually conjure up an audio clip of Barack Obama calling for universal basic income, or another world leader issuing a declaration of war. The technique’s drawback is that it generates audio one sample at a time, processing 16,000 datapoints a second. This means it is – to use a technical term – bloody expensive. Optimization and hardware should change this over time.
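To see why it’s so expensive, here’s a toy sketch of sample-by-sample autoregressive generation. The model below is a made-up placeholder, not DeepMind’s actual WaveNet, but the loop structure is the point: one second of 16kHz audio means 16,000 sequential model calls, each conditioned on everything generated so far.

```python
# Toy illustration of autoregressive audio generation (placeholder model,
# not WaveNet itself). The cost comes from the sequential loop: each new
# sample requires a full forward pass conditioned on all prior samples.
import numpy as np

def predict_next_sample(history):
    # Stand-in for a neural net; here just a noisy AR(1) process.
    return 0.9 * history[-1] + 0.1 * np.random.randn()

def generate(seconds, sample_rate=16_000):
    samples = [0.0]
    for _ in range(int(seconds * sample_rate)):  # one model call per sample
        samples.append(predict_next_sample(samples))
    return np.array(samples[1:])

audio = generate(0.01)  # even 10ms of audio takes 160 sequential calls
print(audio.shape)  # (160,)
```

Because each step depends on the previous one, you can’t parallelize across time at generation, which is why faster hardware and smarter optimization are the obvious routes to making this practical.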
Rise of the accelerators: Speaking of hardware… Intel is buying computer vision chip company Movidius, just weeks after snapping up the deep learning experts at Nervana. Intel’s view is that AI will require dedicated processors, probably paired with a traditional (Intel-made) CPU and modifiable FPGAs (from recent Intel acquisition Altera). Nvidia is continuing to design more deep learning-specific chips, adapting its graphical systems for AI tasks. Meanwhile, companies like Google are designing their own systems from the ground up. It’s not clear yet if Intel can win this, but it’s certainly paying to get a seat at the table. The Next Platform has a nice analysis of these trends. Nuit Blanche points out the need for radical new hardware – so, crazy IC geeks, please dive in! One reassuringly crazy idea is optical computing; see the website of startup LightOn.
Montréal Megabucks: the Université de Montréal, Polytechnique Montréal, and HEC Montréal have been awarded C$93,562,000 to carry out research into deep learning, machine learning, and operations research. I think this means UMontréal AI expert Yoshua Bengio can pick up the bill next time he goes out to dinner with his fellow researchers? It’s fantastic to see the Canadian government shovel money into a field it helped start – long may the funding continue.
Is math mandatory?: How much math do you need to know to understand deep learning? There’s some debate. The proliferation of new frameworks makes it relatively easy to get started, but you’ll likely need to understand some of the technical underpinnings to diagnose complex bugs or to develop entirely new algorithms, and that may require a deeper understanding of the math involved. “ML has deep pitfalls, and mitigating them requires a foundational understanding of the mechanisms that make ML work,” writes Anton Troynikov. “Math is a tool, a language of sorts. Having a math background does not magically allow to ‘understand’ anything, and in particular not ML,” writes Francois Chollet. “Math & CS can be used to model chess, but you don’t need to understand this formalism in order to play chess. Not even at the highest level. The same is true of the relationship between math & ML. Doing ML relies on intuitions which come from the practice of ML, not from math.” (Personally, I think learning more math can help you conceptualize aspects of deep learning.)
Neural network diagrams: Here’s a Google primer on some modern aspects of neural network development that pairs accurate, easy-to-grasp descriptions with some very powerful visualizations.
Too good to be true: Recently the AI research community was astir over the surprising results in a new paper, called Stacked Approximated Regression Machine, that was published on arXiv. The paper has now been withdrawn. One of the authors says they left key evidence out of the paper. “In the future, I will release a software package for public verification, along with a more detailed technical report,” they write. Good! The best way to earn trust in the AI community is to give people the code to replicate your results.
Oh dear. No, no, no, that’s not right at all, is it? Deep learning perception systems do not work like human perception systems. University of Toronto AI researcher and ‘neural network technician’ Jamie Ryan Kiros has been exploring the faults inherent in one of these systems and publishing the bloopers on Twitter. Check out Usain Bolt’s secret frisbee habit and the marvels of this edible boxing glove!
Thanks for reading. If you have suggestions, comments, or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf