Import AI: Issue 4: Medical-grade machine learning in Uganda, free data, and a personal announcement.

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

Intelligence and compression: being able to summarize something is a key trait of intelligence, so new work from Google that shows how to use neural networks for image compression is worth paying attention to. The paper, ‘Full Resolution Image Compression with Recurrent Neural Networks’, outlines ways to compress images using neural networks, and appears to match or outperform existing techniques. The difference is that the neural networks have been taught to compress things through understanding what they are compressing, rather than being programmed with specific knowledge of how compression works.
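
For intuition, here is a toy sketch of the iterative residual coding loop this family of models builds on (Python/PyTorch, with made-up sizes; the paper’s real models use convolutional LSTMs and a trained binarizer, and training is omitted here). Each pass encodes whatever the previous passes failed to reconstruct, so more iterations mean more bits and higher fidelity.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the paper's recurrent encoder/decoder.
encoder = nn.Sequential(nn.Linear(64, 8), nn.Tanh())  # 64-pixel patch -> 8 values
decoder = nn.Linear(8, 64)                            # 8 codes -> 64-pixel patch

def compress(patch, n_iters=4):
    """Encode a patch as a list of binary codes, one per iteration."""
    codes, residual = [], patch
    for _ in range(n_iters):
        code = torch.sign(encoder(residual))   # crude 1-bit quantization (training
        codes.append(code)                     # would need a straight-through trick)
        residual = residual - decoder(code)    # what is still unexplained
    return codes

def decompress(codes):
    return sum(decoder(c) for c in codes)

patch = torch.randn(1, 64)
print((patch - decompress(compress(patch))).abs().mean())  # untrained, so large
```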

Please build this for me: Neural networks can do compression. They can also scale up low-resolution images, given a sufficiently broad dataset. Twitter’s recent acquisition Magic Pony had developed some good technology in this area. Now I find myself wondering why there isn’t a web service that can take my pictures and scale them up for me for a (small) fee. This would be handy for landscape and/or tourist shots, where there’s already a lot of data out there. I suspect Google will eventually add this feature to Google Photos. [Post-publication edit: It turns out this does already exist – https://www.isize.co/.]

Free data! Data is the crude oil of AI. Just as the world relies on oil, the development of AI relies on access to data. If you don’t have data, you don’t have the raw material needed to develop, experiment on, and enhance AI systems. So Kaggle should be congratulated for creating ‘Kaggle Datasets’, which hosts (free!) data and lets people upload their own.

Free tools! Facebook has released fastText, open source software for text classification and word representation, making it easier for people to build software that analyzes the sentiment of a piece of text, or figures out how a previously unseen word relates to known words. It’s hard to think of another industry that makes so many of its tools available so freely.
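
For flavor, here is what supervised classification looks like through fastText’s Python bindings (the initial release is a command-line tool; the pip package arrived later). The training file and labels below are hypothetical; fastText expects one example per line, prefixed with __label__.

```python
import fasttext

# 'reviews.train' (hypothetical file) holds lines like:
#   __label__positive I loved this film
#   __label__negative what a waste of two hours
model = fasttext.train_supervised(input='reviews.train')

# Predict a label (and its probability) for a new sentence.
labels, probs = model.predict("a charming, watchable mess")
print(labels, probs)

# Because fastText builds word vectors from character n-grams,
# even a previously unseen word gets a sensible vector:
vec = model.get_word_vector("watchableish")
```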

Better tools: plumbing is important. You can think of a neural network as an intricate stack of many layers, each containing many vessels, connected to one another by pipes. Liquid flows from the bottom layer of the system to the top, then washes back down again, altering numbers associated with each vessel along the way. Once it reaches the bottom, the process starts all over again. What if you didn’t need to wait for it to wash back down? That’s the essence of a new paper from Google DeepMind called ‘Decoupled Neural Interfaces using Synthetic Gradients’. It outlines a way to train very large neural networks more rapidly by unhooking some of the computations from one another. This is useful because it lets you do more experiments in the same amount of time, speeding progress. Some wonder if it will mostly work at Google scale (translation: big. Like, Salvador Dali mind-bendingly weird big) and will not be so useful for smaller systems. “The obviousness of it makes me think it is something others who have worked long and hard in the field have thought of but never had the resources to execute. But then, some things are obvious only in retrospect,” writes developer Delip Rao.
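
Here is a stripped-down sketch of the idea on a toy problem (illustrative sizes and learning rates, not DeepMind’s implementation): a small side network guesses a layer’s gradient from its activations alone, so that layer can update without waiting for anything to wash back down.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

layer1, layer2 = nn.Linear(10, 32), nn.Linear(32, 1)
sg_model = nn.Linear(32, 32)          # predicts dLoss/dh from h alone
opt1 = torch.optim.SGD(layer1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(layer2.parameters(), lr=0.01)
opt_sg = torch.optim.SGD(sg_model.parameters(), lr=0.001)

for step in range(2000):
    x = torch.randn(64, 10)
    y = x.sum(dim=1, keepdim=True)     # toy regression target

    # 1) Update layer1 immediately, using the *predicted* gradient.
    h = torch.relu(layer1(x))
    opt1.zero_grad()
    h.backward(sg_model(h.detach()).detach())
    opt1.step()

    # 2) Downstream, compute the real loss and the true gradient at h.
    h2 = h.detach().requires_grad_(True)
    loss = F.mse_loss(layer2(h2), y)
    opt2.zero_grad(); loss.backward(); opt2.step()

    # 3) Train the synthetic-gradient model to match the true gradient.
    sg_loss = F.mse_loss(sg_model(h2.detach()), h2.grad.detach())
    opt_sg.zero_grad(); sg_loss.backward(); opt_sg.step()
```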

Don’t believe what you see: new computer vision techniques are making it much, much easier for people to manipulate images and videos. This will lead to new forms of propaganda, art and, of course, memes. Earlier this year researchers demonstrated ‘Face2Face’, which lets you manipulate the faces of people by mapping your expressions onto theirs, so you can literally put words into someone else’s mouth. Now, a video from an MIT PhD student named Abe Davis shows us ‘Interactive Dynamic Video’, technology to manipulate and animate objects from video. All you need is a slight vibration. The system works by looking at how vibrations propagate through an object and then using that to figure out its underlying structure. This could give people a way to analyze the structural integrity of bridges by filming them in a stiff breeze, or make it easy to add CGI effects to films by making certain objects interact with each other.

An all-seeing, all-thinking, globe-spanning eye: satellite company Planet Labs has partnered with the computer vision experts at Orbital Insight to go after customers in the financial sector, pairing a (growing) fleet of around 60 satellites with a team teaching computers how to read the (literal) tea leaves. That will make it easier for investors to ask questions like ‘what are the shipping trends, based on the traffic at Major Hub X’, or ‘how will the biofuel market respond to crop fluctuations in Country Y’. As usage of this type of technology grows, the price should come down, letting people like you and me analyze our own local environment. I’m particularly interested in exploring how the colors of the vegetation on the East Bay hills respond to seasonal temperature and rainfall fluctuations.
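
As a taste of the sort of DIY analysis cheaper imagery could enable, here is the classic normalized difference vegetation index (NDVI), computable from the red and near-infrared bands most imaging satellites record; tracked across a year of scenes, it would chart exactly that seasonal green-up and brown-down. (A sketch with stand-in data, not Orbital Insight’s methods.)

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from two spectral bands."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

# Stand-ins for real satellite bands over one patch of hillside.
red = np.random.rand(100, 100)
nir = np.random.rand(100, 100)
print(ndvi(red, nir).mean())  # repeat per scene to build a seasonal curve
```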

The doctor will (really) see you now: AI is going to make healthcare much, much better. The same computer vision algorithms that companies like Google have been busily refining for years to correctly classify (and serve ads against) images are also perfectly suited to analyzing and labeling medical imagery. Stanford researchers have trained an AI system to figure out whether tissue samples are cancerous or not. Their system learns to categorize nearly 10,000 individual traits, compared to the several hundred a (human) pathologist might use. “These characteristics included not just cell size and shape, but also the shape and texture of the cells’ nuclei and the spatial relations among neighbouring tumor cells”. Meanwhile, Ugandan researchers have used similar techniques to attain good performance at spotting intestinal parasite eggs in stool samples, diagnosing malaria in thick blood smears, and detecting tuberculosis in sputum samples. “The fact that in our experiments the same network architecture successfully identifies objects in three different types of sample further indicates its flexibility; better results still are likely given task-specific tuning of model parameters with cross validation for each case,” they write.
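
The recipe underneath is now fairly standard and looks something like this sketch (the architecture and class names are illustrative, not either paper’s actual model): take a convolutional network pre-trained on everyday photos and fine-tune a new final layer to emit medical labels.

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# Start from a network pre-trained on everyday photos, and swap the
# final layer for a two-way medical decision (e.g. 'abnormal' vs 'clear').
net = models.resnet18(pretrained=True)
net.fc = nn.Linear(net.fc.in_features, 2)

opt = optim.Adam(net.fc.parameters(), lr=1e-3)  # fine-tune just the new head
loss_fn = nn.CrossEntropyLoss()

# for images, labels in patch_loader:  # hypothetical loader of 224x224 patches
#     opt.zero_grad()
#     loss = loss_fn(net(images), labels)
#     loss.backward()
#     opt.step()
```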

OpenAI imports Jack: I’ve joined OpenAI. I’ll be starting in a few weeks as our ‘Strategy and Communications Director’, which basically entails explaining AI to the world, whether to journalists, researchers, regulators, or other interested parties. Suggestions? Questions? jack@jack-clark.net. I shall continue to write this newsletter in my spare time.

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

Import AI: Issue 3: Synthetic Pokemon, brain-like AI, and the history of Dropout.

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

No-code neural networks: Another year brings new companies trying to let people build neural networks without having to do any programming. This time it is Aetros, which has an online drag-and-drop interface people can use. It’s got a nice Blade Runner-meets-Aliens-meets-Ikea aesthetic. However, if you’re knowledgeable enough to specify the fine-tuned settings and architecture, then you might prefer the precision of writing code to manipulating a GUI.

The secret history of Dropout: Dropout is to neural networks as fat is to cooking; it improves pretty much everything. The technique helps guard against overfitting, which is when your neural network has learned some patterns peculiar to its training data, and hasn’t learned the larger patterns present in previously unseen data. Dropout was invented by a bunch of people at the University of Toronto including Geoff Hinton, who was inspired by the tedium of queuing in a bank. “I went to my bank. The tellers kept changing and I asked one of them why. He said he didn’t know but they got moved around a lot. I figured it must be because it would require cooperation between employees to successfully defraud the bank. This made me realize that randomly removing a different subset of neurons on each example would prevent conspiracies and thus reduce overfitting,” said Hinton, in a Google Brain AMA on Reddit.
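
The whole trick fits in a few lines. A minimal sketch of the common ‘inverted’ variant: during training, each unit is silenced independently with probability p, so no fixed conspiracy of units can form; at test time everyone works.

```python
import numpy as np

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: silence each unit independently with probability p."""
    if not training:
        return activations                      # every 'teller' works at test time
    mask = np.random.rand(*activations.shape) > p
    return activations * mask / (1.0 - p)       # rescale to keep expectations equal

h = np.random.rand(4, 8)  # one layer's activations for a batch of 4
print(dropout(h))
```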

Regulate this: The legal profession may want to regulate companies that build its AI systems, says Wendy Wen Yun Chang, a member of the American Bar Association’s Standing Committee on Ethics and Professional Responsibility. “Lawyers must understand the technology that they are using to assure themselves they are doing so in a way that complies with their ethical obligations,” she writes. “The industry is moving along without us. Very quickly. We must act, or we will be left behind.” Some of the issues she talks about could be solved by making AI software more interpretable, as a lot of her concerns stem from the black box nature of most AI software.

OK computer, tell me why you did that? One of the perpetual concerns people have about AI systems is their inscrutability. It’s hard to figure out exactly why a neural network has classified such-and-such a thing in such-and-such a way. But is this that big of a deal? “I think interpretability is important, but I don’t think it should slow down the adoption of machine learning. People are not very interpretable either, because we don’t really know what our brains are doing. There is a lot of evidence in the psychology literature that the reasons we give for why we decided to do things are not the real reason,” says OpenAI’s Ian Goodfellow in a Quora AMA. Some European data protection regulations already look to be on a collision course with oblique AI.

Synthetic Pokemon: In the past couple of years we’ve worked out how to get AI tools to generate synthetic images. Researchers have since published papers and released open source code. Now people are using these techniques to generate new Pokemon. That’s a step up from some of the humble beginnings of this approach, like the creation of imaginary toilets in imaginary fields.

Care for some Keras? François Chollet, the creator of the Keras deep learning framework, is doing a Quora AMA on Monday. Keras makes it trivial to design neural networks and is relatively easy to pick up compared to other frameworks. Self-driving startup Comma.ai – as mentioned in last week’s newsletter – is built partially on Keras.
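
The brevity is the point: a small classifier in Keras takes only a few lines (sizes here are arbitrary).

```python
from keras.models import Sequential
from keras.layers import Dense

# A tiny classifier: 100 input features -> 10 classes.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)  # given (n, 100) inputs, (n, 10) one-hot labels
```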

Brainy, gloopy categorization: modern neural networks bear as much resemblance to the neurons in our heads as wind-up dolls do to living things. So it’s worth keeping an eye on other techniques that draw a bit more from biology. Researchers at the University of Pennsylvania and Michigan State University recently published a paper on ‘evolution of active categorical image classification via saccadic eye movement’. This system is able to scan a small part of an image and correctly guess what it is about three quarters of the time, when run on the standard (and basic) MNIST handwritten digit dataset. It can do this without having to look at all of the image: instead, it starts in a random position, expands until it finds something that looks like what it’s seeking, then scans the rest of the nearby pixels from there. This is promising because of the greater efficiency, but its performance is nowhere near state of the art. It’s similar to the ‘attention’ techniques used in neural networks, though the implementation is different. Keep your eyes peeled for more links between biology and AI.
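
Here is a toy rendering of just the scanning behavior described above (not the paper’s evolved controller): drop a small window at a random spot, grow it until it covers some ‘ink’, then classify from that local view alone.

```python
import numpy as np

def find_glimpse(img, thresh=0.1):
    """Random start; grow the window until it covers something stroke-like."""
    h, w = img.shape
    y, x = np.random.randint(h), np.random.randint(w)
    size = 3
    while size < max(h, w):
        patch = img[max(0, y - size):min(h, y + size),
                    max(0, x - size):min(w, x + size)]
        if patch.mean() > thresh:   # found some 'ink'
            return patch            # classify from this local view only
        size += 2                   # saccade outward
    return img                      # fall back to the whole image

digit = np.zeros((28, 28)); digit[8:20, 12:16] = 1.0  # a crude '1'
print(find_glimpse(digit).shape)
```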

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf

Import AI: Issue 1: GANs, ML bias, and a neural net Benjamin Franklin.

Welcome to Import AI, a newsletter about artificial intelligence. Subscribe here.

Adversarial training / generative adversarial networks: “the most interesting idea in the last 10 years in ML” says Yann LeCun, a jazz aficionado who has a day job as Facebook’s Director of AI Research. One problem with GANs is that they are quite unstable, and choosing the right settings is currently mostly an act of intuition, kind of like convolutional networks were a decade ago. Onward!
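
The setup itself is compact, and the instability lives in a handful of lines like these (a toy sketch on 1-D data; the sizes and learning rates are exactly the ‘settings’ that currently demand intuition). The generator learns to fool the discriminator; the discriminator learns not to be fooled.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # 'real' data: samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator maps noise to samples

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward(); opt_d.step()

    # Generator: push D(fake) toward 1, i.e. fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward(); opt_g.step()
```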

“It’s not my fault my data contains bias” is the new “the dog ate my homework”. ‘Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings’ is research from Boston University & Microsoft Research.

“Machine learning is not, by default, fair or just in any meaningful way,” says Stephen Merity.

Now that we know Google has used reinforcement learning to reduce the power consumption of its data centers, it’s reasonable to wonder how else RL can and will be applied. Answers so far include robotics, wildfire suppression, healthcare, and more. RL will eventually be used to simulate (and run) complex multi-agent environments, like the power grid. ‘Electric Power Market Modeling With Multi-Agent Reinforcement Learning’ gives some good clues.

Layer normalization: a new technique that substantially reduces the training time of recurrent nets. On one question-answering task it “trains faster but converges to a better validation result”. Which sounds suspiciously like having your cake and eating it too. From the University of Toronto, including Google/UofT’s Geoff Hinton. Try it for yourself via this TensorFlow implementation.
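
The technique itself is simple: normalize each example across its own features rather than across the batch, then rescale with a learned gain and bias, which is why it behaves the same at any batch size and at every timestep of a recurrent net. A minimal sketch:

```python
import numpy as np

def layer_norm(x, gain, bias, eps=1e-5):
    """Normalize each example over its own features, then rescale."""
    mu = x.mean(axis=-1, keepdims=True)      # per-example mean over features
    sigma = x.std(axis=-1, keepdims=True)    # per-example std over features
    return gain * (x - mu) / (sigma + eps) + bias

x = np.random.randn(2, 16)                   # (batch, features)
print(layer_norm(x, np.ones(16), np.zeros(16)))
```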

The Machine Intelligence Research Institute has accomplished a lot in the last year as it grapples with the paradoxes of controlling superintelligence. Its greatest achievement, though, is the invention of the term “Vingean Reflection”.

Riddle me this: “Joan made sure to thank Susan for all the help she had given. Who had given the help? Answer 0: Joan or Answer 1: Susan”. You probably got this right. A computer would probably get this wrong, according to the latest results of the Winograd Schema Challenge.

You should follow @smilevector on Twitter. It’s an experiment from Tom White that uses modern AI techniques to manipulate faces. It certainly cheered up Benjamin Franklin! (Though I’m less sure about Obama.)

Thanks for reading. If you have suggestions, comments or other thoughts you can reach me at jack@jack-clark.net or tweet at me @jackclarksf