Do Androids go to the karaoke bar? Computers are getting much better at generating music and melodies thanks to the use of neural networks. New research from the University of Toronto, ‘Song From PI: A Musically Plausible Network for Pop Music Generation’, sees AI create some quite convincing songs. Listen to some samples here; they sound like the Muzak a sentient elevator might hum to itself. It uses a hierarchical recurrent neural network to develop songs with long-range structure; capturing the sorts of melodies and key changes that exist over multiple bars has been challenging for AI in the past, so this work is a step in the right direction. The authors also pair the tech with text/image synthesis AI to give the robots lyrics formed of the descriptions of images. “We were barely able to catch the breeze at the beach and it felt as if someone stepped out of my mind,” the robots sing. Just wait till we combine the synth-voice tech with WaveNet!
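The hierarchical idea can be caricatured in a few lines. In this toy sketch, simple random choices stand in for the paper's actual RNN layers: a slow top level picks one chord per bar, and a fast bottom level picks notes within each bar conditioned on that chord, giving structure at two time scales. The chord tables are invented for illustration.

```python
import numpy as np

# Toy two-level sketch (random choices standing in for the paper's
# hierarchical RNN layers): the top level changes slowly, once per bar,
# while the bottom level changes quickly, once per note.
rng = np.random.default_rng(3)
chords = ["C", "F", "G", "Am"]
notes_for = {"C": [60, 64, 67], "F": [65, 69, 72],
             "G": [67, 71, 74], "Am": [57, 60, 64]}  # MIDI note numbers

song = []
chord = "C"
for bar in range(4):                  # top level: one chord per bar
    chord = rng.choice(chords) if bar else chord
    for step in range(4):             # bottom level: 4 notes per bar
        song.append(int(rng.choice(notes_for[chord])))

print("MIDI notes:", song)
```

Because the chord only changes at bar boundaries, notes within a bar are coherent with each other, which is the long-range-structure property the hierarchy buys you.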
Automatic compression: the field of data compression is beginning to be revolutionized by AI, as new techniques like recurrent neural networks and autoencoders make it possible to train systems to compress and decompress images. New research (PDF) by Twitter Cortex takes a step in this direction, though there isn’t yet truly convincing performance relative to traditional compression methods.
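To see the core idea of autoencoder-style compression in miniature: a linear autoencoder's optimal solution coincides with PCA, so a truncated SVD can stand in for a trained encoder/decoder pair. This is an illustration of the bottleneck principle only, not the Twitter Cortex method, and the "image" here is random data.

```python
import numpy as np

# Minimal sketch of the autoencoder idea behind learned compression:
# squeeze the data through a small "code", then reconstruct it.
# A truncated SVD plays the role of a trained linear encoder/decoder.
rng = np.random.default_rng(0)
image = rng.random((64, 64))   # stand-in for a grayscale image

k = 8                          # size of the bottleneck (the "code")
U, s, Vt = np.linalg.svd(image, full_matrices=False)
code = U[:, :k] * s[:k]        # "encode": store 64x8 + 8x64 numbers
recon = code @ Vt[:k, :]       # "decode": reconstruct the full 64x64

err = np.linalg.norm(image - recon) / np.linalg.norm(image)
ratio = image.size / (code.size + Vt[:k].size)
print(f"compression ratio ~{ratio:.1f}x, relative error {err:.3f}")
```

Learned compressors replace the linear SVD with a nonlinear neural network, which is what lets them beat PCA on real images.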
People will wear masks when they speak of revolutionary things: spare a thought for spies, who will no longer be able to meet in a loud bar to whisper clandestine truths to each other across a table. New research by the University of Oxford and DeepMind has created AI software (called LipNet (PDF)) that can learn to read people’s lips with an accuracy of around 93%, outperforming human experts. Important caveat: the training data consists of simple sentences with a limited vocabulary, such as ‘place blue in m 1 soon’, so it doesn’t work on real-world data yet. Check out this video to get an idea of how it works. “It’s a limited vocabulary & grammar dataset developed by colleagues at @shefcompsci, I know cos I’m in it,” writes Sheffield/Amazon’s Neil Lawrence. “So while the model may be able to read my lips better than a human, it can only do so when I say a meaningless list of words from a highly constrained vocabulary in a specific order.” (The technical term for this is ‘gibberish’.) To take this sort of technique to the real world researchers will need to do three things: gather a vast amount of real-world video, develop software able to read lips from multiple angles and in varying lighting conditions, and build a language model sophisticated enough to guess at the kinds of phrases people are using. The technology has such obvious utility, though, that its arrival seems inevitable. At some point in the future if you want to tell people secrets you’ll have to wear a face-mask, not because you’re afraid of pollution, but because you fear the decoding ability of random CCTV cameras and airborne spy drones.
AI will deliver on the tech boom’s promise: “Economists have long wondered why the so-called computing revolution has failed to deliver productivity gains. Machine intelligence will finally realize computing’s promise,” argue James Cham and Shivon Zilis of Bloomberg Beta. That’s because ML will augment any business that involves a) computation and b) the gathering of data. Any successful business has these traits to one extent or another, and some of the largest and most significant enterprises have structured their companies to optimize for a) and b).
Battle of the world-changing technologies: AI VS CRISPR: It’s Election Week in America, so in that spirit I decided to conduct a terrifically biased and flawed poll. I asked people to tell me which of AI, CRISPR (a powerful gene-editing technology), or “something else” would have the biggest effect on the world over the next 10 years. More than 50 percent of the 700 votes (of my heavily AI-biased followers) went to AI, with the rest split between CRISPR and something else. A variant of my poll which added the option of ‘climate change’ saw votes split more evenly between AI and Climate Change, followed by CRISPR. We’re lucky to live in a time where we have not one but two totally transformative technologies developing at a rapid pace. It seems likely that CRISPR and AI will overlap over the coming years, as scientists use AI to better analyze the results from CRISPR experiments, and CRISPR lets us enhance our understanding of genes to the point where we can start turning them into organic computing machinery. The convergence of these fields will pave the way for things like ‘Turing biocircuits – programmable biological network pathways for partitioning and cycling energy and matter’. Turing biocircuits! What do you think?
The future is… militarized AI: the US military sees the deployment of AI & robotics as being as strategically important as its earlier development of nuclear weapons and precision munitions, according to the Financial Times. (I’ve also heard from people in the defense universe that cognitive enhancements – whether that be drugs, or some kind of neural lace – are being viewed as key technologies for a future strategic advantage.)
Computers open their eyes: Advances in AI mean ‘images will be as transparent to computers as text is today’, says Benedict Evans of A16Z. What does this new world look like? Work by Mario Klingemann for the Google Arts Project gives us a peek. The X Degrees of Separation project uses software to build a visual bridge between radically different art objects, selecting a set of several pieces of art that help you go from the baleful geometry of an Aztec death mask to the sweeping, fluid curves of the Venus de Milo, or from an old painting of a woman to an ancient sculpture. The technical term for what’s going on here is ‘interpolation’ – the AI has learned to look at visual traits of an object and navigate between them. Expect more work like this, especially as research into a field known as generative models lets us navigate more of the mysterious landscape that unites all aesthetic objects.
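Here's a toy sketch of what that interpolation means in practice: embed each artwork as a feature vector (random vectors stand in for learned embeddings here, with no real artworks involved), then walk in small steps between two endpoints, picking the nearest item in the collection at each step.

```python
import numpy as np

# Toy "visual bridge": walk the straight line between two embeddings
# and snap each waypoint to the nearest real item in the collection.
rng = np.random.default_rng(1)
collection = rng.random((500, 128))    # 500 artworks, 128-dim embeddings
a, b = collection[0], collection[499]  # start and end pieces

path = []
for t in np.linspace(0.0, 1.0, 7):
    point = (1 - t) * a + t * b        # linear interpolation in embedding space
    nearest = np.argmin(np.linalg.norm(collection - point, axis=1))
    if not path or path[-1] != nearest:
        path.append(int(nearest))

print("visual bridge (indices):", path)
```

The real project presumably uses embeddings from a trained vision network rather than random vectors, so "nearest" corresponds to visual similarity rather than chance.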
AI develops a conscience: the development of AI has outpaced the development of ethics relating to AI. That’s created unfortunate situations where algorithms have displayed gross biases like echoing racial stereotypes in text-association programs or displaying rank sexism in language engines. But all is not lost; the community has woken up to these problems and researchers across the industry are devoting time to fixing things. Now they’re getting outside help as well: CMU has been given $10 million by a law firm called K&L Gates to set up a center to study ethics and AI. This center will sit alongside related research efforts at Berkeley, Oxford, Cambridge, Stanford, and others to look at the societal and ethical impact of AI. In related news, the University of Oxford is hiring a researcher to look into the issues of personal safety, ethics, and security with regards to the internet of things, and how they relate to the amalgamation of a vast sea of digital data.
Moore’s Law is Dead, so now the chip-design rebels will stage their coup: When a great whale dies its carcass becomes an oasis in the belly of the ocean; for a while life flourishes amid its bones. The same is true of the end of different eras in technology. Now is the time of the death of Moore’s Law — and this is a good thing. Moore’s Law states that the number of transistors that fit in the same area of a chip doubles roughly every two years. That amazing law has driven much of the tech expansion in recent times, making intractable AI processing problems tractable, and putting supercomputers into the pockets of everyone with a smartphone. Now Moore’s Law is dying as ever-more-finely-detailed chip-making processes run into the uncaring and unyielding laws of physics. This means people are now exploring non-standard architectures for their processors, hoping to eke out performance through specialization rather than by relying on Moore’s Law. As one chip-design student at MIT told me, ‘it’s our time, now!’. That attitude drove Google to develop the Tensor Processing Unit (TPU) chip for its data centers, led Microsoft to add FPGA co-processors to its Azure cloud, and motivated Intel to buy FPGA company Altera and AI-chip company Nervana. Now there’s a new chip startup called Graphcore which decloaked recently with $30 million in funding and plans to spread its Intelligence Processing Unit (IPU) chip and associated software far and wide. “The IPU has been optimized to work efficiently on the extremely complex high-dimensional models that machine intelligence requires. It emphasizes massively parallel, low-precision floating-point compute and provides much higher compute density than other solutions,” Graphcore writes.
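The low-precision trade-off Graphcore alludes to can be illustrated with a small numpy comparison (an illustration of the general principle only; the IPU's actual arithmetic isn't described here): halving the precision halves the memory per value, at a modest cost in accuracy that neural-network workloads typically tolerate.

```python
import numpy as np

# Compare a matrix multiply in 32-bit vs 16-bit floating point:
# half precision stores 2 bytes per value instead of 4.
rng = np.random.default_rng(2)
a32 = rng.random((256, 256), dtype=np.float32)
b32 = rng.random((256, 256), dtype=np.float32)

full = a32 @ b32
half = (a32.astype(np.float16) @ b32.astype(np.float16)).astype(np.float32)

rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"bytes per value: {np.float16().nbytes} vs {np.float32().nbytes}; "
      f"max relative error {rel_err:.4f}")
```

Doubling the number of values that fit in a given amount of memory and bandwidth is exactly the kind of specialization-over-scaling win these new architectures chase.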
The Kurt Vonnegut/Hunter S Thompson tech writer you didn’t know you needed: This is your semi-yearly reminder to read James Mickens, the reassuringly insane computer science researcher and writer. “If a misaligned memory access is like a criminal burning down your house in a fail-stop manner, an impossibly large buffer error is like a criminal who breaks into your house, sprinkles sand atop random bedsheets and toothbrushes, and then waits for you to slowly discover that your world has been tainted by madness,” he says (PDF). Read more of his work here. Thank me later.
Automation requires a new social safety net: OpenAI co-chairman Elon Musk says the rise of automation and AI demands a new social safety net, specifically some kind of Universal Basic Income. There’s a vigorous debate going on among economists at the moment about whether UBI is viable, but one thing everyone agrees on is that advances in AI are forcing a re-evaluation of what it means to work, what it means to be compensated for work, and how the government should react to the ensuing rise in inequality and centralizing of profits among a few operators of smart AI-infused capital. “Globalization may have ravaged blue-collar America, but artificial intelligence could cut through the white-collar professions in much the same way,” writes Jim Yardley in the New York Times.
ICLR-palooza: like many of our peers across the world we submitted a spread of papers to the ICLR conference. You can have a browse of them here. I’ll have a thorough writeup next week but the gist of this batch of research is about transfer learning for robots, more efficient reinforcement learning algorithms and further work on generative models.