Import AI: Issue 20: Technology versus globalization, AI snake oil, and more computationally efficient resnets

by Jack Clark

Outsourcing: UK services and outsourcing omni-borg Capita plans to save 50 million pounds a year by laying off some of its workers and replacing them with “proprietary robotic technology”. This will mean Capita’s remaining human staff can do ten times the work they could pre-robot, making them ten times more efficient, said CEO Andy Parker…
… it’s for reasons like this that people are suspicious of the march of technology and automation. “Every technological revolution mercilessly destroys jobs and livelihoods – and therefore identities – well before the new ones emerge,” Mark Carney, governor of the Bank of England, said in a speech given earlier this week. “This was true of the eclipse of agriculture and cottage industry by the industrial revolution, the displacement of manufacturing by the service economy, and now the hollowing out of many of those middle-class services jobs through machine learning and global sourcing.”…
85 percent of the job losses in American manufacturing can be explained by the rise of technology rather than globalization, according to the Brookings Institution… however, that could soon change as other countries make huge investments into robotics, letting them make goods they can sell at a lower price, hitting American companies with a potent cocktail of globalization & tech. A recent report from Bernstein finds that China spent about $3 billion on robots last year, versus $2 billion in America.  

Diversity improves at NIPS, slightly… female attendance at premier AI conference NIPS was 15% this year, up from 13.7% last year. I’d call that a barely perceptible step in the right direction. Attendance at the WIML workshop, however, more than doubled from 265 participants last year to 570 this year.

Self-driving trucks are a long, long way off… say truck drivers, who think it could be as much as 40 years before self-driving big rigs take away their jobs. That’s based on focus groups conducted by Roy Bahat and The Shift Commission (which OpenAI is participating in). When I speak to self-driving AI experts, the most conservative estimates are that self-driving trucks will be here and doing major stuff in the economy in ~15 years.

Don’t look at the sky, look at the bird!… that’s the gist of research from Google, CMU, Yandex, and the Higher School of Economics. The new technique lets us teach a residual network classifier to perform fewer computations for the same outcome, letting the network expend time processing the parts of the image that matter, such as a bird rather than the sky behind it, or sportspeople on a field versus the grassy pitch they’re playing on. This approach builds in a ‘good enough’ measure so you stop computing a section of a given image once you’re confident that your classifier has a good handle on the feature. What’s the upshot? You can get equivalent accuracy to a full-fat resnet while expending about half the amount of computation. You can read more in the paper, Spatially Adaptive Computation Time for Residual Networks.
… and there may be some indications that these networks are learning to identify the sorts of things that humans find germane as well. “The amount of per-position computation in this model correlates well with the human eye fixation positions, suggesting that this model captures the important parts of the image,” the researchers write.
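For the curious, here’s a minimal sketch (in PyTorch, my choice, not the authors’ code) of the halting idea described above: after each residual block a small head produces a per-position ‘halt’ score, and positions whose cumulative score crosses a threshold stop being updated. The block count, channel sizes, and the 1x1-conv halting head are illustrative assumptions; the real SACT model also adds a ponder cost and weights the block outputs, which this sketch omits.

```python
# Simplified sketch of spatially adaptive computation: positions that the
# model is already confident about stop receiving further residual updates.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))


class SpatiallyAdaptiveStack(nn.Module):
    def __init__(self, channels=32, num_blocks=6, eps=0.01):
        super().__init__()
        self.blocks = nn.ModuleList(ResBlock(channels) for _ in range(num_blocks))
        # 1x1 conv gives a per-position halting probability (an assumption here).
        self.halt = nn.ModuleList(nn.Conv2d(channels, 1, 1) for _ in range(num_blocks))
        self.eps = eps

    def forward(self, x):
        n, _, h, w = x.shape
        cum_halt = torch.zeros(n, 1, h, w, device=x.device)  # cumulative halting score
        active = torch.ones(n, 1, h, w, device=x.device)     # 1 where still computing
        for block, halt_head in zip(self.blocks, self.halt):
            new_x = block(x)
            # Only positions that are still active get updated features.
            x = active * new_x + (1 - active) * x
            p = torch.sigmoid(halt_head(x))
            cum_halt = cum_halt + active * p
            # Freeze a position once its cumulative score passes 1 - eps.
            active = active * (cum_halt < 1 - self.eps).float()
            if active.sum() == 0:  # everything halted early, stop spending compute
                break
        return x, cum_halt


if __name__ == "__main__":
    model = SpatiallyAdaptiveStack()
    img_features = torch.randn(2, 32, 16, 16)
    features, halting_map = model(img_features)
    print(features.shape, halting_map.shape)
```

The halting map returned at the end is the sort of per-position computation signal the researchers compare against human eye fixations.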

AI & radiology – not so fast, says wunderkind radiologist: there’s a lot of evidence that AI and radiology are going to overlap as new deep learning techniques let computers compete with radiologists, providing assistive diagnostic capabilities and perhaps, eventually, replacing them in their jobs. That prompted AI pioneer Geoff Hinton to say in November that: “If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff. People should stop training radiologists now, it’s just completely obvious that in five years deep learning is going to do better than radiologists, it might be ten years”. He’s not alone – startups like Enlitic and established companies like IBM (through its acquisition of Merge Healthcare) are betting that they can use AI to supplement or replace radiologists…
… but it may be harder than it seems, says reader Jeff Chang, MD, co-founder of Doblet, and a former radiologist in the US (the youngest radiologist on record, according to his LinkedIn profile)… When could deep learning approaches replace radiologists? I asked Jeff. “I tend to (very grossly) guesstimate about 15 years till we get to that point,” he said. “Radiology being among one of the most complex forms of pattern recognition done by humans, and very dependent on 3D spatial reconstruction — i.e., by moving through a series of axial, coronal or sagittal images, humans automatically render 2D images into 3D patterns in their minds, and can thus interpret and diagnose anatomically visible abnormalities,” he said. “Most diagnoses in radiology are ridiculously context-dependent”. Thanks for the knowledge, Jeff!
Feedback requested: Anyone care to disagree with his assessment? Email me!

Everything but the kitchen sink… is what Apple is working on with its AI research. The company is exploring generative models, scene understanding, reinforcement learning, transfer learning, distributed training, and more, said its new head of machine learning Ruslan Salakhutdinov during a meeting at NIPS, according to Quartz. Though the company professes to be opening itself up, this was a closed-door meeting.  ¯\_(ツ)_/¯

Tencent plans AI lab… Chinese tech company Tencent is creating an artificial intelligence research lab. “Chinese companies have a really good chance, because a lot of researchers in machine learning have a Chinese background. So from a talent acquisition perspective, we do think there is a good opportunity for these companies to attract that talent,” Tencent VP Xing Yao tells the MIT Technology Review. CMU’s dean of CS, Andrew Moore, said in his recent Senate testimony that the US should pay attention to how many engineers India and China graduate each year.

Help the AI community by adding to this list of datasets: bravo to Oliver Cameron at Udacity for creating this ever-evolving ‘datasets for machine learning’ Google doc. We’re up to 51 neatly described, linked, and assessed examples, but could also do with more, so please feel free to edit it yourself… the document is already full of wonders, such as the ‘militarized interstate disputes’ set, which logs “all instances of when one state threatened, displayed, or used force against another”.

AI boom sighted in NYT article corpus: work by Microsoft and Stanford tracks the public perception of AI over time through the lens of the NYT. They find there has been a boom in articles covering AI from 2009 onwards, and the previous trough in coverage neatly mapped onto the ‘AI winter’ fallow funding period. Conclusions? “Discussion of AI has increased sharply since 2009 and has been consistently more optimistic than pessimistic. However, many specific concerns, such as the fear of loss of control of AI, have been increasing in recent years.” Read more here: “Long-Term Trends in the Public Perception of Artificial Intelligence” (PDF).

The Medium is the Method of Control: What happens when we combine the perceptive capabilities of deep learning with a newly digitized visual world? New means of control. “The fact that digital images are fundamentally machine-readable regardless of a human subject has enormous implications. It allows for the automation of vision on an enormous scale and, along with it, the exercise of power on dramatically larger and smaller scales than have ever been possible,” writes Trevor Paglen in The New Inquiry.

A field guide to spotting AI Snake Oil: have you ever found yourself wandering through a convention center suddenly distracted by the smooth twang of a salesperson, thumbs hooked into their braces, rocking back and forth on their heels exclaiming “why ladies and gentlemen if you but dwell a while with me here I promise to show you AI the likes of which you’ve only dreamed of, the type of AI to make Kurzweil blush, Shannon scream, and Minsky mull!”? (Well, it’s happened to me.) Print out this guide to AI Snake Oil from Dan Simonson and be sure to ask the following questions when evaluating an AI startup: “is there existing training data? And if not, how do you plan on getting it?”, “do you have an evaluation procedure built into the software?”, “does your application require unprecedentedly high performance on specific components?”, “if you’re using pre-packaged AI components, do you have an exact understanding of how they’ll affect your program?”

Deep learning webring:

Oliver Cameron’s Transmission covers some of the research I didn’t.

OpenAI Bits&Pieces:

Govbucks for Basic Research, please: OpenAI co-founder Greg Brockman, along with his peers at other AI institutions and organizations, wants more money for AI research. “Brockman warns that if the government and other nonprofit entities don’t become bigger players in the field of AI, the danger is that the intellectual property, infrastructure, and expertise needed to ‘build powerful systems’ could become sequestered inside just one or a few companies. AI is going to affect the lives of all of us no matter what, he says. ‘So I think it’s important that the people who have a say in how it affects us are representative of us all,’” reports the MIT Technology Review.

AI is a lever nations will use to exert strategic power… is one of the things I argue in this interview with Initialized Capital’s Kim-Mai Cutler.

GANs and RL: generative adversarial networks will start to overlap with the RL community, with early work already linking GANs to imitation learning, inverse RL, and interpreting them as actor-critic problems, according to slides from Ian Goodfellow’s talk at NIPS.
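To make the analogy concrete, here’s a toy sketch (mine, not from the slides; the data, architectures, and hyperparameters are all made-up assumptions) of a standard GAN training loop, with comments flagging the rough actor-critic reading: the generator plays the actor, and the discriminator supplies a learned, critic-like feedback signal.

```python
# Toy 1D-Gaussian GAN, annotated with the loose actor-critic correspondence.
import torch
import torch.nn as nn

real_mean, real_std = 4.0, 1.25
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator ~ "actor"
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator ~ learned, critic-like signal
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_mean + real_std * torch.randn(64, 1)
    fake = G(torch.randn(64, 8))

    # Discriminator update: learn to tell real samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: the discriminator's judgment is the learned feedback
    # the generator tries to push up, much as an actor follows its critic.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(G(torch.randn(1000, 8)).mean()))  # should drift toward ~4.0
```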

Crazy&Weird:

[2021: Yosemite National Park, California. A mother and her two children make their way up switchbacks, ascending from the valley floor to the granite cliffs. Two drones buzz near them, following at a distance.]

The mother stops halfway up the path, before the next turn. “Hush,” she says. The children gabble to each other, but slow their walk. “I say hush!” she says. The kids go quiet. “Jason-” the taller child looks at her. “Can you turn off the drones?”
   “But mom, then we won’t have it!”
   “It’s important. I can’t hear it over their fans.”
   “Hear what?”
   “Just turn them off for a second.”
   Jason sighs, then thumbs at a bracelet on his wrist. The drones lower themselves to land behind the people on the path. Their fans spin down. The mother and her children listen to the faint hum of the waterfall on the other side of the valley, the crackling of the woods, and, close-by, a low, repetitive susurration. The mother holds her arm out to point to a patch of feathers in a tree. As she points, two rheumy yellow eyes swivel into view. The owl lets forth one last, bassy hum then flies away.
   “Okay,” she says, “you can turn them back on.” Jason touches his bracelet and the drones begin to fly again.

The family remembers the owl later that year, at Thanksgiving, when they let grandma and grandpa watch the hiking trip, and have to explain why a section of the walk is missing.