Import AI: #63: Google shrinks language translation code from 500,000 to 500 lines with AI, only 25% of surveyed people believe automation=better jobs

by Jack Clark

Welcome to Import AI, subscribe here.

Keep your (CNN) eyes on the ball:
…Researchers with the University of British Columbia and the National University of Defense Technology in China have built a neural network to accurately pick sports players out of crowded scenes.
…Recognizing sports players – in the case of this research, those playing basketball or soccer – can be difficult because their apparent height varies significantly with the camera angles used in sports broadcasting, and because they frequently play against visually noisy backgrounds composed of large crowds of people. Training a network to distinguish between a sports player and the crowd around them is a challenge.
…The main contribution of this work is a computationally efficient sportsplayer/not-sportsplayer classifier. It works through the use of cascaded convolutional neural networks, in which each network only passes an image patch on for further analysis if it has a high belief that the patch contains target data (in this case, sportsplayer data). They also employ dilation to let inferences derived from image patches scale to full-size images.
Reassuringly lightweight: The resulting system can get roughly equivalent classification results to standard baselines, but with a 100-1000X reduction in memory required to run the network.
…Read more here: Light Cascaded Convolutional Neural Networks for Accurate Player Detection.
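…For flavor, here’s a minimal sketch of the cascade idea (my illustration, not the authors’ code): a cheap first-stage network scores every candidate patch, and only patches scoring above a threshold pay the cost of a larger second-stage network.

```python
# Sketch of a two-stage cascade classifier (illustrative; the stage
# architectures and threshold are invented for the example).
import torch
import torch.nn as nn

class Stage(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

cheap, expensive = Stage(8), Stage(64)  # small filter net, big verifier net

@torch.no_grad()
def classify(patches, threshold=0.5):
    scores = cheap(patches).squeeze(1)   # stage 1: score all patches cheaply
    keep = scores > threshold            # only confident patches survive
    out = torch.zeros_like(scores)
    if keep.any():                       # stage 2: expensive net, few patches
        out[keep] = expensive(patches[keep]).squeeze(1)
    return out

patches = torch.randn(32, 3, 32, 32)     # 32 candidate 32x32 patches
print(classify(patches))
```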

The power of AI, seen via Google translate:
…Google recently transitioned from its original hand-crafted, statistics-based translation system to one based on a large-scale machine learning model implemented in TensorFlow, Google’s open source AI programming framework.
…Lines of code in original Google translation system: ~500,000.
…Lines of code in Google’s new neural machine translation system: 500.
…That’s according to a recent talk from Google’s Jeff Dean, which Paige Bailey attended. Thanks for sharing knowledge, Paige!
…(Though bear in mind, Google has literally billions of lines of code in its supporting infrastructure, which the new slimmed-down system likely relies upon. No free lunch!)
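…To see why so few lines can suffice, here’s a toy encoder-decoder sketch in TensorFlow (purely illustrative – not Google’s production architecture; the vocabulary size and dimensions are invented). The point is that the linguistic knowledge lives in learned weights, so the code only has to describe a generic model.

```python
# Toy sequence-to-sequence translator: an encoder LSTM reads source tokens,
# a decoder LSTM generates target tokens conditioned on the encoder state.
import tensorflow as tf

VOCAB, DIM = 32000, 512  # hypothetical vocabulary size and state width

src = tf.keras.Input(shape=(None,), dtype="int32")  # source token ids
tgt = tf.keras.Input(shape=(None,), dtype="int32")  # shifted target token ids

embed = tf.keras.layers.Embedding(VOCAB, DIM)  # shared embedding (a toy shortcut)
_, h, c = tf.keras.layers.LSTM(DIM, return_state=True)(embed(src))
dec = tf.keras.layers.LSTM(DIM, return_sequences=True)(
    embed(tgt), initial_state=[h, c])
out = tf.keras.layers.Dense(VOCAB, activation="softmax")(dec)

model = tf.keras.Model([src, tgt], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```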

Cool tools: Facebook releases library for recognizing more than 170 languages on less than 1MB of memory:
…Download the open source tool here: Fast and accurate language identification using fastText.
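…A minimal usage sketch via the fastText Python bindings, assuming you’ve downloaded the compressed lid.176.ftz model file (the sub-1MB quantized variant) from the fastText site:

```python
import fasttext

# Load the quantized language-identification model (<1MB on disk).
model = fasttext.load_model("lid.176.ftz")

# Ask for the top 3 candidate languages for a sentence.
labels, probs = model.predict("Das ist ein Beispielsatz.", k=3)
for label, prob in zip(labels, probs):
    # Labels come back as e.g. '__label__de'; strip the prefix.
    print(label.replace("__label__", ""), round(float(prob), 3))
```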

Don’t fear the automated reaper (until it comes for you)…
…The Pew Research Center has surveyed 4,135 US adults to gauge the public’s attitude to technological automation. “Although they expect certain positive outcomes from these developments, their attitudes more frequently reflect worry and concern over the implications of these technologies for society as a whole,” Pew writes.
…58% believe there “should be limits on number of jobs businesses can replace with machines, even if they are better and cheaper than humans”.
…25% believe a heavily automated economy “will create many new, better-paying human jobs”.
…67% believe automation means that the “inequality between rich and poor will be much worse than today”.
…Here’s another reason why concerns about automation may not have percolated up to politicians (who skew older, whiter, and more affluent): the group most likely to report having either lost a job or had pay or hours reduced due to automation is adults aged 18-24 (6% and 11%, respectively). Older people have experienced less automation hardship, according to the survey, which may influence their political dispositions toward automation.
…Read more here: Automation in Everyday Life.

Number of people employed in China to monitor and label internet content: 2 million.
…China is rapidly increasing its employment of digital censors, as the burgeoning nation seeks to better shape online discourse.
…“We had about 30-40 employees two years ago; now we have nearly a thousand reviewing and auditing,” said one Toutiao censor, who, like the other censors Reuters spoke to, asked not to be named due to the sensitivity of the topic, according to the Reuters writeup in the South China Morning Post.
…What interests me is the implication that if you’re employing all of these people to label all of this content, then they’re generating a massive dataset suitable for training machine learning classifiers. Has the first censorship model already been deployed?
…Read more here: ‘It’s seen as a cool place to work’: How China’s Censorship Machine is Becoming a Growth Industry.
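…To make that implication concrete, here’s a purely illustrative sketch (invented data, nobody’s real system) of how human moderation decisions double as supervised training labels:

```python
# Each (text, verdict) pair a human reviewer produces is exactly one
# training example for a text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["a flagged post", "a harmless post", "another harmless post"]
labels = ["remove", "keep", "keep"]  # the reviewers' verdicts

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["a new, unreviewed post"]))
```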

Self-driving cars launch in Californian retirement community:
…Startup Voyage has started to provide a self-driving taxi service to residents of The Villages, a 4,000-person retirement community in San Jose, CA. Its 15 miles of reasonably quiet roads and reasonably predictable weather make it an ideal place to test out and mature the technology.
…Read more here: Voyage’s first self-driving car deployment.

DeepMind speeds up WaveNet 1000X, pours it into Google’s new phone:
…WaveNet is a speech synthesis system developed by DeepMind in recent years. Now the company has done the hard work of taking a research contribution and applying it to a real-world problem – in this case, significantly speeding the system up so it can improve the speech synthesis capabilities of the on-phone Google Assistant.
…Performance improvements:
…WaveNet 2016: Supports waveforms of up to 16,000 samples a second.
…WaveNet 2017: Generates one second of speech in about 50 milliseconds. Supports waveforms of up to 24,000 samples a second.
…Components used: Google’s Cloud TPUs and, probably, a truly vast amount of input speech data used to train the synthesis system.
…Read more here: WaveNet Launches in the Google Assistant.
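…Some back-of-envelope arithmetic on those numbers (mine, not DeepMind’s):

```python
# Implied throughput of the 2017 system: one second of 24,000-sample
# audio generated in roughly 50 milliseconds of compute.
samples_per_audio_second = 24_000
compute_seconds_per_audio_second = 0.05

throughput = samples_per_audio_second / compute_seconds_per_audio_second
print(f"{throughput:,.0f} samples generated per compute-second")  # 480,000
print(f"{1 / compute_seconds_per_audio_second:.0f}x real time")   # 20x
```
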
DeepMind expands to Montreal, hoovers up Canadian talent:
…DeepMind has opened a new office in Montreal in close partnership with McGill University (one of its professors, Doina Precup, will lead the new DeepMind lab). This follows DeepMind opening an office in Edmonton a few months ago. Both offices will focus primarily on reinforcement learning.
…Read more here: Strengthening our commitment to Canadian research.

Humans in the loop – for fun and profit:
…Researchers with the US Army Research Laboratory, Columbia University, and the University of Texas at Austin have extended software called TAMER (2009) – Training an Agent Manually via Evaluative Reinforcement – to work in high-dimensional (aka, interesting) state spaces.
…The work has philosophical similarities with OpenAI/DeepMind research on getting systems to learn from human preferences. Where it differs is in its ability to run in real-time, and in its claimed significant improvements in sample efficiency.
…The system, called Deep TAMER, works by trying to optimize a function around a goal inferred via human feedback. They augmented the original TAMER via the addition of a ‘feedback replay buffer’ for the component that seeks to learn the human’s desired objective. This can be viewed as analogous to the experience replay buffer used in traditional Deep Q-Learning algorithms. The researchers also use an autoencoder to further reduce the sample complexity of the tasks.
…Systems that use Deep TAMER can rapidly attain top scores on the Atari game Bowling, beating traditional RL algorithms like A3C and Double-DQN, as well as implementations of earlier versions of TAMER.
…The future of AI development will see people playing an increasingly large role in the more esoteric aspects of data shaping, with their feedback serving as a powerful aide to algorithms seeking to explore and master complex spaces.
…Read more here: Deep TAMER: Interactive Agent Shaping in High Dimensional Spaces.
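…A minimal sketch of the feedback-replay idea, reconstructed from the description above (not the authors’ code; dimensions and hyperparameters are invented):

```python
# Store (state, action, human_feedback) tuples and repeatedly resample them
# to fit a model of the human's feedback signal -- analogous to the
# experience replay buffer in deep Q-learning.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 128, 6           # hypothetical sizes
feedback_buffer = deque(maxlen=10_000)  # sparse human signals, reused many times

h_hat = nn.Sequential(                  # predicts human feedback for (s, a)
    nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(h_hat.parameters(), lr=1e-3)

def record_feedback(state, action_onehot, score):
    """state: (STATE_DIM,) tensor; action_onehot: (N_ACTIONS,); score: e.g. +1/-1."""
    feedback_buffer.append((state, action_onehot, float(score)))

def replay_update(batch_size=32):
    # Resampling old feedback lets each scarce human signal drive many
    # gradient steps, which is where the sample-efficiency gain comes from.
    if len(feedback_buffer) < batch_size:
        return
    batch = random.sample(feedback_buffer, batch_size)
    s = torch.stack([b[0] for b in batch])
    a = torch.stack([b[1] for b in batch])
    y = torch.tensor([b[2] for b in batch]).unsqueeze(1)
    loss = nn.functional.mse_loss(h_hat(torch.cat([s, a], dim=1)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```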

The CIA gets interested in AI in 137 different ways:
…The CIA currently has 137 pilot projects focused on AI, according to Dawn Meyerriecks, its head of technology development.
…These projects include automatically tagging objects in videos, and predicting future events.
…Read more here in this writeup at Defense One.

What type of machine let the Australian Centre for Robotic Vision win part of the Amazon picking challenge this year?
…Wonder no more! The answers lie within a research paper from a team of Australian researchers that details the hardware design of Cartman, the robot that took first place in the ‘stowing’ component of the Amazon Robotics Challenge, a competition in which tens of international teams tried to teach robots to do pick-and-place work in realistic warehouse settings.
…The Cartman robot cost the team a little over $20,000 AUD in materials. The team now plans to release an open source design of Cartman by ICRA 2018, by which point they expect the robot will cost around $10,000 Australian Dollars (AUD) to build. Cartman works very differently from the more iconic multi-jointed articulated arms people are used to seeing: instead, it consists of a single manipulator that can be moved along the X, Y, and Z axes via a series of drive belts. This design has numerous drawbacks with regard to flexibility, deployability, footprint, and so on, but it has a couple of advantages: it is far cheaper to build than other systems, and it’s significantly simpler to operate and use than standalone arms.
…Read more here: Mechanical Design of a Cartesian Manipulator for Warehouse Pick and Place.
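…A toy illustration (not from the paper) of why the Cartesian design is simpler to drive than an articulated arm – its inverse kinematics are essentially an identity map from world coordinates to per-axis belt positions:

```python
# For a belt-driven gantry, each motor controls exactly one world axis, so
# there is no joint-angle solving, no singularities, no elbow ambiguity.
from dataclasses import dataclass

@dataclass
class CartesianGantry:
    steps_per_metre: float = 5000.0  # hypothetical belt-drive resolution

    def inverse_kinematics(self, x, y, z):
        # Target pose in metres -> motor steps, one axis per coordinate.
        return tuple(round(c * self.steps_per_metre) for c in (x, y, z))

gantry = CartesianGantry()
print(gantry.inverse_kinematics(0.45, 0.30, 0.12))  # (2250, 1500, 600)
```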

Why the brain’s working memory is like a memristor:
…The memristor is a fundamental compute component – able to take the role of both a memory storage system and a computation device within the same fundamental element, while consuming little to no power when not being accessed – and many companies have spent years trying to bring the technology to market. Most have struggled or failed (e.g., HPE) because of production challenges.
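…For reference, here is Chua’s textbook memristor definition (standard material, not from the paper): the memristance M ties charge q to flux linkage φ, and when no current flows the state q – and hence the stored ‘memory’ – persists without power.

```latex
\[
  M(q) = \frac{d\varphi}{dq}, \qquad
  v(t) = M\bigl(q(t)\bigr)\, i(t), \qquad
  \frac{dq}{dt} = i(t)
\]
```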
…Now researchers from a spread of international institutions have found compelling evidence of an analogue to the memristor’s capability in the human brain. They state that the brain’s working memory – the short-term store we use to hold things like telephone numbers or street addresses for brief periods of time – has similar characteristics. “We can sometimes store information in working memory without being conscious of it and without the need for constant brain activity,” the scientists write. “The brain appears to have stored the target location in working memory using parts of the brain near the back of the head that process visual information. Importantly, this … storage did not come with constant brain activity, but seemed to rely on other, ‘activity-silent’ mechanisms that are hidden to standard recording techniques.”
…Remember, what the authors call “activity-silent” systems basically translates to: undetectable via typical recording techniques or systems. The brain is another country, one which we can still barely explore or analyse.
…Read more here: A theory of working memory without consciousness or sustained activity.

Tech Tales:

[2029: International AI-dispute resolution contracting facility, datacenter, Delaware, NJ, USA.]

So here we are again, you say. What’s new?
Nothing much, says another one of the artificial intelligences. Relatively speaking.

With the small talk out of the way you get to the real business of it: lying. Think of it like a poker game, but without cards. The rules are pretty complicated, but they can be reduced to this: a negotiation of values, about whose values are the best and whose are the worst. The shtick is that you play 3,000 or 4,000 of these games and you get pretty good at bluffing and outright lying your way to success, for whatever abstruse deal is being negotiated at the time.
One day the AIs get to play simulated lies at: intra-country IP theft cases.
Another day they play: mineral rights extraction treaty.
The next day it’s: tax repatriation following a country’s specific legal change.

Each of the AIs around the virtual negotiating table is owned by a vastly wealthy international law firm. Each AI has certain elements of its mind which have dealt with all the cases it has ever seen, while most parts of each AI’s mind are vigorously partitioned, with only certain datasets activated in certain cases, as according to the laws and regulations of the geographic location of the litigation at hand.

Sometimes the AIs are replaced. New systems are always being invented. And when that happens a new face appears around the virtual negotiation table:
Hey gang, what’s new? It will say.
And the strange AI faces will look up. Nothing much, they’ll say. Relatively speaking.

Technologies that inspired this story: reinforcement learning, transfer learning, large-scale dialogue systems, encrypted and decentralized AI via OpenMined from Andrew Trask & others.