Import AI Newsletter 38: China’s version of Amazon’s robots, DeepMind’s arm farm, and a new dataset for tracking language AI progress
by Jack Clark
Robots, Robots, and Robots!
…Kiva Systems: Chinese Edition… when Amazon bought Kiva Systems in 2012 the company’s eponymous little orange robots (think of a Roomba that has hung out at the gym for a few years) wowed people with their ability to use swarm intelligence to rapidly and efficiently store, locate, and ferry shelves of goods to and fro in a warehouse.
…now it appears that a local Chinese company has built a similar system. Chinese delivery company STO Express has released a video showing robots from Hikvision swiveling, shimmying, and generally to- and fro-ing to increase the efficiency of a large goods-shipping warehouse. The machines can sort 200,000 packages a day and are smart enough to know when to head to charging stations to top up their batteries. Hikvision first announced the robots in 2016, according to this press release (Chinese). Bonus: mysterious hatches in the warehouse floor!
…An STO Express spokesman told the South China Morning Post on Monday that the robots had helped the company save half the costs it typically incurred using human workers, improved efficiency by around 30 per cent, and maximized sorting accuracy. “We use these robots in two of our centers in Hangzhou right now,” the spokesman said. “We want to start using these across the country, especially in our bigger centers.”
…Amazon has continued to invest in AI and automation since the Kiva acquisition. In the company’s latest annual letter to shareholders CEO Jeff Bezos explains how AI ate Amazon: “Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more. Though less visible, much of the impact of machine learning will be of this type – quietly but meaningfully improving core operations,” writes Bezos.
Research into reinforcement learning, generative models, and fleet learning may further revolutionize robotics by making it possible for robots to learn to rapidly identify, grasp, and transfer loosely packed items around warehouses and factories. Add this to the Kiva/Hikvision equation and it’s possible to envisage fully automated, lights-out warehouses and fulfillment centers. Just give me a Hikvision pod with a super-capable arm on top and a giant chunk of processing power and I’m happy.
Industrial robots get one grasp closer: startup RightHand Robotics claims to have solved a couple of thorny problems in robotics, namely grasping and dealing with massive variety.
…the company’s robots uncloaked recently. They are designed to pick loose, mixed items out of bins and place them on conveyor belts or shelves. This is a challenging problem in robotics. So challenging, in fact, that in 2015 Amazon started the ‘Amazon Picking Challenge’, a competition meant to motivate people to come up with technologies that Amazon, presumably, can then buy and use to supplement human labor.
…judging by my unscientific eyeballing, RightHand’s machines use an air-suction device to grab the object, then stabilize their grip with a three-fingered claw. Things I’d like to know: how heavy an object can the sucker carry, and how irregular can an object’s surface be while remaining grippable?
DeepMind reveals its own (simulated) arm farm: last year Google Brain showed off a room containing 14 robot arms, tasked with picking loose items out of bins and learning to open doors. The ‘arm farm’, as some Googlers term it, let the arms learn in parallel, so when each individual arm got better at something that knowledge was transferred to all the others in the room. This kind of fleet-based collective learning is seen by many as a key way of surmounting the difficulties of developing for robotics (reality runs far slower than simulation, and variations between individual physical robots can hurt generalization).
…DeepMind’s approach sees it train robot arms in a simulator to successfully find a Lego Duplo block on a table, pick it up, and stack it on another one. By letting the robots share information with one another, and using that data to adjust the core algorithms used to learn to stack the blocks, the company was able to get training time down to as little as 10 hours of interaction across a fleet of 16 robots. This is approaching the point where it might be feasible for products. (The paper mostly focuses on performance within a simulator, though there are some asides indicating that some tests have shown generalization to the real world.)
…For this experiment, DeepMind built on and extended the Deep Deterministic Policy Gradient (DDPG) algorithm in two ways: 1) it let the learner perform multiple gradient updates from replayed experience during each environment step, letting the robots squeeze more learning out of each piece of data. It called this variant DPG-R. 2) It then took DPG-R and franken-engineered it with some of the distributed ideas from the Asynchronous Advantage Actor-Critic (A3C) algorithm. This let it parallelize the algorithm across multiple computers and simulated robots.
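…For the curious, here’s a minimal sketch of the data-efficiency idea described above: run several replay-based gradient updates per environment step instead of one, and (in the distributed variant) run several such workers in parallel feeding a shared learner. This is an illustration, not DeepMind’s code; the ReplayBuffer, agent, and env interfaces below are assumptions.

```python
# Illustrative sketch only: the "R" in the DPG-R variant described above is
# doing several replayed gradient updates per environment step rather than one.
# The agent/env/ReplayBuffer interfaces here are assumed, not from the paper.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # transition = (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size=64):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def train(env, agent, buffer, total_steps=10000, updates_per_step=4):
    """Vanilla DDPG does one gradient update per environment step; setting
    updates_per_step > 1 trades extra compute for data efficiency."""
    state = env.reset()
    for _ in range(total_steps):
        action = agent.act(state)                 # actor output + exploration noise
        next_state, reward, done = env.step(action)
        buffer.add((state, action, reward, next_state, done))
        for _ in range(updates_per_step):         # the data-efficiency trick
            batch = buffer.sample()
            agent.update_critic(batch)            # TD learning on Q(s, a)
            agent.update_actor(batch)             # deterministic policy gradient
            agent.update_targets()                # soft-update target networks
        state = env.reset() if done else next_state

# The asynchronous, A3C-flavored extension runs many (env, agent) workers like
# this in parallel, all feeding experience to a shared learner.
```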
…For the robot it used a Jaco, a robotic arm developed by Kinova Robotics. The arm has 9 degrees of freedom (6 in the body and 3 in the hand), giving it a brain-meltingly large space of possible movements to compute over before it can do anything remotely useful. This highlights why it’s handy to learn to control the arm with an end-to-end approach.
…Drawbacks: the approach uses some hand-coded information about the state of the environment, such as the position of the Lego block on the table. Ultimately, you want to learn this purely from visual experience. Early results here have about an 80% success rate, relative to around 95% for approaches that use hard-coded state information.
…more information in: Data-efficient Deep Reinforcement Learning for Dexterous Manipulation.
###
ImportAI’s weekly award for Bravely Enabling Novel Intelligence for the Community of Experimenters (BENICE) goes to… Xamarin co-founder Nat Friedman, who has announced a series of unrestricted $5,000 grants for people to work on open source AI projects.
…”I am sure that AI will be the foundation of a million new products, ideas, and companies in the future. From cars to medicine to finance to education, AI will power a huge wave of innovation. And open source AI will lower the price of admission so that anyone can participate (OK, you’ll still have to pay for GPUs),” he writes.
…anyone of any age, from any country, can apply, with no credentials required. The deadline for applications is April 30th 2017. The money “is an unrestricted personal gift. It’s not an equity investment or loan, I won’t own any of your intellectual property, and there’s no contract to sign,” he says.
Double memories: The brain writes new memories to two locations in parallel: the hippocampus and the cortex. This finding, reported in Science, cuts against years of conventional wisdom about the brain. Understanding the interplay between the two memory systems and other parts of the brain may be of interest to AI researchers – the Neural Turing Machine and the Differentiable Neural Computer are based on strongly held beliefs about how we use the hippocampus as a kind of mental scratch pad to help us go about our day, so it’d be interesting to model systems with multiple memory stores interacting in parallel.
Technology versus Labor: Never bring a human hand to a robot fight. The International Monetary Fund finds that labor’s share of the national income declined in 29 out of 50 surveyed countries over the period of 1991 to 2014. The report suggests technology is partially to blame.
AlphaGo heads to China: DeepMind is mounting a kind of AlphaGo exhibition in China in May, during which the company and local Go experts will seek to explore the outer limits of the game. Additionally, there’ll be a 1:1 match between AlphaGo and the world’s top-ranked Go player, Ke Jie.
German cars + Chinese AI: Volkswagen has led a $180 million financing round for Mobvoi, a Chinese AI startup that specializes in speech and language processing. The companies will work together to further develop a smart rear-view mirror. Google invested several million dollars in Mobvoi in 2015.
I heard you like programming neural networks so I put a neural network inside your neural network programming environment: a fun & almost certainly counter-productive doohickey from Pascal van Kooten, Neural Complete, uses a generative seq2seq LSTM neural network to suggest next lines of code you might want to write.
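…To give a flavor of the underlying idea, here’s a minimal sketch (not Neural Complete’s actual code; the training-file name is made up): train a character-level LSTM on your own source files, then sample it to propose a continuation of whatever you’ve just typed.

```python
# Illustrative sketch of next-line suggestion via a character-level LSTM.
# Not Neural Complete's code; "my_model_code.py" is a hypothetical corpus.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

corpus = open("my_model_code.py").read()
chars = sorted(set(corpus))
char_idx = {c: i for i, c in enumerate(chars)}
maxlen = 40  # context window, in characters

# Slice the corpus into (context, next-character) training pairs, one-hot encoded.
X = np.zeros((len(corpus) - maxlen, maxlen, len(chars)), dtype=np.bool_)
y = np.zeros((len(corpus) - maxlen, len(chars)), dtype=np.bool_)
for i in range(len(corpus) - maxlen):
    for t, c in enumerate(corpus[i:i + maxlen]):
        X[i, t, char_idx[c]] = 1
    y[i, char_idx[corpus[i + maxlen]]] = 1

model = Sequential([
    LSTM(128, input_shape=(maxlen, len(chars))),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)

def suggest(prefix, length=80):
    """Greedily extend `prefix` one character at a time (assumes every
    character in the prefix also appeared in the training corpus)."""
    out = prefix
    for _ in range(length):
        x = np.zeros((1, maxlen, len(chars)))
        for t, c in enumerate(out[-maxlen:]):
            x[0, t, char_idx[c]] = 1
        out += chars[int(model.predict(x, verbose=0).argmax())]
    return out
```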
Tracking AI progress… via NLP: Researchers have just launched a new natural language understanding competition, with the results to be featured at EMNLP in September…
… this is a potentially useful development because tracking AI’s progress in the language domain has been difficult. That’s because there are a bunch of different datasets that people evaluate on – e.g., Facebook’s bAbI, the Stanford Sentiment Treebank (see: OpenAI research on that), the Penn Treebank, the One Billion Word Benchmark, and many more that I lack the space to mention. Additionally, language seems to be a more varied problem space than images, so there are more ways to test performance.
… the goal of the new benchmark is to spur progress in natural language processing by giving people a large new dataset for reasoning about sentences. It consists of roughly 430,000 human-labeled sentence pairs, each tagged as neutral, contradiction, or entailment.
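… To make the task concrete, here’s what an entailment-style example looks like, plus a deliberately weak word-overlap baseline that real systems need to beat (the field names and sample sentences below are illustrative, not the competition’s actual schema):

```python
# Illustrative only: a toy natural language inference (NLI) setup.
# Field names and sentences are made up for the example, not from the dataset.
examples = [
    {"premise": "A soccer game with multiple males playing.",
     "hypothesis": "Some men are playing a sport.",
     "label": "entailment"},
    {"premise": "An older man drinks his juice.",
     "hypothesis": "A man is sleeping.",
     "label": "contradiction"},
]

def overlap_baseline(premise, hypothesis, threshold=0.7):
    """Guess 'entailment' when most hypothesis words appear in the premise,
    otherwise 'neutral'; a weak baseline real models should easily beat."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return "entailment" if len(p & h) / len(h) >= threshold else "neutral"

for ex in examples:
    print(ex["label"], "->", overlap_baseline(ex["premise"], ex["hypothesis"]))
```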
…New datasets tend to motivate new solutions to problems – that’s what happened with ImageNet in 2012 (deep learning) and 2015 (ResNets, which proved their merit on ImageNet and were rapidly adopted by researchers), as well as with datasets like MS COCO.
… one researcher, Sam Bowman, said he hopes this dataset and competition could yield: “A better RNN/CNN alternative for sentences”, as well as “New ideas on how to use unlabeled text to train sentence/paragraph representations, rather than just word representations [and] some sense of exactly where ‘AI’ breaks down in typical NLP systems.”
Another (applied) machine learning brick in the Google search wall: Google has recently launched “Similar items” within image search. This product uses machine learning to automatically identify products within images and then separately suggest shopping links for them. “Similar items supports handbags, sunglasses, and shoes and we will cover other apparel and home & garden categories in the next few months,” they say…
…in the same way Facebook is perennially cloning bits of Snapchat to deal with the inner existential turmoil that stems from what we who are mortal call ‘getting old’, Google’s new product is similar to ‘Pinterest Lens’ and Amazon X-Ray.
…separately, Google has created a little anti-angry-email widget, DeepBreath, built on the natural language services available on its cloud platform. The system can be set up with a Google Compute Engine account.
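…The concept is simple enough to sketch in a few lines: score each outgoing draft for negative sentiment and impose a cooling-off period on the angry ones. The scorer below is a crude placeholder rather than Google’s Cloud Natural Language API, and the threshold and word list are invented for the example.

```python
# Sketch of a "take a deep breath" email gate. score_sentiment() is a crude
# stand-in; a real version would call a proper sentiment-analysis service.
import time

ANGER_THRESHOLD = -0.6   # assumed scale: -1.0 (very negative) to 1.0 (very positive)
COOL_OFF_SECONDS = 600   # how long to sit on an angry draft

def score_sentiment(text):
    """Placeholder scorer: count a few hot-button words."""
    angry_words = {"ridiculous", "unacceptable", "incompetent", "furious"}
    hits = sum(word.strip(".,!?").lower() in angry_words for word in text.split())
    return -min(1.0, hits / 3.0)

def maybe_send(draft, send_fn):
    """Delay sending if the draft scores as angry; otherwise send immediately."""
    if score_sentiment(draft) <= ANGER_THRESHOLD:
        print("This reads as angry; holding it for a cooling-off period.")
        time.sleep(COOL_OFF_SECONDS)
    send_fn(draft)
```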
RIP OpenCyc: Another knowledge base bites the dust: data is hard, but maintaining a good store of data can be even more difficult. That’s partly why OpenCyc – an open source variant of the immense structured knowledge base developed by symbolic AI company Cyc – has shut down. “Its distribution was discontinued in early 2017 because such ‘fragmenting’ led to divergence, and led to confusion amongst its users and the technical community generally that that OpenCyc fragment was Cyc. Those wishing access to the latest version of the Cyc technology today should contact info@cyc.com to obtain a research license or a commercial license to Cyc itself,” the company writes. (It remains an open question how well Cyc is doing. Lucid.ai, a company formed to commercialize the technology, appears to have let its website lapse. I haven’t ever been presented with a compelling and technically detailed example of how Cyc has been deployed. My inbox is open!)
OpenAI bits&pieces:
Inventing language: OpenAI’s Igor Mordatch was interviewed by Canadian radio science program The Spark about his recent work on developing AI agents that learned to invent their own language.
Tech Tales:
[2045: A bunker within a military facility somewhere in the American West.]
The scientists call it the Aluminum Nursery, the engineers call it the FrankenFarm, and the military call it a pointless science project and ask for it to be defunded. But everyone privately thinks the same thing: what the robots are doing is fascinating to the point that no one wants to stop them.
It started like this: three years ago the research institute scattered a hundred robots into a buried underground enclosure, a large converted bunker from the Cold War whose ceilings were studded with ultraviolet lights that cycled on and off throughout each artificial “day”. Each day the lights cycled in a specific pattern that could be discerned, given a bit of thought.
To encourage the robots to learn, the scientists gave them one goal in life: to harvest energy from the overhead lights. It only took a few weeks for the robots to crack the first pattern. One robot, operating within its own little computational envelope, was able to figure out the pattern of the lights. When one light turned off, it beamed a message to another robot giving it some coordinates elsewhere in the complex. The robot began to move to that location, and when it arrived the overhead light-cycle ticked over and a light shone down upon it, letting it collect energy.
In this way the robots learned teamwork. Next came specialization: the scientists had built the robots to be modular, with each one able to extend or diminish itself by adding legs, or dexterous manipulators, or additional solar collectors, and so on. After that first success, the robots learned to watch for the pattern in the lights overhead. After a few more successes, one robot decided to specialize: it made a deal with another robot to gain one of that robot’s cognitive cores in exchange for one of its legs. This meant that when it cracked the pattern it was able to tell the other robot, which moved into position, collected energy, and then traded some of it back to the originating robot. In this way, the robots learned to specialize to achieve their goals.
The scientists made the patterns more complex and the robots responded by making some of themselves smarter and others more mobile.
One day, when the scientists checked the facility, they did a scan and found only 99 robots. After they reviewed the footage they saw that in the middle of the artificial night a group of robots had fallen upon a single one that had been patrolling a rarely visited corner of the facility. In the space of a few minutes the other robots cannibalized the robot they’d ambushed, removing all of its limb, gripper, and sensor modules, and all of its cognition other than a single base ID core. The next day, the robots solved the light pattern after a mere three cycles – something close to computationally optimal. Now the scientists have a bet with each other as to how many robots the population will shrink to. “Where is the lower bound?” they ask, choosing to ignore the dead ID core sitting in the enclosure, its standby battery slowly draining away.
I don’t believe that there are only two memory systems in the brain. If the hippocampus were short-term memory, how could one hold a meaningful conversation with patient H.M., who lacked his hippocampus? I once read a paper where they took a bunch of hippocampal CA3 cells and applied a tetanus burst to them. Then they measured the connections between cells … nothing happened. Time passed by: 1 minute, 2 minutes, 5 minutes. Then, suddenly, after 6 minutes, synaptic efficacy jumped up by +30 percent. So the hippocampus seems more like a mid-term memory, and there must be other mechanisms for short-term memory, e.g. loops, fast weights, or shifts of firing thresholds.
As far as RightHand’s vacuum gripper is concerned, why ask us instead of emailing them? They won the 2013 DARPA arm competition under a different name. AFAIK, the sucker gripper is a hybrid: if the suction cup doesn’t work for an object, it can still fall back on a normal grasp, although that will be slower. There was a video where it lifted a weight of 50 lbs.