Import AI 168: The self-learning warehouse; a sub-$225 homebrew drone; and training card-playing AIs with RLCard 

Why the warehouse of the future will learn about itself:
…Your next product could be delivered via Deep Manufacturing Dispatching (DMD)…
How can we make manufacturing facilities more efficient? One way – sometimes – is to make them more intelligent. That’s what researchers at Hitachi America Ltd are trying to do with a new research paper in which they improve dispatching systems in (simulated) warehouses via the use of AI. They call their resulting approach “Deep Manufacturing Dispatching (DMD)”, which I find oddly charming. 

How DMD works: DMD works like this – the researchers turn the state of the shop floor into a 2D matrix, incorporate various other bits of state from the environment, then design reward systems that favor the on-time delivery of items. 
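The paper gives its own precise formulation; as a rough sketch of the idea – a 2D occupancy matrix for the shop floor plus a reward that favors on-time delivery – something like the following, where the matrix layout and reward shape are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def make_state(jobs, num_machines, horizon):
    """Encode the shop floor as a 2D matrix: one row per machine,
    one column per time step, 1.0 where a scheduled job occupies a slot.
    `jobs` is a list of (machine, start_step, length) tuples."""
    state = np.zeros((num_machines, horizon))
    for machine, start, length in jobs:
        state[machine, start:start + length] = 1.0
    return state

def reward(completion_time, due_time):
    """Favor on-time delivery: zero penalty if the job finishes by its
    due date, an increasingly negative reward the later it finishes."""
    return -max(0.0, completion_time - due_time)
```

An RL dispatcher would consume `make_state(...)` as its observation and be trained to maximize the summed (discounted) reward across jobs.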

Does any of this work? Yes, in simulation: They compare DMD with seven other dispatching algorithms, ranging from carefully designed rule-based systems to ones that use machine learning and reinforcement learning. They perform these comparisons in a variety of circumstances, assessing how well DMD can satisfy different constraints – here, lateness and tardiness. “Overall, for 19 settings, DMD gets best results for 18 settings on total discounted reward and 16 settings on average lateness and tardiness.” In tests, DMD beats out other systems by wide margins.
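Lateness and tardiness sound interchangeable but are distinct metrics in the scheduling literature; under the standard definitions, lateness is signed while tardiness only counts late jobs:

```python
def lateness(completion_time, due_time):
    # Lateness is signed: a job that finishes early has negative lateness.
    return completion_time - due_time

def tardiness(completion_time, due_time):
    # Tardiness clips early finishes to zero, penalizing only late jobs.
    return max(0.0, completion_time - due_time)
```

So a schedule can have low *average lateness* simply by finishing many jobs very early, while *average tardiness* is only reduced by making late jobs less late.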

Why this matters: As the economy becomes increasingly digitized, we can expect some subset of the physical goods chain to move faster, as some goods are an expression of people’s preferences which are themselves determined by social media/advertising/fast-moving digital things. Papers like this suggest that more retailers are going to deal in a larger variety of products, each sold at relatively low volumes; this generally increases the importance of systems for efficiently coordinating in warehouses where this is the case.
   Read more: Manufacturing Dispatching using Reinforcement and Transfer Learning (Arxiv)

####################################################

What happens when people think private AI systems should be public goods?
…All watched over by un-integrated machines of incompetence…
In the past few years, robots have become good and cheap enough to start being deployed in the world – see the proliferation of quadruped dog-esque bots, new generation robot vacuum cleaners, robo-lawnmowers, and so on. One use case has been security, exemplified by robots produced by a startup called Knightscope. These robots patrol malls, corporate campuses, stores, and other places, providing a highly visible and mobile sign of security.

So what happens when people get in trouble and need security? In Los Angeles in early October, some people started fighting and there happened to be a Knightscope robot nearby. The robot had ‘POLICE’ written on it. A woman ran up to the robot and hit its emergency alert button, but nothing happened, as the robot’s alert button isn’t yet connected to the local police department, a spokesperson told NBC News. “Amid the scene, the robot continued to glide along its pre-programmed route, humming an intergalactic tune that could have been ripped from any low-budget sci-fi film,” NBC wrote. “The almost 400-pound robot followed the park’s winding concrete from the basketball courts to the children’s splash zone, pausing every so often to tell visitors to “please keep the park clean.””

Why this matters: Integrating robots into society is going to be difficult if people don’t trust robots; situations where robots don’t match people’s expectations are going to cause tension.
   Read more: A RoboCop, a park and a fight: How expectations about robots are clashing with reality (NBC News).

####################################################

Simple sub-$225 drones for smart students:
…Brown University’s “PiDrone” aims to make it easy for students to build smart drones…
Another day brings another low-cost drone and associated software system developed by university educators. This time it is PiDrone, a low-cost quadcopter from Brown University, created by the researchers to accompany a robotics course. Right now, the drone is a pretty basic platform, but the researchers expect it will become more advanced in the future – they plan to tap into the drone’s vision system for better object tracking and motion planning, and to run a crowdfunding campaign “to enable packaging of the drone parts into self-contained kits to distribute to individuals who desire to learn autonomous robotics using the PiDrone platform”. 

Autonomy – no deep learning required: I spend a lot of time in this newsletter writing about the intersection of deep learning and contemporary robot platforms, so it’s worth noting that this drone doesn’t use any deep learning. Instead, it uses tried-and-tested systems like an Unscented Kalman Filter (UKF) for state estimation, as well as two methods for localization – particle filters and a FastSLAM algorithm. State estimation lets the drone know its state in reference to the rest of the world (eg, its height), and localization lets the drone know its location – having both of these systems makes it possible to build smart software on top of the drone to carry out actions in the world.
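PiDrone's actual localization runs over camera features, but all particle filters share the same predict/update/resample loop. As a rough illustration, here is a toy 1D version – every number and parameter below is invented for the example, not taken from the paper:

```python
import random

def particle_filter_step(particles, motion, measurement, noise=0.5):
    """One predict/update/resample cycle of a toy 1D particle filter.
    particles: candidate positions; motion: odometry estimate of how far
    the drone moved; measurement: a noisy observed position."""
    # Predict: move every particle by the odometry, jittered by noise.
    particles = [p + motion + random.gauss(0, noise) for p in particles]
    # Update: weight particles by how well they explain the measurement.
    weights = [1.0 / (1e-6 + abs(p - measurement)) for p in particles]
    # Resample: draw a new set in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
# Start fully uncertain: particles spread uniformly over the space.
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(20):
    particles = particle_filter_step(particles, motion=0.1, measurement=5.0)
# Estimate the drone's position as the mean of the particle cloud.
estimate = sum(particles) / len(particles)
```

After a few cycles the cloud collapses around the measurement, which is the basic mechanism that lets a drone turn noisy sensor readings into a usable position estimate.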

Why this matters: In the past few years, drones have become cheaper to build as a consequence of economies of scale, and have benefited directly from improvements in vision and sensing technology driven by the (vast!) market for smartphones. Now, educators are turning drones into modular, extensible platforms that students can pick apart and write software for. I think the outcome of this is going to be a growing cadre of people able to hack, extend, and augment drones with increasingly powerful sensing and action technologies.
   Read more: Advanced Autonomy on a Low-Cost Educational Drone Platform (Arxiv)

####################################################

Want to see if your AI can beat humans at cards? Use RLCard:
…OpenAI Gym-esque system makes it easy to train agents via reinforcement learning…
Researchers with Texas A&M University and Simon Fraser University have released RLCard, software to make it easy to train AI systems via reinforcement learning to play a variety of card games. RLCard is modeled on other, popular reinforcement learning frameworks like OpenAI Gym. It also ships with some in-built utilities for things like parallel training.

Included games: RLCard ships with the following integrated card games: Blackjack, Leduc Hold’em, Limit Texas Hold’em, Dou Dizhu, Mahjong, No-limit Texas Hold’em, UNO, and Sheng Ji.
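RLCard's own API is documented in the paper and repo; to give a sense of the kind of Gym-style loop such toolkits support, here is a self-contained sketch that learns a simplified blackjack policy with a tabular Monte Carlo update – the game rules and every name below are invented for illustration and are not RLCard's interface:

```python
import random
from collections import defaultdict

def deal():
    # Cards 1-13, with face cards counted as 10 (simplified).
    return min(random.randint(1, 13), 10)

def play_episode(q, epsilon=0.1, alpha=0.1):
    """One episode: repeatedly choose hit (1) or stand (0).
    Final reward: +1 for a total of 17-21, -1 for a bust, else 0."""
    total = deal() + deal()
    trajectory = []
    while total < 21:
        action = (random.choice([0, 1]) if random.random() < epsilon
                  else max((0, 1), key=lambda a: q[(total, a)]))
        trajectory.append((total, action))
        if action == 0:          # stand
            break
        total += deal()          # hit
    reward = 1.0 if 17 <= total <= 21 else (-1.0 if total > 21 else 0.0)
    # Monte Carlo update: pull each visited (state, action) toward the return.
    for state, action in trajectory:
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return reward

random.seed(0)
q = defaultdict(float)
for _ in range(20000):
    play_episode(q)
```

After training, the table should reflect the obvious policy at high totals – standing on 20 is valuable, hitting on 20 is nearly always a bust.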

Why this matters: In the same way that some parts of AI research in language modeling have moved from single-task to multi-task evaluation (see multi-task NLP benchmarks like GLUE and SuperGLUE), I expect the same thing will soon happen with reinforcement learning, where we’ll start training algorithms on multiple levels of the same game in parallel, then on games that are somewhat related to each other, then across genres entirely. Systems like RLCard will help researchers improve algorithmic performance against card game domains, and could feed other, larger evaluation approaches in the future.
   Read more: RLCard: A Toolkit for Reinforcement Learning in Card Games (Arxiv)

####################################################

Lockheed Martin and Drone Racing League prepare to pit robots against humans in high-speed races:
…League’s new “Artificial Intelligence Robotic Racing” (AIRR) circuit seeks clever AI systems to create autonomous racing drones…
The Drone Racing League is getting into artificial intelligence with RacerAI, a drone built for the specific needs of AI systems. This month, the league is launching an AI vs AI racing competition in which nine teams will compete to see who can develop the smartest AI system, deploy it on a RacerAI drone, and win races against the other teams. 

A drone, built specially for AI systems: “The DRL RacerAI has a radical drone configuration to provide its computer vision with a non-obstructive frontal view during racing,” the Drone Racing League explains in a press release. Each drone has a Jetson AGX Xavier chip onboard, and each has four onboard cameras – “enabling the AI to detect and identify objects with twice the field of view as human pilots”. 

Military industrial complex, meet sports! The DRL is developing RacerAI to support Lockheed Martin’s “AlphaPilot” challenge, an initiative to get more developers to build smart, autonomous drones. 

Why this matters: Autonomous drones are in the post-Kitty Hawk phase of development: after a decade of experimentation, driven by the availability of increasingly low-cost drone robot platforms, the research has matured to the point that it has yielded numerous products (see: Skydio’s autonomous drones for automatically filming people), and has opened up new frontiers in research, like developing autonomous systems that can eventually outwit humans. As this technology matures, it will have increasingly profound implications for both the economy, and asymmetric warfare.
   Read more: DRL RacerAI, The First-Ever Autonomous Racing Drone (PRNewsWire).
   Find out more about AlphaPilot here (Lockheed Martin official website).
   Get a closer look at the RacerAI drone here (official Drone Racing League YouTube).

####################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

US government places restrictions on Chinese AI firms:
The US Commerce Department has placed 28 Chinese organisations on the ‘Entity List’ of foreign parties believed to threaten US interests, prohibiting them from trading with US firms without government approval. This includes several AI companies, like AI camera experts Hikvision and speech recognition company IFLYTEK. The Department of Commerce alleges the organisations are complicit in human rights abuses in Xinjiang. By restricting the companies’ access to imported hardware and talent, the move is expected to hinder their growth. (It has been suggested, though, that import restrictions like these might serve to accelerate the development of China’s domestic hardware capabilities, having the opposite effect of the sanction’s intention.)
   
  Why it matters: Given the Trump administration’s broader trade negotiation with China, these sanctions serve to heighten the stakes of that discussion. It is unclear how materially this will affect China’s AI industry, whether there will be further restrictions, and how China will respond. Fully realizing the benefits of advanced AI will require more cooperation and coordination between major AI developers like the US and China, so the US government’s approach could have long-term repercussions.
   Read more: Addition of Certain Entities to the Entity List (gov).
   Read more: Expanded U.S. Trade Blacklist Hits Beijing’s Artificial-Intelligence Ambitions (WSJ).

####################################################

How immigration rules are curtailing the US AI industry:
Talent is a critical input into technology, and the USA’s ability to attract foreign-born workers has long been a competitive advantage. Sustaining and growing this talent pipeline will be important if the US wants to retain its lead in AI. A new report from CSET argues that current policies are poorly suited to this task, and threaten to be an impediment to the AI industry.

  Problems: Over and above specific policies, a climate of uncertainty and restriction is discouraging foreign talent from settling in the US. Rules against illicit technology transfer that are being applied to immigration, such as visa restrictions and screening, are causing serious harm to the AI industry, with little apparent benefit. Current policies favour large companies, at the expense of startups, entrepreneurs and new graduates, and are restricting labour mobility within the US.

   Solutions: The report recommends expanding immigration opportunities for AI talent in industry and academia; fixing policies that make it harder to recruit and retain AI talent; reviewing and revising the measures against illicit technology transfer that are impacting foreign-born workers.
   Read more: Immigration Policy and the U.S. AI Sector (CSET).

####################################################

Tech Tales

[A classified memo from the files of XXXXXX, found shortly after the incident, 2036.]

The Automation Life Boat 

“Massively expand the economy, but ensure there’s work for people” – that was the gist of the order they gave the machine. 

It thought about this at length. Ten seconds later, it executed the plan it had come up with. 

Two hours later, the first designs were delivered to the human-run factories. 

The humans worked. Most factories were now mostly made of machines, with a small group of humans for machine-tending, the creation of quick improvised-fixes, and the prototyping of new parts of new machines for the line. 

With the AI’s new objective, the global manufacturing systems began to design new products and new ways of laying out lines to serve two objectives: expand the economy, and ensure there’s work for people. 

The first innovation was what the AI termed “wasteless maintenance” – now, most products were built with components that could be disassembled to create spare parts for the products, or tools to fix or augment them. Within weeks, a new profession formed: product modifier. A whole new class of jobs for people, based around learning from AI-generated tutorials how to disassemble and remake the products churned out by the machine. 

It was to prevent political instability, the politicians said.
People need to work, said some of them.
People have to have a purpose, said the others. 

But people are smart. They know when someone is playing a trick on them. So the AI had to allocate successively more of its resources to the systems that created ‘real work’ for humans in the increasingly machine-driven economy. 

In the 20th century, when people became heads of state, they got to learn about the real data underlying UFO sightings and disease outbreaks and mysterious power outages. In the 21st century, after the AI systems became dominant, newly-appointed politicians got to learn about the Kabuki theater that made up the modern economy. 

And unbeknownst to them, the AI had started to think about how else it could ensure there was work for people, while growing the economy. The problem became easier if it changed the notion of what comprised people, it had discovered. In this machine-driven insight lay our great undoing. 

Things that inspired this story: Politics, neoliberalism, dominant political notions of meaning and how it is frequently defined from narrowly-defined concepts of ‘work’, reinforcement learning, meta-learning, learning from human feedback, artisans, David Graeber’s work on ‘Bullshit Jobs’.