Import AI 116: Think robots are insecure? Prove it by hacking them; why the UK military loves robots for logistics; Microsoft bids on $10bn US DoD JEDI contract while Google withdraws
‘Are you the government? Want to take advantage of AI in the USA? Here’s how!’ says thinktank:
…R Street recommends politicians focus on talent, data, hardware, and other key areas to ensure America can benefit from advances in AI…
R Street, a Washington-based thinktank whose goal is to “promote free markets and limited, effective government,” has written a paper recommending how the US can take advantage of AI.
Key recommendations: R Street says that the scarce talent market for AI disproportionately benefits deep-pocketed incumbents (such as Google) that can outbid other companies. “If there were appropriate policy levers to increase the supply of skilled technical workers available in the United States, it would disproportionately benefit smaller companies and startups,” they write.
Talent: Boost Immigration: In particular, they highlight immigration as an area where the government may want to consider instituting changes, for instance by creating a new class of technical visa, or expanding H-1Bs.
Talent: Offset Training Costs: Another approach could be to allow employers to deduct the full costs of training staff in AI, which would further incentivize employers to increase the size of the AI workforce.
Data: “We can potentially create high-leverage opportunities for startups to compete against established firms if we can increase the supply of high-quality datasets available to the public,” R Street writes. One way to do this can be to analyze data held by the government with “a general presumption in favor of releasing government data, even if the consumer applications do not appear immediately obvious”.
Figure out (fair use X data X copyright): One of the problems AI is already causing is how it intersects with our existing norms and laws around intellectual property, specifically copyright law. A key question that needs to be resolved is how to assess training data in terms of fair use: AI systems tend to consume vast amounts of data and use that data to create outputs that could, in certain legal lights, be viewed as ‘derivative works’, which would create disincentives for people looking to develop AI.
“Given the existing ambiguity around the issue and the large potential benefits to be reaped, further study and clarification of the legal status of training data in copyright law should be a top priority when considering new ways to boost the prospects of competition and innovation in the AI space,” they write.
Hardware: The US government should be mindful about how the international distribution of semiconductor manufacturing infrastructure could come into conflict with national strategies relating to AI and hardware.
Why it matters: Analyses like this show how traditional policymakers are beginning to think about AI and highlight the numerous changes needed for the US to fully capitalize on its AI ecosystem. At a meta level, the broadening of discourse around AI to include Washington thinktanks seems like a further sign of the ‘industrialization of AI’, in the sense that the technology is now seen as having significant enough economic impacts that policymakers should start to plan for and anticipate the changes it will bring.
Read more: Reducing Entry Barriers in the Development and Application of AI (R Street).
Get the PDF directly here.
Tired: Killer robots.
Wired: Logistics robots for military re-supply!
…UK military gives update on ‘Autonomous Last Mile Resupply’ robot competition…
The UK military is currently experimenting with new ways to deliver supplies to frontline troops – and it’s looking to robots to help it out. To spur research into this area, a group of UK government organizations is hosting the Autonomous Last Mile Resupply (ALMRS) competition.
ALMRS is currently in phase two, in which five consortia led by Animal Dynamics, Barnard Microsystems, Fleetonomy, Horiba Mira, and QinetiQ will build prototypes of their winning designs for testing and evaluation, receiving funding of around £3.8 million over the next few months.
Robots are more than just drones: Some of the robots being developed for ALMRS include autonomous powered paragliders, a vertical take-off and landing (VTOL) drone, autonomous hoverbikes, and various systems for autonomous logistics resupply and maintenance.
Why it matters: Research initiatives like this will rapidly mature applications at the intersection of robotics and AI as a consequence of military organizations creating new markets for new capabilities. Many AI researchers expect that contemporary AI techniques will significantly broaden the capabilities of robotic platforms, but so far hardware development has lagged software. With schemes like ALMRS, hardware may get a boost as well.
Read more: How autonomous delivery drones could revolutionise military logistics (Army Technology news website).
Responsible Computer Science Challenge offers $3.5 million in prizes for Ethics + Computer Science courses:
…How much would you pay for a more responsible future?…
Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies are putting up $3.5 million to try to spur the development of more socially aware computer scientists. The challenge has two stages:
– Stage 1 (grants up to $150,000 per project): “We will seek concepts for deeply integrating ethics into existing undergraduate computer science courses”. Winners announced April 2019.
– Stage 2 (grants up to $200,000): “We will support the spread and scale of the most promising approaches”.
Deadline: Applications will be accepted from now through December 13, 2018.
Why it matters: Computers are general-purpose technologies, so encouraging computer science practitioners to think about the ethical component of their work in a holistic, coupled manner may yield radical new designs for more positive and aware futures.
Read more: Announcing a Competition for Ethics in Computer Science, with up to $3.5 Million in Prizes (Mozilla blog).
Augmenting human game designers with AI helpers:
…Turn-based co-design system lets an agent learn how you like to design levels…
Researchers with the Georgia Institute of Technology have developed a 2D platform game map editor which is augmented with a deep reinforcement learning agent that learns to suggest level alterations based on the actions of the designer.
An endearing, frustrating experience: Like most things involving the day-to-day use of AI, the process can be a bit frustrating: after the level designer tries to create a series of platforms with gaps to open space below, the AI persists in filling these holes in with its suggestions – despite getting a negative RL reward each time. “As you can see this AI loves to fill in gaps, haha,” says researcher Matthew Guzdial at one point.
Creative: But it can also come up with interesting ideas. At one point the AI suggests a pipe flanked at the top on each side by single squares. “I don’t hate this. And it’s interesting because we haven’t seen this before,” he says. At another point he builds a mirror image of what the AI suggests, creating an enclosed area.
Learning with you: The AI learns to transfer some knowledge between levels, as shown in the video. However, I expect it needs greater diversity and potentially larger game spaces to show what it can really do.
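The turn-based loop described above (designer edits, agent suggests, designer keeps or undoes the suggestion) can be sketched as a learner whose only reward signal is designer acceptance. The action names and the simple bandit-style update below are illustrative stand-ins, not the paper's actual deep RL model:

```python
import random
from collections import defaultdict

class SuggestionAgent:
    """Learns which level edits the designer tends to keep.
    (A toy bandit-style stand-in for the paper's deep RL agent.)"""
    def __init__(self, actions, epsilon=0.1, lr=0.5):
        self.q = defaultdict(float)  # estimated value of each edit action
        self.actions = actions
        self.epsilon = epsilon       # exploration rate
        self.lr = lr                 # learning rate

    def suggest(self):
        # Explore occasionally; otherwise propose the highest-valued edit.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[a])

    def feedback(self, action, accepted):
        # Designer acceptance is the reward: +1 if kept, -1 if undone.
        reward = 1.0 if accepted else -1.0
        self.q[action] += self.lr * (reward - self.q[action])

# Hypothetical edit actions; a stand-in designer rejects gap-filling.
agent = SuggestionAgent(["fill_gap", "add_pipe", "add_platform"])
for _ in range(200):
    edit = agent.suggest()
    agent.feedback(edit, accepted=(edit != "fill_gap"))
print(agent.q["fill_gap"])  # negative: the agent learns gaps are unwanted
```

The gap-filling behavior in the video suggests the real agent's prior from its training levels can dominate the designer's feedback for a while, which this toy version sidesteps by learning from acceptance alone.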
Why it matters: AI tools can give all types of artists new tools with which to augment their own intelligence, and it seems like the adaptive learning capabilities of today’s RL+supervised learning techniques can make for potentially useful allies. I’m particularly interested in these kind of constrained environments like level design where you ultimately want to follow a gradient towards an implicit goal.
Watch the video of Matthew Guzdial narrating the level editor here (Youtube).
Check out the research paper here: Co-Creative Level Design with Machine Learning (Arxiv).
Think robots are insecure? Think you can prove it? Enter a new “capture the flag” competition:
…Alias Robotics’ “Robot CTF” gives hackers nine challenges to test their robot-compromising skills…
Alias Robotics, a Spanish robot cybersecurity company, has released the Robotics Capture The Flag (RCTF), a series of nine scenarios designed to challenge wannabe robot hackers. “The Robotics CTF is designed to be an online game, available 24/7, launchable through any web browser and designed to learn robot hacking step by step,” they write.
Scenarios: The RCTF consists of nine scenarios that will challenge hackers to exfiltrate information from robots, snoop on robot operating system (ROS) traffic, find hardcoded credentials in ROS source code, and so on. One of the scenarios is listed as “coming soon!” and promises to give wannabe hackers access to “an Alias Robotics’ crafted offensive tool”.
Free hacks! The researchers have released the scenarios under an open source TK license on GitHub. “We envision that as new scenarios become available, the sources will remain at this repository and only a subset of them will be pushed to our web servers http://rctf.aliasrobotics.com for experimentation. We invite the community of roboticists and security researchers to play online and get a robot hacker rank,” they write.
Why it matters: Robotics is seen as one of the next frontiers for contemporary AI research and techniques, but as this research shows – along with other work on hacking physical robots covered in Import AI #109 – the substrates on which many robots are built are still quite insecure.
Read more: Robotics CTF (RCTF), A Playground for Robot Hacking (Arxiv).
Check out the competition and sign-up here (Alias Robotics website).
Fighting fires with drones and deep reinforcement learning:
…Forest fire: If you can simulate it, perhaps you can train an AI system to monitor it?…
Stanford University researchers have used reinforcement learning to train drones in simulation to monitor wildfires more effectively than a hand-coded baseline. The project highlights how many complex real-world tasks, like wildfire monitoring, can be represented as POMDPs (partially observable Markov decision processes), which are tractable for reinforcement learning algorithms.
The approach works like this: The researchers build a simulator that lets them model wildfires on a grid. They then populate this system with simulated drones and use reinforcement learning to train the drones to survey the fire effectively and, most crucially, stay with the ‘fire front’, the expanding frontier of the fire and therefore the part with the greatest potential safety impact. “Each aircraft will get an observation of the fire relative to its own location and orientation. The observations are modeled as an image obtained from the true wildfire state given the aircraft’s current position and heading direction,” they write.
Rewards: The reward function is structured as follows: the aircraft gets penalties for distance from the fire front, for high bank angles, for closeness to other aircraft, and for being near too many non-burning cells.
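A minimal sketch of how those four penalty terms might combine; the weights and the separation threshold below are illustrative guesses, not values from the paper:

```python
# Illustrative weights -- the paper's actual coefficients are not reproduced here.
W_FRONT, W_BANK, W_PROX, W_EMPTY = 1.0, 0.1, 5.0, 0.02
SAFE_SEPARATION = 20.0  # hypothetical minimum separation between aircraft

def wildfire_reward(dist_to_front, bank_angle_rad,
                    dist_to_other, non_burning_in_view):
    """Sum of the four penalty terms described above (each non-positive)."""
    r = -W_FRONT * dist_to_front                        # distance from fire front
    r -= W_BANK * abs(bank_angle_rad)                   # high bank angles
    if dist_to_other < SAFE_SEPARATION:                 # too close to another aircraft
        r -= W_PROX * (SAFE_SEPARATION - dist_to_other)
    r -= W_EMPTY * non_burning_in_view                  # loitering over non-burning cells
    return r

# An aircraft hugging the front with level wings, well separated, scores best:
print(wildfire_reward(0.0, 0.0, 100.0, 5) > wildfire_reward(40.0, 0.5, 10.0, 80))
```

Shaping the reward this way lets the designers trade off coverage of the front against collision risk and wasted flight time without hand-coding a trajectory.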
Belief: The researchers also experiment with what they call a “belief-based approach”, which involves training the drones to build a shared “belief map”: a map of the environment indicating whether they believe particular cells contain fire, updated with real data gathered during the simulated flight. This differs from the observation-based approach, which relies purely on the observations seen by the drones.
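One way to picture a shared belief map is as a per-cell fire probability that each aircraft nudges toward its latest observation. The convex-combination update and the `alpha` trust parameter below are assumptions for illustration, not the paper's actual filter:

```python
def update_belief(belief, observations, alpha=0.8):
    """belief: {cell: P(fire)}. observations: {cell: 0.0 or 1.0} for the
    cells an aircraft actually saw this step. alpha is a hypothetical
    trust parameter; unobserved cells keep their prior."""
    new = dict(belief)
    for cell, seen in observations.items():
        new[cell] = alpha * seen + (1 - alpha) * belief.get(cell, 0.5)
    return new

# Two aircraft share one map and fold in their observations in turn.
belief = {(r, c): 0.5 for r in range(8) for c in range(8)}   # flat prior
belief = update_belief(belief, {(2, 3): 1.0, (2, 4): 0.0})   # aircraft 1
belief = update_belief(belief, {(2, 3): 1.0})                # aircraft 2 confirms
print(belief[(2, 3)])  # climbs toward 1.0 as sightings agree
print(belief[(0, 0)])  # unobserved cell keeps its 0.5 prior
```

Feeding a map like this to the policy, rather than raw observations, is what lets the belief-based agents reason about parts of the fire no single aircraft can currently see.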
Results: Two aircraft with nominal wildfire seed: Both the belief-based and observation-based methods obtain significantly higher rewards than a hand-programmed ‘receding horizon’ baseline. There is no comparison to human performance, though. The belief-based technique does eventually obtain a slightly higher final performance than the observation-based version, but it takes longer to converge to a good solution.
Results: Greater than two aircraft: The system is able to scale to dealing with numbers of aircraft greater than two, but this requires the tweaking of a proximity-based reward to discourage collisions.
Results: Different wildfires: The researchers test their system on two differently shaped wildfires (a t-shape and an arc) and show that both RL-based methods exceed performance of the baseline, and that the belief-based system in particular does well.
Why it matters: We’ve already seen states like California use human-piloted drones to help emergency responders deal with wildfires. As we head into a more dangerous future defined by an increase in the rate of extreme weather events driven by global warming I am curious to see how we might use AI techniques to create certain autonomous surveillance and remediation abilities, like those outlined in this study.
Caveat: Like all studies that show success in simulation, I’ll retain some skepticism till I see such techniques tested on real drones in physical reality.
Read more: Distributed Wildfire Surveillance With Autonomous Aircraft Using Deep Reinforcement Learning (Arxiv).
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: firstname.lastname@example.org…
Pentagon’s AI ethics review taking shape:
The Defense Innovation Board met last week to present some initial findings of their review of the ethical issues in military AI deployment. The DIB is the Pentagon’s advisory panel of experts drawn largely from tech and academia. Speakers covered issues ranging from autonomous weapons systems, to the risk posed by incorporating AI into existing nuclear weapons systems.
The Board plans to present their report to Congress in April 2019.
Read more: Defense Innovation Board to explore the ethics of AI in war (NextGov).
Read more: DIB public meeting (DoD).
Google withdraws bid for $10bn Pentagon contract:
Google has withdrawn its bid for the Pentagon’s latest cloud contract, JEDI, citing uncertainty over whether the work would align with its AI principles.
Read more: Google drops out of Pentagon’s $10bn cloud competition (Bloomberg).
Microsoft employees call for company to not pursue $10bn Pentagon contract:
Following Google’s decision to not bid on JEDI, people identifying themselves as employees at Microsoft published an open letter asking the company to follow suit and withdraw its own bid on the project. (Microsoft submitted a bid for JEDI following the publication of the letter.)
Read more: An open letter to Microsoft (Medium).
Future of Humanity Institute receives £13m funding:
FHI, the multidisciplinary institute at the University of Oxford led by Nick Bostrom, has received a £13.3m donation from the Open Philanthropy Project. This represents a material uptick in funding for AI safety research: the field as a whole, including work done in universities, non-profits and industry, spent c.$10m in 2017, c.$6.5m in 2016 and c.$3m in 2015, according to estimates from the Center for Effective Altruism.
Read more: £13.3m funding boost for FHI (FHI).
Read more: Changes in funding in the AI safety field (CEA).
The Watcher We Nationalized
So every day when you wake up as the head of this government you check The Watcher. It has an official name – a lengthy acronym that expands to list some of the provenance of its powerful technologies – but mostly people just call it The Watcher or sometimes The Watch and very rarely Watcher.
The Watcher is composed of intelligence taps placed on most of the world’s large technology companies. Data gets scraped out of them and combined with various intelligence sources to give the head of state access to their own supercharged search engine. “Spook Google!” is what British tabloids first called it. “Fedbook!” is what some US press called it. And so on.
All you know is that you start your day with The Watcher and you finish your day with it. When you got into office, several years ago, you were met by a note from your predecessor. “Nothing you do will show up in Watcher, unless something terrible happens; get used to it,” read the note.
They were right, mostly. Your jobs bill? Out-performed by some viral memes relating to a (now disgraced) celebrity. The climate change investment? Eclipsed by a new revelation about a data breach at one of the technology companies. In fact, the only thing so far that registered on The Watcher from your part of the world was a failed suitcase bombing attempt on a bank.
Now, heading towards the end of your premiership, you hold onto this phrase and say it to yourself every morning, right before you turn on The Watcher and see what the rhythm of the world says about the day to come. “Nothing you do will show up in Watcher, unless something terrible happens; get used to it”, you say to yourself, then you turn it on.
Things that inspired this story: PRISM, intelligence services, governments built on companies like so many houses of cards, small states, Europe, the tedium of even supposedly important jobs, systems.