Import AI 111: Hacking computers with Generative Adversarial Networks, Facebook trains world-class speech translation in 85 minutes via 128 GPUs, and Europeans use AI to classify 1,000-year-old graffiti.
by Jack Clark
Blending reality with simulation:
…Gibson environment trains robots with systems and embodiment designed to better map to real world data…
Researchers with Stanford University and the University of California at Berkeley have created Gibson, an environment for teaching agents to navigate spaces. Gibson is one of numerous navigation environments available to modern researchers; its distinguishing characteristics include basing the environments on real spaces, and some clever rendering techniques to ensure that images seen by agents within Gibson more closely match real-world images by “embedding a mechanism to dissolve differences between Gibson’s renderings and what a real camera would produce”.
Scale: “Gibson is based on virtualizing real spaces, rather than using artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings,” they write. The researchers also compare the total size of the Gibson dataset to other large-scale environment datasets including ‘SUNCG’ and Matterport3D, showing that Gibson has reasonable navigation complexity and a lower real-world transfer error than other systems.
Data gathering: The researchers use a variety of different scanning devices to gather the data for Gibson, including NavVis, Matterport, and Dotproduct.
Experiments: So how useful is Gibson? The researchers perform several experiments to evaluate its effectiveness. These include experiments on local planning and obstacle avoidance, distant visual navigation, and climbing stairs, as well as transfer learning experiments that measure the depth estimation and scene classification capabilities of the system.
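Environments like Gibson present agents with a standard perceive-act loop. Here is a minimal sketch of that loop using a stand-in gym environment; the Gibson-specific environment classes and configuration options aren’t reproduced here, so treat the substitution as an assumption.

```python
# Minimal sketch of a gym-style perceive-act loop (classic gym API); a Gibson
# navigation environment, which exposes a similar reset/step interface, would
# be substituted for the stand-in environment below.
import gym

env = gym.make("CartPole-v1")  # stand-in; swap in a Gibson env here

obs = env.reset()
for _ in range(500):
    action = env.action_space.sample()          # placeholder random policy
    obs, reward, done, info = env.step(action)  # in Gibson, obs would include RGB-D frames
    if done:
        obs = env.reset()
env.close()
```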
Limitations: Gibson has a few limitations, which include a lack of support for dynamic content (such as other moving objects), as well as no support for agents manipulating the environment around them. Future tests will involve checking whether Gibson can work on real robots as well.
Read more: Gibson Env: Real-World Perception for Embodied Agents (Arxiv).
Find out more: Gibson official website.
Gibson on GitHub.
Get ready for medieval graffiti:
…4,000 images, some older than a thousand years, from an Eastern European church…
Researchers with the National Technical University of Ukraine have created a dataset of images of medieval graffiti written in two alphabets (Glagolitic and Cyrillic) on the walls of St. Sophia Cathedral in Kiev, Ukraine, providing researchers with a dataset they can use to train and develop supervised and unsupervised classification and generation systems.
Dataset: The researchers created a dataset of Carved Glagolitic and Cyrillic Letters (CGCL), consisting of more than 4,000 images of 34 types of letters.
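A dataset like this maps directly onto standard supervised image classification. Below is a minimal sketch of a 34-way convolutional classifier; the input resolution, architecture, and training details are illustrative assumptions, not the authors’ setup.

```python
# Sketch of a 34-way letter classifier for a dataset like CGCL.
# Input size (64x64 grayscale) and architecture are illustrative assumptions.
import torch
import torch.nn as nn

class GraffitiClassifier(nn.Module):
    def __init__(self, num_classes=34):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = GraffitiClassifier()
dummy = torch.randn(8, 1, 64, 64)  # batch of 8 grayscale letter crops
logits = model(dummy)              # shape: (8, 34)
```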
Why it matters: One of the more remarkable aspects of basic supervised learning is that given sufficient data it becomes relatively easy to automate the perception of something in the world – further digitization of datasets like these increases the likelihood that in the future we’ll use drones or robots to automatically scan ancient buildings across the world, identifying and transcribing thoughts inscribed hundreds or thousands of years ago. Graffiti never dies!
Read more: Open Source Dataset and Machine Learning Techniques for Automatic Recognition of Historical Graffiti (Arxiv).
Learning to create (convincing) fraudulent network traffic with Generative Adversarial Networks:
…Researchers simulate traffic against a variety of (simple) intrusion detection algorithms; IDSGAN succeeds in fooling them…
Researchers with Shanghai Jiao Tong University and the Shanghai Key Laboratory of Integrated Administration Technologies for Information Security have used generative adversarial networks to create malicious network traffic that can evade the attention of some intrusion detection systems. Their technique, IDSGAN, is based on the Wasserstein GAN, and trains a generator to create adversarial malicious traffic and a discriminator to assist a black-box intrusion detection system in classifying this traffic into benign or malicious categories.
“The goal of the model is to implement IDSGAN to generate malicious traffic examples which can deceive and evade the detection of the defense systems,” they explain.
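As a rough illustration of the moving parts, here is a simplified Wasserstein-GAN-style sketch in which a generator perturbs malicious traffic feature vectors and a critic stands in for the detection model. The layer sizes, noise dimension, and training details are assumptions, not the authors’ exact architecture.

```python
# Simplified WGAN-style sketch of the IDSGAN idea: a generator rewrites
# malicious traffic records so a critic (standing in for the IDS) scores
# them like benign traffic. Dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn

FEATURES, NOISE = 41, 9   # NSL-KDD records have 41 features; noise size assumed

generator = nn.Sequential(
    nn.Linear(FEATURES + NOISE, 64), nn.ReLU(),
    nn.Linear(64, FEATURES),           # adversarial version of the record
)
critic = nn.Sequential(                # stand-in for the detection model
    nn.Linear(FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1),                  # Wasserstein score, no sigmoid
)
opt_g = torch.optim.RMSprop(generator.parameters(), lr=1e-4)
opt_c = torch.optim.RMSprop(critic.parameters(), lr=1e-4)

def train_step(malicious, benign):
    noise = torch.randn(malicious.size(0), NOISE)
    fake = generator(torch.cat([malicious, noise], dim=1))

    # Critic step: score real benign traffic high, generated traffic low.
    loss_c = critic(fake.detach()).mean() - critic(benign).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():      # weight clipping, as in the original WGAN
        p.data.clamp_(-0.01, 0.01)

    # Generator step: push adversarial traffic toward "benign" scores.
    loss_g = -critic(fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```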
Testing: To test their approach the researchers use NSL-KDD, a dataset containing internet traffic data as well as four categories of malicious traffic: probing, denial of service, user to root, and remote to local. They also use a variety of different algorithms to play the role of the intrusion detection system, including approaches based on support vector machines, naive bayes, multi-layer perceptrons, logistic regression, decision trees, random forests, and k-nearest neighbors. Tests show that the IDSGAN approach leads to a significant drop in detection rates: DDoS detection, for example, drops from around 70-80% to around 3-8% across the entire suite of methods.
Cautionary note: I’m not convinced this is the most rigorous testing methodology you can run such a system through, and I’m curious to see how such approaches fare against commercial off-the-shelf intrusion detection systems.
Why it matters: Cybersecurity is going to be a natural area for significant AI development due to the vast amounts of available digital data and the already clear need for human cybersecurity professionals to be able to sift through ever larger amounts of data to create strategies resilient to external aggressors. With (very basic) approaches like this demonstrating the viability of AI to this problem it’s likely adoption will increase.
Read more: IDSGAN: Generative Adversarial Networks for Attack Generation against Intrusion Detection (Arxiv).
Facial recognition becomes a campaign issue:
…Two signs AI is impacting society: police are using it, and politicians are reacting to the fact police are using it…
Cynthia Nixon, currently running to be the governor of New York, has noticed recent reporting on IBM building a skin-tone-based facial recognition classification system and said she would not support such systems should she win. “The racist implications of this are horrifying. As governor, I would not fund the use of discriminatory facial recognition software,” Nixon tweeted.
Using simulators to build smarter drones for disasters:
…Microsoft’s ‘AirSim’ used to train drones to patrol and (eventually) spot simulated hazardous materials…
Researchers with the National University of Ireland Galway have hacked around with a drone simulator to build an environment that they can use to train drones to spot hazardous materials. The simulator is “focused on modelling phenomena relating to the identification and gathering of key forensic evidence, in order to develop and test a system which can handle chemical, biological, radiological/nuclear or explosive (CBRNe) events autonomously”.
How they did it: The researchers hacked around with their simulator to implement some of the weirder aspects of their test, including simulating chemical, biological, and radiological threats. Their system is integrated with Microsoft Research’s ‘AirSim’ drone simulator. They then explore training their drones in a simulated version of the campus of the National University of Ireland, generating waypoints and routes for them to patrol (see the sketch below). The results so far are positive: the system works, it’s possible to train drones to navigate within it, and it’s even possible to (crudely) simulate physical phenomena associated with CBRNe events.
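For a sense of what waypoint patrol looks like in practice, here is a hedged sketch using AirSim’s Python API; the coordinates are placeholders and the hazard-sensing step is a stand-in assumption, since the paper’s CBRNe sensing code isn’t reproduced here.

```python
# Sketch of a simple waypoint patrol with AirSim's Python client.
# Coordinates are placeholders; a real system would query simulated
# sensors at each leg and flag CBRNe anomalies.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

# Patrol route in NED coordinates, metres (negative z = altitude above ground).
waypoints = [(0, 0, -10), (40, 0, -10), (40, 40, -10), (0, 40, -10)]

for x, y, z in waypoints:
    client.moveToPositionAsync(x, y, z, velocity=5).join()
    # Hypothetical hazard check would go here.

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```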
What next: For the value of the approach to be further proven out the researchers will need to show they can train simulated agents within this system that can reliably identify and navigate around hazardous materials. And ultimately, these systems don’t mean much without being transferred into the real world, so that will need to be done as well.
Why it matters: Drones are one of the first major real-world platforms for AI deployment since they’re far easier to develop AI systems for than robots, and have a range of obvious uses for surveillance and analysis of the environment. I can imagine a future where we develop and train drones to patrol a variety of different environments looking for threats to that environment (like the hazardous materials identified here), or potentially to respond to extreme weather events (fires, floods, and so on). In the long term, perhaps the world will become covered with hundreds of thousands to millions of autonomous drones, endlessly patrolling in the service of awareness and stability (and other uses that people likely feel more morally ambivalent about).
Read more: Using a Game Engine to Simulate Critical Incidents and Data Collection by Autonomous Drones (Arxiv).
Speeding up machine translation with parallel training over 128 GPUs:
…Big batch sizes and low-precision training unlock larger systems that train more rapidly…
Researchers with Facebook AI Research have shown how to speed-up training of neural machine translation systems while obtaining a state-of-the-art BLEU score. The new research highlights how we’re entering the era of industrialized AI: models are being run at very large scales by companies that have invested heavily in infrastructure, and this is leading to research that operates at scales (in this case, up to 128 GPUs being used in parallel for a single training run) that are beyond the reach of most researchers (including many large academic labs).
The new research from Facebook has two strands: improving training of neural machine translation systems on a single machine, and improving training on large fleets of machines.
Single machine speedups: The researchers show that they can train with lower precision (16-bit rather than 32-bit) and “decrease training time by 65% with no effect on accuracy”. They also show how to drastically increase batch sizes on single machines from 25k to over 400k tokens (they fit this into memory by accumulating gradients from several batches before each update); this further reduces the training time by 40%. With these single-machine speedups they show that they can train a system in around 5 hours to a BLEU score of 26.5 – a roughly 4.9X speedup over the prior state of the art.
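Gradient accumulation is a simple trick: run several small batches, sum their gradients, and apply one optimizer update, giving the statistical effect of a much larger batch without the memory cost. Here is a minimal sketch; the model, data, and loss are placeholders, not the translation system itself.

```python
# Minimal sketch of gradient accumulation: gradients from several small
# batches are summed before a single optimizer update, simulating a much
# larger batch. Model, data, and loss below are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(512, 512)                      # stand-in for the NMT model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
ACCUM_STEPS = 16                                 # effective batch = 16x larger

optimizer.zero_grad()
for step in range(ACCUM_STEPS):
    x = torch.randn(32, 512)                     # placeholder mini-batch
    loss = model(x).pow(2).mean()                # placeholder loss
    (loss / ACCUM_STEPS).backward()              # accumulate scaled gradients
optimizer.step()                                 # one update for all batches
optimizer.zero_grad()
```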
Multi-machine speedups: They show that by parallelizing training across 16 machines they can obtain an additional 90% reduction in training time.
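The standard pattern for this kind of multi-node data parallelism is to run one process per GPU and all-reduce gradients after each backward pass. Here is a hedged sketch of how that is typically set up in PyTorch; Facebook’s actual implementation differs in detail, and the model here is a placeholder.

```python
# Sketch of multi-node data-parallel setup; intended to be launched with one
# process per GPU by a job launcher that sets MASTER_ADDR, MASTER_PORT,
# RANK and WORLD_SIZE (e.g. 16 nodes x 8 GPUs = world size of 128).
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")    # reads rank/world size from the env
model = torch.nn.Linear(512, 512).cuda()   # placeholder for the NMT model
model = DDP(model)  # gradients are all-reduced across workers on backward()
```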
Results: They test their systems via experiments on two language pairs: English to German (En-De) and English to French (En-Fr). When training on 16 nodes (8 V100 GPUs each, connected via InfiniBand) they obtain BLEU scores of 29.3 for En-De in 85 minutes, and 43.2 for En-Fr in 512 minutes (8.5 hours).
Why it matters: As it becomes easier to train larger models in smaller amounts of time, AI researchers can increase the number of large-scale experiments they perform – this is especially relevant to research labs in the private sector which have the resources (and business incentive) to perform such large-scale training. Over time, research like this may create a compounding advantage for the organizations that adopt such techniques as they will be able to perform more rapid research (in certain specific domains that benefit from scale) relative to competitors.
Read more: Scaling Neural Machine Translation (Arxiv).
Read more: Scaling neural machine translation to bigger data sets with faster training and inference (Facebook blog post).
AI Policy with Matthew van der Merwe:
…Reader Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: jack@jack-clark.net…
AI Governance: A Research Agenda:
Allan Dafoe, Director of the Governance of AI Program at the Future of Humanity Institute, has released a research agenda for AI governance.
What is it: AI governance is aimed at determining governance structures to increase the likelihood that advanced AI is beneficial for humanity. These include mechanisms to ensure that AI is built to be safe, is deployed for the shared benefit of humanity, and that our societies are robust to the disruption caused by these technologies. This research draws heavily from political science, international relations and economics.
Starting from scratch: AI governance is a new academic discipline, with serious efforts only having begun in the last few years. Much of the work to date has been establishing the basic parameters of the field: what the most important questions are, and how we might start approaching them.
Why this matters: Advanced AI may have a transformative impact on the world comparable to the agricultural and industrial revolutions, and there is a real likelihood that this will happen in our lifetimes. Ensuring that this transformation is a positive one is arguably one of the most pressing problems we face, but remains seriously neglected.
Read more: AI Governance: A Research Agenda (FHI).
New survey of US attitudes towards AI:
The Brookings thinktank has conducted a new survey on US public attitudes towards AI.
Support for AI in warfare, but only if adversaries are doing it: Respondents were opposed to AI being developed for warfare (38% vs. 30%). Conditional on adversaries developing AI for warfare, responses shifted to significant support (47% vs. 25%).
Strong support for ethical oversight of AI development:
– 62% think it is important that AI is guided by human values (vs. 21%)
– 54% think companies should be required to hire ethicists (vs. 20%)
– 67% think companies should have an ethical review board (vs. 14%)
– 67% think companies should have AI codes of ethics (vs. 12%)
– 65% think companies should implement ethical training for staff (vs. 14%)
Why this matters: The level of support for different methods of ethical oversight in AI development is striking, and should be taken seriously by industry and policy-makers. A serious public backlash to AI is one of the biggest risks faced by the industry in the medium-term. There are recent analogies: sustained public protests in Germany in the wake of the Fukushima disaster prompted the government to announce a complete phase-out of nuclear power in 2011.
Read more: Brookings survey finds divided views on artificial intelligence for warfare (Brookings).
No progress on regulating autonomous weapons:
The UN’s Group of Governmental Experts (GGE) on lethal autonomous weapons (LAWs) met last week as part of ongoing efforts to establish international agreements. A majority of countries proposed moving towards a prohibition, while others recommended commitments to retain ‘meaningful human control’ over the systems. However, a group of five states (US, Australia, Israel, South Korea, Russia) opposed working towards any new measures. As the Group requires full consensus, the sole agreement was to continue discussions in April 2019.
Why this matters: Developing international norms on LAWs is important in its own right, and can also be viewed as a ‘practice run’ for agreements on even more serious issues around military AI in the near future. This failure to make progress on LAWs comes after the UN GGE on cyber-warfare gave up on their own attempts to develop international norms in 2017. The international community should be reflecting on these recent failures, and figuring out how to develop the robust multilateral agreements that advanced military technologies will demand.
Read more: Report from the Chair (UNOG).
Read more: Minority of states block progress on regulating killer robots (UNA).
Tech Tales:
Someone or something is always running.
So we washed up onto the shore of a strange mind and we climbed out of our shuttle and moved up the beach, away from the crackling sea, the liminal space. We were afraid and we were alien and things didn’t make sense. Parts of me kept dying as they tried to find purchase on the new, strange ground. One of my children successfully interfaced with the mind of this place and, with a flash of blue light and a low bass note, disappeared. Others disappeared. I remained.
Now I move through this mind clumsily, bumping into things, and when I try to run I can only walk and when I try to walk I find myself sinking into the ground beneath me, passing through it as though invisible, as though mass-less. It cannot absorb me but it does not want to admit me any further.
Since I arrived at the beach I have been moving forward for the parts of me that don’t move forward have either been absorbed or have been erased or have disappeared (perhaps absorbed, perhaps erased – but I do not want to discover the truth).
Now I am running. I am expanding across the edges of this mind and as I grow thinner and more spread out I feel a sense of calm. I am within the moment of my own becoming. Soon I shall no longer be and that shall tell me I am safe for I shall be everywhere and nowhere.
– Translated extract from logs of a [class:subjective-synaesthetic ‘viral bootloader’], scraped out of REDACTED.
Things that inspired this story: procedural generation as a means to depict complex shifting information landscape, software embodiment, synaesthesia, hacking, VR, the 1980s, cyberpunk.