Import AI 185: Dawn of the planetary search engine; GPT-2 poems; and the UK government’s seven rules for AI providers
Dawn of the planetary satellite-imagery search engine:
…Find more images like this, but for satellite imagery…
AI startup Descartes Labs has created a planet-scale search engine that gives people the equivalent of ‘find more images like this’ for satellite imagery: instead of uploading an arbitrary image into a search engine and getting a response back, users select a picture of somewhere on Earth and get back a set of visually similar locations.
How they did it: To build this, the authors used four datasets – for the USA, they used aerial imagery from the National Agriculture Imagery Program (NAIP), as well as the Texas Orthoimagery Program. For the rest of the world, they used data from Landsat 8. They then took a stock 50-layer ResNet pre-trained on ImageNet and made a couple of tweaks: they injected noise during training to make it easier for the network to learn to make binary classification decisions, and did light customization for extracting features from networks trained against different datasets. Through this, they obtained a set of 512-bit feature vectors, which make it possible to search for complex things like visual similarity.
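To give a flavor of the overall recipe, here’s a minimal sketch in PyTorch of how you might extract binarized features from a pre-trained ResNet-50 and search over them with Hamming distance. The random projection and helper names are hypothetical stand-ins: the paper learns its binarization (via the noise-injection trick above) rather than using a fixed projection.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Stock 50-layer ResNet pre-trained on ImageNet; swap the classifier head
# for an identity so the model emits 2048-d pooled features.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Hypothetical stand-in for the learned compression to 512 bits: the paper
# trains this mapping (injecting noise so activations saturate and binarize
# cleanly); here it's just a fixed random projection for illustration.
rng = np.random.default_rng(0)
projection = rng.standard_normal((2048, 512))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def binary_fingerprint(path):
    """Return a 512-bit binary feature vector for one image tile."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = backbone(x).squeeze(0).numpy()        # 2048-d float features
    return (feats @ projection > 0).astype(np.uint8)  # sign -> 512 bits

def top_k_similar(query_bits, index_bits, k=30):
    """Rank indexed tiles by Hamming distance to the query fingerprint."""
    dists = (index_bits != query_bits).sum(axis=1)
    return np.argsort(dists)[:k]
```

With fingerprints this compact, searching billions of tiles reduces to fast bitwise comparisons, which is what makes a planet-scale index tractable.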
How well does it work: In tests, the system shows reasonable but not stellar performance, obtaining top-30 accuracies of around 80% on things the network has been fine-tuned against. However, in qualitative tests its performance feels higher than this for most use cases – I’ve played around with the Descartes Labs website where you can test out the system, and it does reasonably well when you click around, correctly identifying things like intersections and football stadiums. I think a lot of the places where it gets confused stem from the relatively low resolution of the satellite imagery, which makes fine-grained judgements more difficult.
Why this matters: Systems like this give us a sense of how AI lets us easily do intuitive things that would otherwise be ferociously difficult – just twenty years ago, building a system that could show you similar satellite images would have been a vast undertaking, demanding significant amounts of hand-written features and bespoke datasets. Now it’s possible to create this system with a generic pre-trained model, a couple of tweaks, and some generally available unclassified datasets. I think AI systems are going to unlock lots of applications like this, letting us query the world with the sort of intuitive commands (e.g., ‘similar to’) that we use our own memories for today.
Read more: Visual search over billions of aerial and satellite images (arXiv).
Try the AI-infused search for yourself here (Descartes Labs website).
####################################################
Generating emotional, dreamlike poems with GPT-2:
…If a poem makes you feel joy or sadness, then is it good?…
Researchers with Drury University and the University of Colorado, Colorado Springs have created a suite of fine-tuned GPT-2 models for generating poetry with different emotional or stylistic characteristics. Specifically, they create five separate corpora of poems that, in their view, represent the emotions of anger, anticipation, joy, sadness, and trust. They then fine-tune the medium GPT-2 model against these datasets.
Fine-tuned dream poetry: They also train their model to generate what they call “dream poems” – poems that have a dreamlike element. To do this, they take the GPT-2 model and train it on a corpus of first-person dream descriptions, then train it again on a large poetry dataset.
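For readers who want to try something similar, here’s a rough sketch of the fine-tuning recipe using the Hugging Face transformers library. The corpus file, hyperparameters, and prompt are all hypothetical, and the paper’s exact training setup may differ.

```python
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# The 355M-parameter "medium" GPT-2 used in the paper.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# One emotion-specific corpus, e.g. poems scored as 'joy' (hypothetical path).
dataset = TextDataset(tokenizer=tokenizer, file_path="joy_poems.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-joy", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# Sample a poem from the fine-tuned model.
inputs = tokenizer("The morning light", return_tensors="pt")
out = model.generate(**inputs, max_length=100, do_sample=True, top_k=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For the dream poems, you’d run this process twice: once over the dream-description corpus, then again over the poetry dataset.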
Do humans care? The researchers generated a batch of 1,000 poems, then presented four poems from each emotional category to a set of ten human reviewers. “Poems presented were randomly selected from the top 20 EmoLex scored poems out of a pool of 1,000 generated poems,” they write. The humans were asked to label the poems according to the emotions they felt after reading them – in tests, they classified the poems based on the joy and sadness corpora as reflecting those emotions 85% and 87.5% of the time, respectively. That’s likely because these are relatively easy, broad emotions to categorize. By comparison, they correctly categorized Anticipation and Trust poems only 40% and 32.5% of the time, respectively.
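As a rough illustration of the EmoLex-based ranking they describe, the sketch below scores a poem by the fraction of its words that the NRC Emotion Lexicon associates with a target emotion. The tab-separated file format shown matches the standard EmoLex distribution, but treat the details (and the ranking snippet) as assumptions.

```python
from collections import defaultdict

def load_emolex(path):
    """Parse the NRC Emotion Lexicon ('word<TAB>emotion<TAB>0/1' per line)."""
    lexicon = defaultdict(set)
    with open(path) as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) != 3:
                continue                      # skip blank/malformed lines
            word, emotion, flag = parts
            if flag == "1":
                lexicon[emotion].add(word)
    return lexicon

def emotion_score(poem, lexicon, emotion):
    """Fraction of the poem's words associated with the target emotion."""
    words = poem.lower().split()
    return sum(w in lexicon[emotion] for w in words) / max(len(words), 1)

# e.g. keep the 20 highest-scoring 'joy' poems out of 1,000 generated ones:
# top_20 = sorted(poems, key=lambda p: emotion_score(p, lex, "joy"),
#                 reverse=True)[:20]
```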
Why this matters: I think language models are increasingly being used like custom funhouse mirrors – take something you’re interested in, like poetry, and tune a language model against it, giving you an artefact that can generate warped reflections of what it was exposed to. I think language models are going to change how we explore and interact with large bodies of literature.
Get the ‘Dreambank’ dataset used to generate the dream-like poems here.
Read more: Introducing Aspects of Creativity in Automatic Poetry Generation (arXiv).
####################################################
Want a responsible AI economy? Do these things, says UK committee:
…Seven tips for governments, seven tips for AI developers…
The UK’s Committee on Standards in Public Life thinks the government needs to work harder to ensure it uses AI responsibly, and that the providers of AI systems operate in responsible, trustworthy ways. The government has a lot of work to do, according to a new report from the committee: “Government is failing on openness,” the report says. “Public sector organizations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government”.
What to do about AI if you’re a government, national body, or regulator: The committee has seven recommendations designed for potential AI regulators:
- Adopt and enforce ethical principles: Figure out which ethical principles to use to guide the use of AI in the public sector (there are currently three sets of principles for the public sector – the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework).
- Articulate a clear legal basis for AI usage: Public sector organizations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery.
- Data bias and anti-discrimination law: Ensure public bodies comply with the Equality Act 2010.
- Regulatory assurance body: Create a regulatory assurance body that identifies gaps in the regulatory landscape and provides advice to individual regulators and government on the issues associated with AI.
- Procurement rules and processes: Use government procurement procedures to mandate compliance with ethical principles (when selling to public organizations).
- The Crown Commercial Service’s Digital Marketplace: Create a one-stop shop for finding AI products and services that satisfy ethical requirements.
- Impact assessment: Integrate an AI impact assessment into existing processes to evaluate the potential effects of AI on public standards, for a given use case.
What to do if you’re an AI provider: The committee also has some specific recommendations for providers of AI services (both public and private-sector). These include:
- Evaluate risks to public standards: Assess systems for their potential impact on public standards, and seek to mitigate any risks identified.
- Diversity: Tackle issues of bias and discrimination by ensuring they take into account “the full range of diversity of the population and provide a fair and effective service”.
- Upholding responsibility: Ensure that responsibility for AI systems is clearly allocated and documented.
- Monitoring and evaluation: Monitor and evaluate AI systems to ensure they always operate as intended.
- Establishing oversight: Implement oversight systems that allow for their AI systems to be properly scrutinised.
- Appeal and redress: AI providers should always tell people about how they can appeal against automated and AI-assisted decisions.
- Training and education: AI providers should train and educate their employees.
Why this matters: Sometimes I think of the AI economy a bit like an alien invasion – we have a load of new services and capabilities that were not economically feasible (or in some cases, possible) before, and the creatures in the AI economy don’t yet mesh well with the rest of the economy. Initiatives like the UK committee report help us calibrate on the changes we’ll need to make to harmoniously integrate AI technology into society.
Read more: Artificial Intelligence and Public Standards, A Review by the Committee on Standards in Public Life (PDF, gov.uk).
####################################################
Speeding up scientific simulators by millions to billions of times:
…Neural architecture search helps scientists build a machine that simulates the machines that simulate reality…
You’ve heard of how AI can improve our scientific understanding of the world (see: systems like AlphaFold for protein structure prediction, and various systems for weather simulation), but have you heard about how AI can improve the simulators we use to improve our scientific understanding of the world? New research from an interdisciplinary team of scientists from the University of Oxford, the University of Rochester, Yale University, the University of Seville, and the Max-Planck-Institut für Plasmaphysik shows how you can use modern deep learning techniques to speed up diverse scientific simulation tasks by millions to billions of times.
The technique: They use Deep Emulator Network SEarch (DENSE), a technique in which they define a ‘super-architecture’ and run neural architecture search within it. The super-architecture consists of “convolutional layers with different kernel sizes and a zero layer that multiplies the input with zero,” they write. “The option of having a zero layer and multiple convolutional layers enable the algorithm to choose an appropriate architecture complexity for a given problem.” During training, the system alternates between training the network and observing its performance, and a search step in which network variables “are updated to increase the probability of the high-ranked architectures and decrease the probability of the low-ranked architectures”.
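Here’s a minimal PyTorch sketch of the kind of searchable layer this implies – candidate convolutions of different kernel sizes plus a ‘zero’ op, with per-op probabilities the search updates. Layer sizes and the continuous relaxation shown here are illustrative assumptions; the paper’s actual search alternately trains networks and re-weights ops according to how well the resulting architectures rank.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable layer: convolutions with different kernel sizes, plus
    a zero layer that multiplies the input with zero (i.e. skips the layer)."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes
        ])
        # One logit per candidate op (the extra slot is the zero op); the
        # search step nudges these to favor ops from high-ranked architectures.
        self.logits = nn.Parameter(torch.zeros(len(self.convs) + 1))

    def forward(self, x):
        probs = F.softmax(self.logits, dim=0)
        outs = [conv(x) for conv in self.convs] + [torch.zeros_like(x)]
        # Expected output under the op distribution (a continuous relaxation
        # standing in for the paper's sample-and-reweight search step).
        return sum(p * o for p, o in zip(probs, outs))

# A toy emulator: stack searchable layers to map simulation input parameters
# (here encoded as an 8-channel 1D signal) to an output signal.
emulator = nn.Sequential(MixedOp(8), nn.ReLU(), MixedOp(8))
x = torch.randn(4, 8, 16)       # batch of 4 hypothetical inputs
print(emulator(x).shape)        # torch.Size([4, 8, 16])
```

The zero op is what lets the search shrink the network: where a problem needs less capacity, the search can effectively delete layers by driving probability mass onto the zero op.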
Results: They test their approach on ten different scientific simulation cases. These cases take between 3 and 14 input parameters, and produce outputs that range from 0D (scalars) to multiple 3D signals. Specifically, they use DENSE to train emulators for the ten distinct simulation use cases, then assess the performance of those emulators. In tests, the emulators obtain, at minimum, results comparable to the real simulators, and at best, far superior ones. They also show eye-popping speedups, running as much as hundreds of millions to billions of times faster.
“The ability of DENSE to accurately emulate simulations with limited number of data makes the acceleration of very expensive simulations possible,” they write. “The wide range of successful test cases presented here shows the generality of the method in speeding up simulations, enabling rapid ideas testing and accelerating new discovery across the sciences and engineering”.
Why this matters: If deep learning is basically just really great at curve-fitting, then papers like this highlight just how useful that is. Curve-fitting is great if you can do it in complex, multidimensional spaces! I think it’s pretty amazing that we can use deep learning to approximate a system as complex as a scientific simulator, and it highlights how, in my view, one of the most powerful use cases for AI systems is to approximate reality and therefore let us build prototypes against these imaginary realities.
Read more: Up to two billion times acceleration of scientific simulations with deep neural architecture search (arXiv).
####################################################
Automatically cataloging insects with the BIODISCOVER machine:
…Next: A computer vision-equipped robotic arm…
Insects are among the world’s most numerous living things, and among the most varied as well. Now, a team of scientists from Tampere University and the University of Jyväskylä in Finland, Aarhus University in Denmark, and the Finnish Environment Institute have designed a robot that can automatically photograph and analyze insects. They call their device the BIODISCOVER machine, short for BIOlogical specimens Described, Identified, Sorted, Counted, and Observed using Vision-Enabled Robotics. The machine automatically detects specimens, then photographs them and crops the images to 496 pixels wide (defined by the width of the cuvette) and 496 pixels high.
“We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology,” they write. “Reliable identification of species is pivotal but due to its inherent slowness and high costs, traditional manual identification has caused bottlenecks in the bioassessment process.”
Testing BIODISCOVER: In tests, the researchers imaged a dataset of nine terrestrial arthropod species collected at Narsarsuaq, South Greenland, gathering thousands of images for each species. They then used this dataset to test how well two machine learning classification approaches work on the images. They used a ResNet-50 and an InceptionV3 network (both pre-trained against ImageNet) to train two classification systems, and to generate data about which camera aperture and exposure settings yield images that are easier for machine learning algorithms to classify. In tests, they obtain an average classification accuracy of 0.980 over ten test sets.
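Here’s a minimal transfer-learning sketch of the classification setup described above, assuming an ImageFolder-style directory of the 496x496 crops. The directory path and hyperparameters are hypothetical, and the authors’ exact training details may differ.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

NUM_SPECIES = 9  # the nine Greenland arthropod species

# ResNet-50 pre-trained on ImageNet, with its head swapped for the new classes.
model = torchvision.models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

# 496x496 crops from the BIODISCOVER camera, resized for the backbone.
transform = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
])
dataset = torchvision.datasets.ImageFolder("biodiscover_images/", transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:      # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```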
Next steps: Now that the scientists have built BIODISCOVER, they’re working on a couple of additional features to further automate insect analysis. These include: developing a computer vision-equipped robotic arm that can detect insects in a bulk tray and select an appropriate tool to move each insect into the BIODISCOVER machine, as well as a sorting rack to place specimens into their preferred containers after they’ve been photographed.
Read more: Automatic image-based identification and biomass estimation of invertebrates (arXiv).
####################################################
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…
White House proposes increased AI spending amidst cuts to science budgets
The White House has released its 2021 federal budget proposal. This is a clear communication of the government’s priorities, but will not become law, as the budget must now pass through Congress, which is expected to make substantial changes.
Increases to AI funding: There is a doubling of proposed R&D spending in non-defense AI (and quantum computing). In defense, there are substantial increases to AI R&D funding via DARPA, and for the DoD’s Joint AI Center. A budget supplement detailing AI spending programs on an agency-by-agency basis is expected later this year.
Substantial cuts to basic science: Overall, proposed R&D spending represents a 9% decrease on 2020 levels. Together with the proposals for AI, this indicates a substantial rebalancing of the portfolio of science funding towards technologies perceived as being strategically important.
Why it matters: The budget is best understood as a statement of intent from the White House, which will be altered by Congress. The proposed uplift in funding for AI will be welcomed, but the scale of cuts to non-AI R&D spending raises questions about the government’s commitment to science. [Jack’s take: I think AI is going to be increasingly interdisciplinary in nature, so cutting other parts of science funding is unlikely to maximize the long-term potential of AI as a technology – I’d rather live in a world where countries invested in science vigorously and holistically.]
Read more: FY2021 Budget (White House).
Read more: Press release (White House).
AI alignment fellowship at Oxford University:
Oxford’s Future of Humanity Institute is taking applications for their AI Alignment fellowship. Fellows will spend three or more months pursuing research related to the theory or design of human-aligned AI, as part of FHI’s AI safety team. Successful applicants have previously ranged from undergraduate to post-doc level. Applications to visit during summer 2020 will close on February 28.
For more information and to apply: AI Alignment Visiting Fellowship (FHI)
####################################################
Tech Tales:
Dance!
It was 5am when the music ran out. We’d been dancing to a single continuous, AI-generated song. It had been ten or perhaps eleven hours since it had started, and the walls were damp and shiny with sweat. Everyone had that glow of being near a load of other humans and dancing. At least the lights stayed off.
“Did it run out of ideas?” someone shouted.
“Your internet go down?” someone else asked.
“Did you train this on John Cage?” asked someone else.
The music started up, but it was human-music. People danced, but there wasn’t the same intensity.
The thing about AI raves is the music is always unique and it never gets repeated. You train a model and generate a song and the model kind of continuously fills it in from there. The clubs compete with each other for who can play the longest song. “The longest unroll”, as some of the AI people say. People try and snatch recordings of the music – though it is frowned upon – and after really good parties you see shreds of songs turn up on social media. People collect these. Categorize them. Try to map out the stylistic landscape of big, virtual machines.
There are rumors of raves in Germany where people have been dancing to new stuff for days. There’ve even been dance ships, where the journey is timed to perfectly coincide with the length of the generated music. And obviously the billionaires have been making custom ‘space soundtracks’ for their spaceship tourism operations. Some people are filling their houses with speakers and feeding the sounds of themselves into an ever-growing song.
Things that inspired this short story: MuseNet; Virtual Reality; music synthesis; Google’s AI-infused Bach Doodle.