Import AI 186: AI + Satellite Imagery; Votefakes!; Schmidhuber on AI’s past&present
by Jack Clark
AI + Satellites = Climate Change Monitoring:
…Deeplab v3 + Sentinel satellite ‘SAR’ data = lake monitoring through clouds…
Researchers with ETH Zurich and the Skolkovo Institute of Science and Technology have used machine learning to develop a system that can analyze satellite photos of lakes and work out if they’re covered in ice or not. This kind of capability is potentially useful when building AI-infused earth monitoring systems.
Why they did it: The researchers use synthetic aperture radar (SAR) data from the Sentinel-1 satellite. SAR is useful because it sees through cloud cover, so they can analyze lakes under variable weather conditions. “Systems based on optical satellite data will fail to determine these key events if they coincide with a cloudy period,” they write. “The temporal resolution of Sentinel-1 falls just short of the 2-day requirement of GCOS, still it can provide an excellent ‘observation backbone’ for an operational system that could fill the gaps with optical satellite data”.
How they did it: The researchers paired the Sentinel satellite data with a Deeplab v3+ semantic segmentation network. They tested their approach on three lakes in Switzerland (Sils, Silvaplana, St. Moritz), using satellite data gathered during two separate winters (2016/17 and 2017/18). They obtain accuracy scores of around 95%, and find that the network does a reasonable job of identifying when lakes are frozen.
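For a concrete feel for this kind of pipeline, here’s a minimal PyTorch sketch (not the authors’ code): a DeepLab-family segmentation network labelling each pixel of a Sentinel-1 tile as frozen or non-frozen. Note that torchvision ships DeepLab v3 rather than the v3+ variant used in the paper, and the two-class setup, channel handling, and tile size are all illustrative assumptions.
```python
# Sketch only: DeepLab-style segmentation of SAR tiles into frozen / non-frozen
# lake surface. The paper uses DeepLab v3+; torchvision's DeepLab v3 stands in.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # assumption: 0 = non-frozen (water), 1 = frozen (ice/snow)

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Sentinel-1 provides VV and VH polarisations; stack them and repeat one band
# so the standard 3-channel backbone can be reused unchanged (assumption).
sar_vv = torch.rand(1, 1, 512, 512)   # placeholder VV backscatter tile
sar_vh = torch.rand(1, 1, 512, 512)   # placeholder VH backscatter tile
x = torch.cat([sar_vv, sar_vh, sar_vv], dim=1)

with torch.no_grad():
    logits = model(x)["out"]            # (1, NUM_CLASSES, 512, 512)
    ice_mask = logits.argmax(dim=1)     # per-pixel frozen / non-frozen label

frozen_fraction = ice_mask.float().mean().item()
print(f"Estimated frozen fraction of tile: {frozen_fraction:.2%}")
```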
Why this matters: Papers like this show how people are increasingly using AI techniques as a kind of plug-and-play sensing capability, where they assemble a dataset, train a classifier, and then either build or plan an automated system based on the newly created detector.
Read more: Lake Ice Detection from Sentinel-1 SAR with Deep Learning (arXiv).
####################################################
Waymo dataset + LSTM = a surprisingly well-performing self-driving car prototype:
…Just how far can a well-tuned LSTM get you?…
Researchers with Columbia University want to see how smart a self-driving car can get if it’s trained in a relatively simple way on a massive dataset. To that end, they train an LSTM-based system on 12 input features from the Waymo Open Dataset, a massive set of self-driving car data released by Waymo last year (Import AI 161).
Performance of a well-tuned LSTM: In tests, an LSTM system trained with all the inputs from all the cameras on the car gets a minimum loss of about 0.1327. That’s superior to other similarly simple systems based on technologies like convolutional neural nets, or gradient boosting. But it’s a far cry from the 99.999% accuracy I think most people would intuitively want in a self-driving car.
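As a rough illustration of how simple this kind of baseline can be, here’s a hedged sketch (not the authors’ code) of an LSTM regressor over 12 per-timestep features; the sequence length, hidden size, and the choice of two regression targets are assumptions made for the example.
```python
# Sketch only: an LSTM that consumes a short history of 12 scalar features per
# timestep and regresses driving targets from the final hidden state.
import torch
import torch.nn as nn

class DrivingLSTM(nn.Module):
    def __init__(self, n_features=12, hidden_size=64, n_outputs=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_outputs)

    def forward(self, x):             # x: (batch, timesteps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last timestep

model = DrivingLSTM()
batch = torch.rand(8, 20, 12)         # 8 sequences, 20 timesteps, 12 features
targets = torch.rand(8, 2)            # placeholder driving targets
loss = nn.MSELoss()(model(batch), targets)
loss.backward()
print(f"toy training loss: {loss.item():.4f}")
```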
Why this matters: I think papers like this emphasize the extent to which neural nets are now utterly mainstream in AI research. It also shows how industry can shape the type of research that gets conducted in AI purely by releasing its own datasets, which become the environments academics use to test, calibrate, and develop AI research approaches.
Read more: An LSTM-Based Autonomous Driving Model Using Waymo Open Dataset (arXiv).
####################################################
Votefakes: Indian politician uses synthetic video to speak to more voters:
…Deepfakes + Politics + Voter-Targeting = A whole new way to persuade…
An Indian politician has used AI technology to generate synthetic videos of themselves giving the same speech in multiple languages, marking a possible new tool that politicians will use to target the electorate.
Votefakes: “When the Delhi BJP IT Cell partnered with political communications firm The Ideaz Factory to create ‘positive campaigns’ using deepfakes to reach different linguistic voter bases, it marked the debut of deepfakes in election campaigns in India,” Vice reports. “Deepfake technology has helped us scale campaign efforts like never before,” Neelkant Bakshi, co-incharge of social media and IT for BJP Delhi, tells VICE. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”
Why this matters: AI lets people scale themselves – whether by automating and scaling out certain forms of analysis, or, as here, by automating and scaling out how they appear to other people. With modern AI tools, a politician can be significantly more present across more diverse communities. I expect this will lead to some fantastically weird political campaigns and, later, the emergence of some very odd politicians.
Read more: We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign (Vice).
####################################################
Computer Vision pioneer switches focus to avoid ethical quandaries:
…If technology is so neutral, then why are so many uses of computer vision so skeezy?…
The creator of YOLO, a popular object detection system, has stopped doing computer vision research due to concerns about how the technology is used.
“I stopped doing CV research because I saw the impact my work was having,” wrote Joe Redmon on Twitter. “I loved the work but the military applications and privacy concerns eventually became impossible to ignore.”
This makes sense, given Redmon’s unusually frank approach to their research. “‘What are we going to do with these detectors now that we have them?’ A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to…. wait, you’re saying that’s exactly what it will be used for?? Oh. Well the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait…”, they wrote in the research paper announcing YOLOv3 (Import AI: 88).
Read more at Joe Redmon’s Twitter page (Twitter).
####################################################
Better Satellite Superresolution via Better Embeddings:
…Up-scaling + regular satellite imaging passes = automatic planet monitoring…
Superresolution is where you train a system to produce high-resolution versions of low-resolution images; in other words, if I show you a bunch of black and white pixels on a green field, it’d be great if you were smart enough to figure out this was a photo of a cow and produce that for me. Now, researchers from Element AI, MILA, the University of Montreal, and McGill University have published details about a system that can take in multiple low-resolution images and stitch them together into high-quality superresolution images.
HighRes-net: The key to this research is HighRes-net, an architecture that can fuse an arbitrary number of low-resolution frames together to form a high-resolution image. One of the key tricks here is the computation of a shared representation across the multiple low-resolution views – by embedding each view into the same feature space and then embedding it jointly with the shared representation, the network finds it easier to learn which features overlap across views and which don’t, which helps it make marginally smarter super-resolution judgement calls. Specifically, the authors claim HighRes-net is “the first deep learning approach to MFSR that learns the typical sub-tasks of MFSR in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss.”
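To make the fusion idea more concrete, here’s a heavily simplified sketch (not the released HighRes-net code): each low-resolution view is encoded together with a shared reference (here, the pixel-wise median of the views), and the per-view encodings are fused pairwise and recursively until one representation remains, which is then upsampled. The layer widths, the 2x upscaling factor, and the median reference are illustrative assumptions.
```python
# Sketch only: recursive fusion of many low-res views into one super-resolved image.
import torch
import torch.nn as nn

class TinyFusionSR(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # encode(frame, shared reference) -> per-view embedding
        self.encode = nn.Sequential(nn.Conv2d(2, ch, 3, padding=1), nn.ReLU())
        # fuse(embedding, embedding) -> embedding
        self.fuse = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.upsample = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, frames):                # frames: (batch, n_views, H, W)
        shared = frames.median(dim=1, keepdim=True).values
        views = [self.encode(torch.cat([frames[:, i:i + 1], shared], dim=1))
                 for i in range(frames.shape[1])]
        while len(views) > 1:                 # recursive pairwise fusion
            views = [self.fuse(torch.cat([views[i], views[i + 1]], dim=1))
                     for i in range(0, len(views) - 1, 2)] + (
                     [views[-1]] if len(views) % 2 else [])
        return self.upsample(views[0])        # (batch, 1, 2H, 2W)

lowres_views = torch.rand(4, 8, 32, 32)       # 4 scenes x 8 low-res passes
print(TinyFusionSR()(lowres_views).shape)     # torch.Size([4, 1, 64, 64])
```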
How well does it work? The researchers tested out their system on the PROBA-V dataset, a satellite imagery dataset that consists of high-resolution / low-resolution imagery pairs. (According to the researchers, much superresolution research tests on algorithmically-generated low-res images, which means the tests can be a bit suspect.) They entered their model into the European Space Agency’s Kelvin competition, obtaining top scores on the public leaderboard and second-best scores on a private evaluation.
Why this matters: Techniques like this could let people use more low-resolution satellite imagery to analyze the world around them. “There is an abundance of low-resolution yet high-revisit low-cost satellite imagery, but they often lack the detailed information of expensive high-resolution imagery,” the researchers write. “We believe MFSR can uplift its potential to NGOs and non-profits”.
Get the code for HighRes-net here (GitHub).
Read more: HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of Satellite Imagery (arXiv).
####################################################
AI industrialization means AI efficiency: Amazon shrinks the Transformer, gets decent results, publishes the code:
…Like the Transformer but hate how big it is? Try out Amazon’s diet Transformers…
Amazon Web Services researchers have developed three variations on the Transformer architecture, all of which demonstrate significant efficiency gains over the stock Transformer.
Who cares about the Transformer? The Transformer is a fundamental AI component that was first published in 2017 – one of the main reasons people like Transformers is that the architecture uses attention mechanisms to help it learn subtle relationships between data. It’s this capability that has made Transformers quickly become fundamental plug-in components, appearing in AI systems as diverse as GPT-2, BERT, and even AlphaStar. But the Transformer has one problem – it can be pretty expensive to use, because the cost of its attention computations grows quadratically with the length of the input sequence. Amazon has sought to deal with this by developing three novel variants on the Transformer.
The Transformer, three ways: Amazon outlines three variants on the Transformer, each more efficient in a different way. “The design principle is to still preserve the long and short range dependency in the sequence but with less connections,” the researchers write. They test each Transformer on two common language model benchmark datasets: Penn Treebank (PTB) and WikiText-2 (WT-2). In tests, the Dilated Transformer gets a test score of 110.92 on PTB and 147.58 on WT-2, versus 103.72 and 140.74 for the full Transformer – a bit of a performance hit, but the Dilated Transformer saves about 70% on model size relative to the full model. When reading the complexities below, bear in mind the computational complexity of a full Transformer is O(n^2 * h). (n = length of sequence; h = size of hidden state; k = filter size; b = base window size; m = cardinal number.) A rough sketch of the shared pruning idea appears after this list.
– Dilated Transformer: O(n * k * h): Use dilated connections so you can have a larger receptive field for a similar cost.
– Dilated Transformer with Memory: O(n * k * c * h): Same as above, along with “we try to cache more local contexts by memorizing the nodes in the previous dilated connections”.
– Cascade Transformer: O(n * b * m^1 * h): They use cascading connections “to exponentially incorporate the local connections”.
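To illustrate the idea these variants share – pruning most attention connections while preserving short- and long-range dependencies – here’s a small, hypothetical sketch of a dilated causal attention mask; the dilation rate and per-position budget k are assumptions, and this is not Amazon’s implementation.
```python
# Sketch only: each query attends to at most k earlier positions spaced at a
# fixed dilation instead of all n positions, dropping the per-layer cost from
# O(n^2 * h) toward O(n * k * h).
import torch

def dilated_causal_mask(n, k=4, dilation=2):
    """Boolean mask (n, n): True where query q may attend key position p."""
    mask = torch.zeros(n, n, dtype=torch.bool)
    for q in range(n):
        for j in range(k):
            p = q - j * dilation          # look back at dilated offsets only
            if p >= 0:
                mask[q, p] = True
    return mask

n, h = 8, 16
mask = dilated_causal_mask(n)
q = k_ = v = torch.rand(1, n, h)                     # toy query/key/value
scores = (q @ k_.transpose(-2, -1)) / h ** 0.5
scores = scores.masked_fill(~mask, float("-inf"))    # prune disallowed connections
attn_out = scores.softmax(dim=-1) @ v                # (1, n, h)
print(mask.int())                                    # sparse, causal, dilated pattern
```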
Why this matters: If we’re going through a period of AI industrialization, then something worth tracking is not only the frontier capabilities of AI systems, but also the efficiency improvements we see in these systems over time. I think it’ll be increasingly valuable to track improvements here, and it will give us a better sense of the economics of deploying various types of AI systems.
Read more: Transformer on a Diet (arXiv).
Get the code here (cgraywang, GitHub).
####################################################
Schmidhuber on AI in the 2010s and AI in the 2020s:
…Famed researcher looks backwards and forwards; fantastic futures and worrisome trends…
Jürgen Schmidhuber, an artificial intelligence researcher who co-invented the LSTM, has published a retrospective on the 2010s in AI, and an outlook for the coming decade. As with all Schmidhuber blogs, this post generally ties breakthroughs in the 2010s back to work done by Schmidhuber’s lab/students in the early 90s – so put that aside while reading and focus on the insights.
What happened in the 2010s? The Schmidhuber post makes clear how many AI capabilities went from barely working in research to being used in production at multi-billion dollar companies. Some highlights of technologies that went from being juvenile to being deployed in production at massive scale:
– Neural machine translation
– Handwriting recognition
– Really, really deep networks: In the 2010s, we transitioned from training networks with tens of layers to training networks with hundreds of layers, via inventions like Highway Networks and Residual Nets – this has let us train larger, more capable systems that extract even more refined signals from subtle patterns (a minimal sketch of the residual idea appears after this list).
– GANs happened – it became easy to train systems to synthesize variations on their own datasets, letting us do interesting things like generating images and audio, and weirder things like Amazon using GANs to simulate e-commerce customers.
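For readers who haven’t met residual networks, here’s a minimal, illustrative sketch (layer widths are arbitrary): each block adds a learned correction to its input, so the identity path keeps gradients flowing even when hundreds of blocks are stacked.
```python
# Sketch only: a residual block and a very deep stack of them.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut + learned residual

# Stacking hundreds of such blocks remains trainable thanks to the shortcut path.
deep_net = nn.Sequential(*[ResidualBlock() for _ in range(200)])
print(deep_net(torch.rand(2, 64)).shape)      # torch.Size([2, 64])
```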
What do we have to look forward to in the 2020s?
– Data markets: As more and more of the world digitizes, we can expect data to become more valuable. Schmidhuber suspects the 2020s will see numerous attempts to create “efficient data markets to figure out your data’s true financial value through the interplay between supply and demand”.
– AI for command-and-control nations: Meanwhile, some nations may use AI technologies to increase their ability to control and direct their citizens: “some nations may find it easier than others to become more complex kinds of super-organisms at the expense of the privacy rights of their constituents,” he writes.
– Real World AI: AI systems will start to be deployed into industrial processes and machines and robots, which will lead to AI having a greater influence on the economy.
Why this matters: Schmidhuber is an interesting figure in AI research – he’s sometimes divisive, and occasionally perceived as being somewhat pushy with regard to seeking credit for certain ideas in AI research, but he’s always interesting! Read the post in full, if only to get to the treat at the end about using AI to colonize the “visible universe”.
Read more: The 2010s: Our Decade of Deep Learning / Outlook on the 2020s (arXiv).
####################################################
AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…
Europe’s approach to AI regulation
The European Commission has published their long-awaited white paper on AI regulation. The white paper is released alongside reports on Europe’s data strategy, and on safety and liability. These build on Europe’s Coordinated Plan on AI (see Import #143) and the recommendations of their high-level expert group (see Import #126).
High-risk applications: The European approach will be ‘risk-based’, with high-risk AI applications subject to more stringent governance and regulatory measures. They propose two necessary conditions for an application to be deemed high-risk:
(1) it is employed in a sector that typically involves significant risks (e.g. healthcare)
(2) the application itself is one likely to generate significant risks (e.g. treating patients).
US criticism: The US Government’s Chief Technology Officer, Michael Kratsios, criticized the proposals as being too ‘blunt’ in their bifurcation of applications into high- and low-risk, arguing that it is better to treat risk as a spectrum when determining appropriate regulations, and that the US’s light-touch approach is more flexible in this regard, and better overall.
Matthew’s view: To be useful, a regulatory framework has to carve up messy real-world things into neat categories, and it is often better to deal with nuance at a later stage, when designing and implementing legislation. In many countries it is illegal to drive without headlights at night, despite there being no clear line between night and day. Nonetheless, having laws that distinguish between driving at night and day is plausibly better than having more precise laws (e.g. in terms of measured light levels), or no laws at all in this domain. There are trade-offs when designing governance regimes, of which bluntness vs. nuance is just one, and they should be judged on a holistic basis. In the absence of much detail on the US approach to AI regulation with regard to risks, it is too early to properly compare it with Europe’s.
Read more: On Artificial Intelligence – A European approach to excellence and trust (EU)
Read more: White House Tech Chief Calls Europe’s AI Principles Clumsy Compared to U.S. Approach
DoD adopts AI principles:
DefenseOne reports that the DoD plans to adopt the AI principles drawn up by the Defense Innovation Board (DIB). A draft of these principles was published in October (see Import #171).
Matthew’s view: I was impressed by the DIB’s AI principles and the process by which they were arrived at. The process drew on deep involvement from a broad group of experts, and the principles underwent stress testing with a ‘red teaming’ exercise. The principles focus on the safety, robustness, and interpretability of AI systems. They also take seriously the need to develop guidelines that will remain relevant as AI capabilities grow stronger.
Read more: Pentagon to Adopt Detailed Principles for Using AI.
Read more: Draft AI Principles (DoD).
####################################################
Tech Tales:
The Interface
A Corporate Counsel Computer, 2022
Hello this is Ava at the Contract Services Company, what do you need?
Well hey Ava, how’s it going?
It’s good, and I hope you’re having a great day. What services can we provide?
Can you get me access to Construction_Alpha-009 that was assigned to Mitchell’s Construction?
Checking… verified. I sure can! Who would you like to speak to?
The feature librarian.
Memory? Full-spectrum? Auditory?-
I need the one that does Memory, specialization Emotional
Okay, transforming… This is Ava, the librarian at the Contract Services Company Emotional Memory Department, how can I help you?
Show me what features activated before they left the construction site on July 4th, 2025.
Checking…sure, I can do that! What format do you want?
Compile it into a sequential movie, rendered as instances of 2-Dimensional images, in the style of 20th Century film.
Okay, I can do that. Please hold…
…Are you there?
I am.
Would you like me to play the movie?
Yes, thank you Ava.
Things that inspired this story: AI Dungeon, GPT-2, novel methods of navigating the surfaces of generative models, UX, HCI, Blade Runner.