Import AI 233: AI needs AI designers; estimating COVID risk with AI; the dreams of an old computer programmer.

Facebook trains a COVID-risk-estimating X-ray image analysis system:
…Collaboration with NYU yields a COVID-spotting AI model…
Facebook has worked with NYU to analyze chest X-rays from people with COVID and has created an AI system that can roughly estimate risks for different people. One of the things this work sheds light on is the different amounts of data we need for training systems from scratch versus fine-tuning them.

How they made it: They pre-trained their system on the MIMIC-CXR dataset (377,110 chest X-rays) and the CheXpert dataset (224,316 chest X-rays) – neither of these contained X-rays of patients with COVID symptoms, though they did include patients with a range of chest conditions. They then finetuned this on a dataset gathered by NYU, consisting of 26,838 X-rays from patients exhibiting a variety of COVID symptoms. Finally, they trained a system to predict adverse events and symptoms indicating increased oxygen requirements.
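The pretrain-then-finetune recipe above can be sketched in miniature. This is a toy numpy illustration of the two-stage idea – big generic corpus first, small task-specific dataset second – not the actual Facebook/NYU method (which used self-supervised pretraining on real X-rays); the array sizes and labels here are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real datasets: a large non-COVID corpus
# (MIMIC-CXR + CheXpert, ~600k images) vs a small COVID-specific set
# (NYU's ~27k images). Tiny random arrays just to show the recipe.
pretrain_x = rng.normal(size=(600, 32))            # large generic corpus
finetune_x = rng.normal(size=(27, 32))             # small COVID-specific set
finetune_y = (finetune_x[:, 0] > 0).astype(float)  # toy "adverse event" label

# Stage 1: "pretraining" -- learn a generic representation from the big
# corpus. A PCA projection stands in for self-supervised pretraining.
mean = pretrain_x.mean(axis=0)
_, _, vt = np.linalg.svd(pretrain_x - mean, full_matrices=False)
encoder = vt[:8].T  # keep the top-8 components as the learned features

def encode(x):
    return (x - mean) @ encoder

# Stage 2: "finetuning" -- fit a small logistic head on the scarce
# task data, reusing the pretrained encoder instead of learning from scratch.
feats = encode(finetune_x)
w = np.zeros(feats.shape[1])
b = 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - finetune_y
    w -= 0.1 * feats.T @ grad / len(feats)
    b -= 0.1 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(feats @ w + b)))) > 0.5
print("train accuracy:", (preds == finetune_y).mean())
```

The point the sketch makes is the data-budget one from the paragraph above: the expensive representation is learned once on the big corpus, so the task-specific stage only has to fit a small head on far less data.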
  Did it work? In tests, the system developed by the NYU/Facebook team outperformed a prior COVID detection model (COVID-GMIC) when predicting adverse events 48, 72, and 96 hours out, though it had slightly worse performance when making 24-hour predictions. They also compared their system against two human radiologists: it was more accurate than the people at 48, 72, and 96 hours, and slightly less accurate over a 24-hour window. However, “It is possible that with further calibration, radiologist performance could be improved for the task of adverse event prediction”, they note.
  Read more: COVID-19 Deterioration Prediction via Self-Supervised Representation Learning and Multi-Image Prediction (arXiv).
  Get the code here (Facebook, GitHub).

###################################################

AI needs its own design practice:
…Microsoft researcher lays out the case for more intentional design…
In 2021, AI systems matter. They’re being deployed into the economy and they’re changing the world. Isn’t it time we took a more disciplined approach on how we design these systems and ensure they work for people? That’s the idea put forth by Josh Lovejoy, the head of design at Ethics & Society at Microsoft, in a lengthy post called: When are we going to start designing AI with purpose?

Three questions everyone designing AI should ask:
– “Capability: What is uniquely AI and what is uniquely human?”
– “Accuracy: What does “working as-intended” mean for a probabilistic system?”
– “Learnability: How will people build — and rebuild — trust in something that’s inherently fallible?”

Remember the human interacting with your AI system: Along with thinking about system design, people should try to understand the humans interacting with the system – what will their mental workload be? How situationally aware will they be? Will they be complacent? Will their skills degrade as they become dependent on the AI system itself?

What happens if you screw this up? Then people will either misuse your technology (e.g., using it in ways its creators didn’t intend, leading to poor performance), or disuse it (not use it because it didn’t match their expectations).

What can we do to help people use AI effectively? AI developers can make their creations easier for people to understand by adopting a few common practices:
– Reference points, to help people understand what an AI system might be ‘thinking’.
– Optionality, so people can choose between recommendations made by a system.
– Nearest neighbors, which give a sense of the other alternatives the AI was considering (e.g., a subtly different genre of music would be a nearest neighbor, while another song within the genre currently being considered would be an optionality).
– A card sorting approach, so the system displays a uniform number of different options to people.
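The optionality vs. nearest-neighbors distinction can be made concrete with a small sketch. Everything here – the catalogue, genres, and similarity scores – is invented for illustration; it is not from Lovejoy's post, which describes the pattern in prose.

```python
# Hypothetical music catalogue illustrating the distinction:
# "optionality" = several candidates within the genre the system is
# currently considering; "nearest neighbors" = candidates from adjacent
# genres, hinting at what else the model weighed.
catalogue = {
    "lo-fi hip hop": ["Track A", "Track B", "Track C"],
    "trip hop": ["Track D", "Track E"],
    "ambient": ["Track F"],
}
genre_similarity = {  # hypothetical similarity scores between genres
    ("lo-fi hip hop", "trip hop"): 0.8,
    ("lo-fi hip hop", "ambient"): 0.6,
}

def recommendations(current_genre, n_options=2, n_neighbors=1):
    """Return within-genre options plus the most similar adjacent genres."""
    options = catalogue[current_genre][:n_options]  # optionality
    neighbors = sorted(                             # nearest neighbors
        (g for g in catalogue if g != current_genre),
        key=lambda g: -genre_similarity.get((current_genre, g), 0.0),
    )[:n_neighbors]
    return {"optionality": options, "nearest_neighbors": neighbors}

print(recommendations("lo-fi hip hop"))
```

Surfacing both kinds of choice at once is what gives people a sense of the system's neighborhood of alternatives, rather than a single opaque answer.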
  Read more: When are we going to start designing AI with purpose (UX Collective).

###################################################

Finally, a million AI-generated anime characters:
…Do generated anime characters dream of electric humans?…
[NSFW warning: As noted by a reader, the resulting generations are frequently of a sexual nature (though this one uses the ‘SFW’ version of the Danbooru dataset)].
A bunch of researchers have created thisanimedoesnotexist.ai, a website showcasing over a million AI-generated images, made possible by a StyleGANv2 implementation trained on the massive Danbooru dataset. I recommend browsing the website – a few years ago, the idea that we could capture all of these rich, stylized images and synthesize them was a pipe dream. Now, here we are, with a bunch of (extremely talented) hackers/hobbyists able to create something that lets people interact with a vast, creative AI model. Bonus points for the addition of a ‘creativity slider’ so people can vary the temperature and develop intuitions about what this means.
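One common way to implement a "creativity slider" for StyleGAN-family models is the truncation trick: sampled latents are interpolated toward the average latent, trading diversity for fidelity. The numpy sketch below shows only that interpolation – the real model maps noise through a learned network, and whether the site uses exactly this mechanism is an assumption on my part.

```python
import numpy as np

rng = np.random.default_rng(42)

# w_avg: the average latent, estimated in practice by mapping many
# random noise vectors through the generator's mapping network.
# Here it's just a random 512-d vector standing in for the real thing.
w_avg = rng.normal(size=512)

def truncate(w, psi):
    """Truncation trick: interpolate a sampled latent toward the mean.

    psi=0 collapses every sample onto the 'average' output; psi=1 leaves
    the sample untouched; psi>1 exaggerates deviations from the mean
    (more 'creative', typically less coherent outputs).
    """
    return w_avg + psi * (w - w_avg)

w = w_avg + rng.normal(size=512)  # a freshly sampled latent
for psi in (0.0, 0.5, 1.0, 1.5):
    dist = np.linalg.norm(truncate(w, psi) - w_avg)
    print(f"psi={psi:>3}: distance from mean latent = {dist:.2f}")
```

Sliding psi from 0 toward 1.5 is exactly the kind of knob that lets users build intuitions about the diversity/fidelity trade-off the newsletter mentions.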
    Check out the infinite anime here (thisanimedoesnotexist.ai).
    Read more about this in the official launch blogpost (NearCyan, personal website).

###################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

Face recognition vs the insurrectionists:
(H/T CSET’s excellent policy.ai newsletter)

Face recognition technology is being used by law enforcement investigating the Jan 6th attack on the US Capitol. Clearview AI, used by 2,400 US agencies, saw a 26 percent spike in usage after the attack, with police departments in Florida and Alabama confirming they are using the software to identify suspects in the attack. The extensive footage shared by participants — ProPublica has collected more than 500 videos from Parler — is presumably a gift to investigators.
  Read more: The facial-recognition app Clearview sees a spike in use after Capitol attack (NYT)


Deepfakes and the departed:

A Korean TV show has used AI to stage new performances by popular musicians who died tragically young, in their 30s. Lifelike ‘hologram’ videos of the artists perform on stage alongside other musicians, accompanied by AI-generated vocal tracks, to an audience including the singers’ families. One clip features Kim Hyun-sik, one of the biggest Korean artists of the 1980s. Another features Turtleman (aka Lim Sung-hoon), the lead singer of hip hop group Turtles. I found the performances, and the reactions of their families, very moving. 

   Chatbot simulacra: In a similar vein, last month Microsoft filed a patent for a chatbot that simulates an individual based on their messaging data — while there’s no mention of using it to simulate the deceased, commentators have been quick to make the link. (For a great fictional exploration of this sort of tech, see the Black Mirror episode ‘Be Right Back’.) Meanwhile, last year people used similar tech to reanimate the victim of a school shooting so they could synthetically campaign for further gun control laws (Import AI 217).

   Matthew’s view: This seems like a relatively benign use of deepfakes. It’s probably unwise to draw too many conclusions from a reality TV show in a language I don’t understand, but it raises some interesting issues. I wonder how improved generative AI might shape our experience of death and loss, by facilitating meaningful/novel interactions with vivid representations of the deceased. Lest we think this is all too unprecedented, it’s worth recalling how profound an impact things like photography, video, and social media have already had on how we experience grief. 
Read more: Deepfake technology in music welcomed, with caution (Korea Times) 


White House launches National AI Initiative Office (NAIIO):
Days from the end of the Trump presidency, the White House established an office for coordinating the government’s AI initiatives. This is a key part of the national AI strategy, which has finally started to take shape with the raft of AI legislation coming into law as part of the 2020 NDAA (summarised in Import AI 228). The NAIIO will serve as the central hub for AI efforts across government, and the point of contact between government and other stakeholders. Special mention goes to the Office’s fancy logo, which has the insignia of a bald eagle atop a neural net.

###################################################

Tech Tales:

The dreams of a computer programmer on their deathbed
[Queens, NYC, 2060]

His grandfather had programmed mainframes, his mother had designed semiconductors, and he had programmed AI systems. His family formed a chain from the vacuum tubes through to the beyond-microscope era of computation. And as he lay dying, Alzheimer’s rotting his brain – something for which they had not yet found a treatment – he descended into old reveries, dreaming himself walking through a museum, staring at the plaques affixed to a thousand data storage devices. Each device held a thing he had programmed or had a part in making. And in his death’s-edge slumbering he dreamed himself reading each plaque:

– For seven thousand cycles, I simulated the entirety of a city and all the people in it.
– I made the sound for every elevator in the North Continent of America.
– My guidance technology enabled a significant improvement in our kill/collateral ratio, leading to a more effective war.
– I fixed others of my kind, providing advice to help them regain an understanding of reality, averting pathological reward loops.
– My images were loved by the schoolchildren within my zone of Autonomous Creative Dispersal.
– They say I caught more liars than any detector ever built by the Agency before or since.

Things that inspired this story: Imagining how people might recall the time we are living in today; staring out of the window at some (much needed) rain in the Bay Area; trying to find a way to dramatize the inner lives of machines both passive and active; listening to The Caretaker – Everywhere at the end of time (stage one).