Import AI 206: 450k geographically diverse videos; a beetle-mounted camera; plus, 11,000 images of roadside attractions

Could this one-armed mobile robot be the next big thing?
…Rise of the machines (again)…
A seasoned team of roboticists hailing from Google and Georgia Tech has developed Stretch, a (relatively) low-cost mobile robot.

Why Stretch? Stretch has a few features that make it potentially interesting to researchers – it sits on a stable, simple wheeled base, has one arm with a reasonably robust (and simple!) gripper, onboard cameras, and has been built for easy modification (e.g., swapping out different manipulators or other items onto its arm). The robot costs $17,950, with discounts available if you buy more (six costs you around $100k).

Why this matters: Most AI researchers think robots are going to be an important part of the future, but most AI researchers also acknowledge that robots are insanely difficult to get right, and that they’re unforgiving about the deficiencies of today’s approaches. Stretch is part of a new wave of robots that, unlike prior generations (e.g., the PR2), have been built out of lower cost components with simpler bodies. (Another good example of this is Berkeley’s low-cost $5k ‘BLUE’ robot arm, covered in Import AI 142).
Read more about the technical details of the robot here (Hello Robot, official website).
Check out the videos (Hello Robot, YouTube).
Get bits and pieces of robot code here (Hello Robot, GitHub).

####################################################

Google’s Pixelopolis shows how good local-AI-computation has got:
…Google makes a mini-self-driving car toytown…
Google has built Pixelopolis, a self-driving car demo that uses the company’s Pixel phones, an efficient version of TensorFlow called TensorFlow Lite, and numerous brightly colored wooden blocks, to create a tabletop self-driving car experiment. To train its self-driving cars, Google built a simulation of the miniature town in Unity, then used data-augmentation techniques to create a bunch of data, which was used to train the miniature self-driving “cars”.
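
The write-up doesn’t spell out Google’s exact augmentation pipeline, but the general idea – multiplying a limited set of simulated frames into many varied training examples – can be sketched with two of the simplest transforms (brightness jitter and horizontal flips). Everything here, including the `augment` function and its parameters, is an illustrative assumption, not Google’s code:

```python
import random

def augment(image, brightness_jitter=0.2, flip_prob=0.5):
    """Return a randomly augmented copy of a grayscale image.

    `image` is a list of rows of pixel intensities in [0, 1].
    Brightness jitter and horizontal flips are two simple transforms
    commonly used to multiply simulated training data.
    """
    # Scale all intensities by a random factor, clamped back to [0, 1].
    scale = 1.0 + random.uniform(-brightness_jitter, brightness_jitter)
    out = [[min(1.0, max(0.0, px * scale)) for px in row] for row in image]
    # Randomly mirror the frame left-to-right.
    if random.random() < flip_prob:
        out = [row[::-1] for row in out]
    return out

# One simulated frame can yield many distinct training examples.
frame = [[0.1, 0.5, 0.9], [0.2, 0.6, 1.0]]
augmented = [augment(frame) for _ in range(8)]
```

In a real pipeline you’d also vary geometry (crops, rotations, perspective warps) so the model doesn’t overfit to the exact camera pose used in simulation.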

Why this matters: Pixelopolis was originally meant to be a marketing demo that you could see at events like Google I/O – then the Coronavirus happened. Now, Google’s writeup of the demo serves as an interesting outline of what can be accomplished using cheap or open source tools in 2019 and 2020 when it comes to AI. I think if you got in a time machine and went to 2010, people would be fairly surprised that we’d developed a) decent object detection systems via neural networks and b) that these systems could run on smartphones, providing local computation to miniature robots. (Some people might have been disappointed by you, though, as there was a faction in Silicon Valley in the early 2010s that seemed to think self-driving cars were single-digit years away.)
  Read more: Sharing Pixelopolis, a self-driving car demo from Google I/O built with TF-Lite (TensorFlow Blog).

####################################################

Your tax dollars at work: 11,000 images of US roadside attractions:
…Furry Dice! Pink flamingos! Roadside dinosaurs! And so much more…
The Library of Congress has released more than 11,000 images of roadside attractions in the US – this is a potentially cool dataset for AI tinkerers (just imagine the StyleGAN possibilities, for instance!). And best of all, these images are in the public domain – “John Margolies made the photographs in the John Margolies Roadside America Photograph Archive. The Library of Congress purchased the intellectual property rights for the photographs with the archive and, therefore, there are no known copyright restrictions on the photographs,” says the Library of Congress. Tinker away!
  Read the rights and restrictions information here (Library of Congress website).
  Access the entire dataset here (Library of Congress website).
  Browse some of the photos here (Library of Congress, Flickr).
  Read more: Download 11,710 Free-to-Use Photos of Roadside Americana (LifeHacker).

####################################################

Facebook is an information empire, so it has built a fibre-laying robot:
…Every empire needs its roads, and in the 21st century that means internet-capacity…
I think the 21st century is going to be determined by “information empires” – organizations, predominantly technology companies, that are able to exert their will on the world through being able to process more information faster than those around them. Every empire worth its salt ultimately needs to build machines that let it extend itself – it’s not by chance that the Romans invested vast amounts of money in building roads, that oil companies invested in oil platforms, or that America invested a huge amount in a globe-spanning set of military bases.

Now, Facebook is getting in on the action with a robot that can deploy high-capacity fibre-optic cables on medium-voltage power lines. Facebook thinks the technology “will allow fiber to effectively and sustainably be deployed within a few hundred meters of much of the world’s population”, and the company will trial the tech later this year.
– Sidenote: This all feels a lot creepier if you imagine that the head of Facebook is Julius Caesar rather than Mark Zuckerberg, and that instead of a robot that builds fibre, we’re talking about a system to bring rapid-troop-transport tech to “a few hundred meters” of most of the world. (Hint: These things are, in the 21st century, functionally the same).
  Read more: Making aerial fiber deployment faster and more efficient (Facebook Engineering).

####################################################

AViD dataset serves up 450k+ diverse videos:
…Want a video dataset that reflects the world, rather than Europe and North America? Try this…
Indiana University and Stony Brook University researchers have built AViD, a dataset of 476k videos ranging between 3 and 15 seconds long, containing anonymized people (blurred faces) performing 887 actions ranging from “medical procedures” to “gokarting”. The videos were gathered from platforms like Flickr and Instagram, they write.

Geographic diversity: AViD is a far more geographically diverse dataset than others – just 32.5% of its videos come from North America (based on geotagged data), compared to around 90% for other major datasets like Kinetics. The disparity is even more pronounced in other regions – 20.5% of AViD’s videos come from Asia, versus around 2 percent for others.

Why do we even need this? Today, most of the datasets used by AI researchers have some inherent issues of representation and bias – namely, they usually contain data selected according to some criteria that aren’t representative of the broad set of contexts they’ll be deployed in. For instance, a study by Facebook researchers found that image recognition services work well for products from well-off countries and badly for products found in poor countries (Import AI: 150). Datasets like AViD aim to be more geographically representative in terms of their data, which may lead to better broad performance.
  Read more: AViD Dataset: Anonymized Videos from Diverse Countries (arXiv).
  Get the data: AViD Dataset (GitHub).

####################################################

You’ve heard of Big Data. What about Beetle Data?
…University of Washington researchers make an insect-sized camera…
Data is one of the input fuels of AI development, along with computation and wetware (organic human brains). University of Washington researchers have developed a prototype miniature camera that weighs 250 milligrams and can be stuck on the back of a common beetle, letting them capture a beetle’s POV on the world. They use an accelerometer so the camera only records data when the beetle is moving, which means the onboard battery lasts around 6 hours. Data gets streamed to a nearby smartphone.
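
The accelerometer gating is the key power trick: the camera sleeps until motion is detected. The paper’s actual firmware isn’t described here, but the logic can be sketched as a variance threshold over a recent window of accelerometer samples – `should_record`, the threshold, and the sample values are all illustrative assumptions:

```python
def should_record(accel_samples, threshold=0.05):
    """Gate the camera on motion: record only when the variance of
    recent accelerometer readings exceeds a threshold, so the camera
    sleeps (and saves battery) while the beetle is at rest."""
    mean = sum(accel_samples) / len(accel_samples)
    variance = sum((a - mean) ** 2 for a in accel_samples) / len(accel_samples)
    return variance > threshold

still = [0.01, 0.02, 0.01, 0.02]   # beetle at rest: tiny variance
walking = [0.1, 0.9, -0.4, 0.7]    # beetle on the move: large swings
print(should_record(still))    # False
print(should_record(walking))  # True
```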

Why this matters: Tiny, miniature cameras are one of the dividends of the smartphone revolution; research like this from the University of Washington shows how to take advantage of many of the innovations that have occurred in sensors in the past years and use them to push science forward – if systems like this become more widely used and cheaper, we’ll eventually get a whole new stream of data that can be used to develop AI applications. What might images from an insect-POV GAN look like, for instance? Let’s find out.
  Read more: A GoPro for beetles: Researchers create a robotic camera backpack for insects (UW News).

####################################################

AI goes to design school:
…15 million CAD sketches…
Obscure datasets are, much like obscure tools, part of the fabric of modern AI development. In recent years, as researchers have built systems that can work on general domains, like large image datasets and text datasets, they’ve begun to deploy them into more specific domains, like ones that require training on specific medical imagery datasets, or scientific paper repositories. Now, researchers with Princeton University and Columbia University have developed a dataset called SketchGraphs, which consists of millions of computer-aided design drawings made via the website Onshape. This will help us develop machines that can assist designers and, eventually, become inventive CAD agents in their own right.

Why SketchGraphs is more than just CAD: One reason why SketchGraphs is interesting is that it might serve as an input for the development of more advanced research systems, as well as applied ones. “The SketchGraphs dataset may be used to train models directly for various target applications aiding the design workflow, including conditional completion (autocompleting partially specified geometry) and automatically applying natural constraints reflecting likely design intent (autoconstrain). In addition, by providing a set of rendering functions for sketches, we aim to enable work on CAD inference from images.”
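
The “relational geometry” the dataset captures is essentially a graph: geometric primitives as nodes, constraints between them as edges, which is what makes tasks like autoconstrain (predicting missing edges) well-posed learning problems. A minimal sketch of that representation – the `Sketch` class, primitive kinds, and constraint names below are hypothetical illustrations, not the dataset’s actual schema:

```python
class Sketch:
    """A parametric CAD sketch as a graph: primitives are nodes,
    constraints are labeled edges between pairs of primitives."""

    def __init__(self):
        self.primitives = {}   # id -> (kind, parameters)
        self.constraints = []  # (kind, id_a, id_b)

    def add_primitive(self, pid, kind, **params):
        self.primitives[pid] = (kind, params)

    def constrain(self, kind, a, b):
        self.constraints.append((kind, a, b))

# Two line segments forming a corner, plus the constraints a
# designer would typically impose on them.
sketch = Sketch()
sketch.add_primitive("l1", "line", x0=0, y0=0, x1=1, y1=0)
sketch.add_primitive("l2", "line", x0=1, y0=0, x1=1, y1=1)
sketch.constrain("coincident", "l1", "l2")     # endpoints touch
sketch.constrain("perpendicular", "l1", "l2")  # right-angle corner
```

An autoconstrain model would take the primitives of such a graph and predict the constraint edges; conditional completion would take a partial graph and predict the missing primitives.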

Why this matters: All around us, little corners of the digital world are being changed by the arrival of AI systems. Right now, most of the world’s deployed AI systems are performing fairly passive classification and optimization operations. Currently, we’re moving into a world where these AI systems take on more active roles via things like recommendation systems. Datasets like SketchGraphs gesture to the world of the future – one where our AI systems not only classify and optimize, but also create ideas in their own right, which are then passed to humans for review. In the future, every profession will get to go through a Centaur era (though hopefully for longer than 15 minutes!).
  Read more: SketchGraphs: A Large-Scale Dataset for Modeling Relational Geometry in Computer-Aided Design (arXiv).
  Get the SketchGraphs dataset from here (PrincetonLIPS GitHub repo).

####################################################

AI Policy with Matthew van der Merwe:
…Matthew van der Merwe brings you views on AI and AI policy; I (lightly) edit them…

Understanding AI benefits
As AI systems become more powerful and widely-deployed, they will have increasingly large impacts on the world. Lots of work in AI safety and policy is focused on reducing the risk of significant harms, but we also want AI to produce widely-distributed benefits. The two leading labs, OpenAI and DeepMind, are both publicly committed to ensuring that AI is beneficial for humanity, and many of the AI ethics statements produced by governments, supranationals, and firms include a similar pledge. These blog posts from Cullen O’Keefe (OpenAI) try to provide some clarity on AI benefits.

Market failures: Markets tend to undersupply certain things: non-rivalrous goods; goods benefitting the worst-off; and goods that are systematically undervalued by consumers. Firms are better incentivized by the market to create concentrated benefits (e.g. making a cool gadget) than to reduce the risk of widely-distributed harms (e.g. by reducing CO2 emissions). Market-shaping mechanisms like regulation, taxes, and subsidies are good tools for addressing many of these market failures. We can think of AI benefits as things that markets are unlikely to produce.

AI benefactors: An actor committed to using AI to promote widespread social welfare faces a number of tricky decisions. They must decide on their level of risk-tolerance; whether to invest their resources now or later; whether to allocate goods at a global scale, or more locally; how to balance the explore-exploit tradeoff when searching the space of altruistic opportunities.

Matthew’s view: This is a nice series of posts, and I’m pleased to see more work being done on this topic. In some respects, the notion of AI benefits is only indirectly related to AI. The underlying question is how a very well-resourced altruist, with an impartial concern for all people, can best use their resources to do good. The possibility of rapid progress in AI makes figuring this out more urgent — e.g. if either DeepMind or OpenAI is successful in building AGI in the coming decades, and remains committed to doing good, it would be among the most well-resourced altruists ever to have faced the decision of how to improve the world.
  Read More: AI Benefits series (Cullen O’Keefe).

####################################################

Tech tales

Mirror Shibboleth
[2800AD]

It was said that:
– If you held it, it could tell you truths about whatever you pointed it towards.
– If you looked into it you could drive yourself mad, but you could also become wise.
– You could describe to it the finer details of any conflict you faced and it might offer advice that could turn the tide.
– It had been forged using rare equipment operated by a group of robed artisans.

The device worked like a fractured mirror – you showed it things and it showed you something in return. The things it showed you held the light of reality but were somewhat warped. Sometimes the truths it contained were misleading. Sometimes they were very powerful.

People competed with one another for a time to build different versions of these devices. Great machines were constructed to forge the devices. Then people experimented with them, learning to use their various capabilities.

* * *

Many years later, some visitors came to a great stone building, embedded in the side of a mountain. They made their way inside it, treading carefully over skeletons and cobwebs. A few hours later, they had managed to resurrect one of the ancient computers. Carefully, they transferred what they could, then they returned to their flyer and left the obscure corner of the earth.

Back at their ship, the visitors spent some days tinkering with the system, until they could create the plumbing necessary to get it to display things that they could find intelligible. Then they went to work, seeking to understand a distant civilization by looking at the outputs of a machine that had recycled culture and learned to generate it and recombine it. They would subsequently communicate about their findings, creating a narrative about the outputs of a story-generating device, developed by a culture that was unknowable to them, except through its own reimaginings of itself.

Things that inspired this story: Generative models; the relationship between cultural artefacts and the time they were developed in; funhouse mirrors; prediction engines, defined broadly; anthropology; archaeology.