AI Safety and Corporate Power – remarks given at the United Nations Security Council

by Jack Clark

On July 18th, I gave a short speech at the United Nations Security Council meeting “Artificial intelligence: opportunities and risks for international peace and security – Security Council, 9381st meeting”. It was a huge honor to be invited to speak, and also a reflection of how AI has gone from a backwater ‘sounds interesting, get back to us if anything important happens’ policy area to a topic of concern for the major powers of the world.

In writing and giving these remarks, I hope to impress upon people the two central issues of AI in the 21st century as I see them: 

i – The inherent safety challenges of the technology

ii – The significant political power which AI development creates and how this leverage is currently accruing to private sector actors. 

Lots of people at the nexus of AI policy and AI safety seem to prioritize safety above issues of power concentration. In my mind, these are linked. My basic view is that even if you fully ‘solved’ safety but didn’t ‘solve’ the problem of power centralization in AI development, you’d suffer such societal instability that the fact you solved safety might not mean much. 

I will be writing more about this in a while, as I recognize it’s a position that some people find controversial. 

As with much of my writing, I share my remarks and the above thoughts here in the spirit of inspiring open debate and discussion. Note that my remarks differ slightly (a roughly 20-word delta) from what I said in the speech – as with all speeches, there was some live ad-libbing and word tweaking, though no real changes of substance. 

REMARKS:
United Nations Security Council speech

I come here today to offer a brief overview of why AI has become a subject of concern for the world’s nations, what the next few years hold for the development of the technology, and some humble ideas for how policymakers may choose to respond to this historic opportunity. 

The main takeaway from my remarks should be: we cannot leave the development of artificial intelligence solely to private sector actors. The governments of the world must come together, develop state capacity, and make the development of powerful AI systems a shared endeavor across all parts of society, rather than one dictated solely by a small number of firms competing with one another in the marketplace. 

So, why am I making such a statement? It helps to have a sense of recent history. One decade ago, a company in England called DeepMind published research that showed how to teach an AI system to play old computer games, like Pong and Space Invaders. Fast forward to 2023, and the same techniques used in that research are now being used to create AI systems that can do things as varied as: act as autonomous fighter pilots that beat humans in military simulations, stabilize the plasma in fusion reactors, and even design the layout of next-generation semiconductors.

Similar trends have played out in computer vision, where a decade ago scientists were able to create basic image classifiers and generate very crude, pixelated images. Today, image classification is used across the world to inspect goods on production lines, analyze satellite imagery, and improve state security. 

And the AI models which are drawing attention today, like OpenAI’s ChatGPT, Google’s Bard, and my own company Anthropic’s Claude, are themselves also developed by corporate interests. 

And each year brings new, even more powerful systems. 

We can expect these trends to continue – across the world, private sector actors are the ones with the sophisticated computers, large pools of data, and capital resources needed to build these systems, and therefore they seem likely to continue to define how these systems are developed. 

While this will bring huge benefits to people across the world, it also poses potential threats to peace, security, and global stability. These threats stem from two qualities of AI systems – their potential for misuse and their unpredictability – as well as the fragility that comes from their being developed by such a narrow set of actors. 

On misuse, these AI systems have an increasingly broad set of capabilities, and some beneficial capabilities sit alongside ones that enable profound misuse – for example, an AI system that can help us understand the science of biology may also be one that can be used to construct biological weapons.

On unpredictability, in a fundamental sense, we do not fully understand these systems – it is as though we are building engines without understanding the science of combustion. This means that once AI systems are deployed, people can identify new uses for them unanticipated by their developers. Many of these will be positive, but some could be misuses like those mentioned above. Even more challenging is the problem of chaotic behavior – an AI system may, once deployed, exhibit subtle problems or tendencies which were not identified in a lab setting, and which could pose risks. 

Therefore, we should think very carefully about how to make the developers of these systems accountable, so that they build and deploy safe and reliable systems, which do not compromise global security. 

To dramatize this issue, I think it’s helpful to use an analogy: I would challenge those listening to this speech not to think of AI as a specific technology, but instead as a type of human labor – one that can be bought and sold at the speed of a computer, and one which is getting cheaper and more capable over time. And, as I have just described, this is a form of labor that is being developed by one narrow class of actors – companies. We should be clear-eyed about the immense political leverage this affords – if you can create a substitute or augmentation for human labor and sell it into the world, you are going to become more influential over time.

Many of the challenges of AI policy seem simpler to think about when framed this way – how should the nations of the world react to the fact that anyone with enough money and data can now easily create an ‘artificial expert’ for a given domain? Who should have access to this power? How should governments regulate this power? Who should be the actors able to create and sell this so-called human labor? And what kinds of experts should we allow to be created? These are huge questions. 

Based on my experiences, I think the most useful thing we can work on is developing ways to test the capabilities, misuses, and potential safety flaws of these systems. If we’re creating and distributing new types of ‘workers’ which will go into the global economy, then it seems we would want to be able to accurately characterize them and evaluate their capabilities and failings. After all, humans go through rigorous evaluation and on-the-job testing for many critical roles, from the emergency services to the military. Why not the same for AI?

For that reason, it has been encouraging to see many countries emphasize the importance of safety testing and evaluation in their various AI policy proposals, ranging from the European Union’s AI framework, to China’s recently announced generative AI rules, to the United States’ National Institute of Standards and Technology’s ‘Risk Management Framework’, to the United Kingdom’s upcoming summit on AI and AI safety. 

Since all of these different AI policy proposals and events rely in some form on testing and evaluation, it would make sense for the governments of the world to find ways to invest in better ways of testing and evaluating AI systems. Right now, there aren’t standards or even best practices for testing frontier AI systems for things like discrimination, misuse, or safety. And because there aren’t best practices, it’s hard for governments to craft policies that create more accountability for the private sector actors developing these systems, and correspondingly those actors enjoy an information advantage when dealing with governments. 

In closing, any sensible approach to regulation will start with having the ability to evaluate an AI system for a given capability or flaw, and any failed approach to regulation will start with grand policy ideas that are not supported by effective measurements and evaluations. It is through the development of robust and reliable evaluation systems that governments can keep companies accountable, and companies can earn the trust of the world into which they want to deploy their AI systems. 

If we do not invest in this, then we run the risk of regulatory capture, compromising global security, and handing over the future to a narrow set of private sector actors. If we can rise to this challenge, however, I believe we can reap the benefits of AI as a global community, and ensure there is a balance of power between the developers of AI and the citizens of the world. Thank you.