This paper explores the problems the international community will face in cooperating to reduce the risk of both the unintended consequences and the malicious use of artificial intelligence (AI), and asks which governance system might best overcome those problems. To ground the analysis, I examine current political issues in cybersecurity/information security through a case study of the fifth United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security. I then review relevant literature on emerging-technology governance and on governance systems used for cyberspace, and compare those issues against the strengths and weaknesses of each system to determine which might be best suited to AI. I argue that a soft-law-based 'Governance Coordination Committee' (GCC) operating within a polycentric ecosystem of organizations may be the most promising strategy for reducing AI-related risk: it would be the most flexible arrangement and could help create norms and set precedents that lead to further agreements.
This thesis was the capstone project for the School of Public and Environmental Affairs' Certificate in Applied Research and Inquiry at Indiana University. Writing and presenting it helped me secure my current role in operations and administration at UC Berkeley's Center for Human-Compatible AI.