In the quest for international cooperation on artificial intelligence (AI), political leaders have turned to models from multilateral bodies such as the International Atomic Energy Agency (IAEA) or the Intergovernmental Panel on Climate Change (IPCC). However, simply creating new agencies will not solve the challenges of AI governance. The author argues that governments should first draft laws that regulate AI and only then establish oversight bodies to enforce them. The author proposes three points on which international consensus on AI regulation is needed: AI must be identifiable, the use of AI-enabled weapons must be limited, and environmental protections must be built into AI regulation. While international initiatives such as the G7 Code of Conduct and the Global Partnership on AI (GPAI) are steps in the right direction, they lack binding agreements and face challenges in converging on global norms. The author concludes that real progress in international AI oversight will require legal agreements, technical tools to monitor compliance, and the leadership of like-minded democracies in shaping AI regulation.