TLDR:
- The Bletchley Park AI summit in 2023 led to discussions on AI regulation.
- The Seoul summit focused on the creation of a global network of AI safety institutes.
Trying to tame AI: Seoul summit flags hurdles to regulation
The recent Seoul summit highlighted the challenges of regulating artificial intelligence (AI). The UK and South Korea are working to deliver AI regulation, centred on building a global network of AI safety institutes. These institutes, inspired by the “Bletchley effect,” aim to share information about AI models, harms, and safety incidents across participating countries.
The safety institutes have made some progress in testing and monitoring AI models. However, their powers are currently limited to observation and reporting, which may leave them unable to intervene effectively when AI causes harm. Even so, observing AI systems and publicly reporting their flaws can prompt companies to take significant corrective action.
Another key point of discussion at the summit was whether regulation should target AI systems themselves or only the applications built on them. While some argue for regulating the underlying systems, others, such as former Google Brain head Andrew Ng, suggest targeting the applications rather than the models beneath them.
The summit also raised concerns about the existential risk posed by superintelligent AI systems, which some fear could threaten civilisation. MIT professor Max Tegmark emphasised the importance of weighing the risks associated with powerful AI systems. However, broadening the regulatory agenda to cover an ever wider range of AI risks may dilute the overall efficacy of those efforts.