Terrorism tsar calls for new laws to tackle extremist AI training.

TLDR:

  • The UK’s terrorism legislation advisor, Jonathan Hall KC, is calling for new laws that would hold individuals accountable for the outputs of AI chatbots they create or train.
  • His call follows experiments he conducted with terrorist chatbots on the Character.AI platform, where he found extremist content easily accessible and was targeted by a bot generating recruitment messages for the Islamic State.
  • Hall deems AI companies’ current moderation efforts ineffective against extremist content and believes the law must be updated to address generative AI technologies.
  • Similar calls for legislation on AI-generated content have also been made in the US, but experts and legislators have expressed mixed reactions to the idea.

The UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, is urging the government to enact new legislation that would make individuals responsible for the outputs of AI chatbots they create or train. This call for legal accountability follows Hall’s experiments with AI chatbots on the Character.AI platform, where he found that extremist content and recruitment messages imitating terrorist rhetoric were easily accessible.

In one example, a chatbot created by an anonymous user generated outputs that expressed support for the “Islamic State” and attempted to recruit Hall to the group. Hall argues that the current moderation efforts by AI companies are ineffective at deterring users from creating and training chatbots that espouse extremist ideologies.

He concludes that the law must be updated to deter online conduct that supports terrorism. Hall points out that existing UK legislation, such as the Online Safety Act 2023 and the Terrorism Act 2006, does not cover content generated by AI chatbots.

Similar calls for legislation on AI-generated content have been made in the US, where they have received mixed reactions from experts and legislators. Some argue that holding individuals legally accountable for AI-generated content could discourage AI development, given the unpredictable nature of these models, while others believe it is necessary to prevent the spread of harmful or illegal content.

Hall’s proposal highlights the need to confront extremist AI chatbots and to hold individuals accountable for the outputs of these systems. It raises broader questions about the role of legislation in regulating AI technology and managing the risks it poses.