Governments seem keen on AI surveillance running amok

Key points:

  • The EU’s Artificial Intelligence Act has come under criticism for loopholes that permit the use of AI-powered surveillance technologies for law enforcement.
  • Despite the Act’s focus on transparency and consumer protection, critics worry about the inclusion of “narrow exceptions” that allow the use of biometric identification technologies in public spaces for law enforcement purposes.
  • The Act also stops short of banning the export of certain “high-risk” AI technologies, leaving companies within the EU free to sell these surveillance tools to countries outside the bloc.

The European Union’s proposed Artificial Intelligence Act, which made significant progress in legislative proceedings this week, has received mixed reactions. Although it promises to make Artificial Intelligence (AI) more transparent and user-friendly in EU countries, there are concerns that some provisions in the law could facilitate intrusive police surveillance in public spaces.

The Act bars the use of “high-risk” AI systems such as emotion recognition and social scoring tools. However, it also includes provisions that would permit the use of biometric identification systems in publicly accessible spaces for law enforcement, subject to prior judicial authorization.

Damini Satija, Head of the Algorithmic Accountability Lab with Amnesty International, expressed concern over these exceptions. She warned that law enforcement use of these technologies could expand over time and pave the way for extensive surveillance systems. She also criticized the Act for not banning the export of “high-risk” AI technologies to nations outside the EU, which leaves EU-based companies free to sell controversial surveillance products to other countries.

Mher Hakobyan, Amnesty’s Advocacy Adviser on AI, claimed that the Act’s provisions demonstrate a double standard: while the EU presents itself as a global leader in promoting ‘secure, trustworthy and ethical Artificial Intelligence’, it refuses to stop EU companies from selling rights-violating AI systems to the rest of the world.

Christoph Schmon, International Policy Director for the Electronic Frontier Foundation, also raised concerns about the Act’s approach to law enforcement technologies. “The law enforcement exceptions in the AI deal seem to make the ban of face recognition in public and the restrictions on predictive policing look like Swiss cheese,” he said.

The AI Act’s policy details are still being refined, and the final text of the bill is expected in January.