AI-as-a-Service Providers at Risk of Privilege Escalation and Cross-Tenant Attacks


TLDR:

Artificial-intelligence-as-a-service providers such as Hugging Face are at risk of privilege escalation and cross-tenant attacks that could let threat actors access other customers’ models. Shared inference infrastructure and CI/CD pipeline takeover are the two main threats identified. Recommendations include using models only from trusted sources and enabling multi-factor authentication.

Key Elements:

  • AI-as-a-service providers vulnerable to privilege escalation and cross-tenant attacks
  • Risks include shared inference infrastructure and CI/CD pipeline takeover
  • Malicious models pose a major risk to AI systems
  • Research found vulnerabilities in Hugging Face platform
  • Recommendations for users to mitigate risks

New research has uncovered vulnerabilities in AI-as-a-service providers, particularly Hugging Face, that could allow threat actors to escalate privileges and gain access to other customers’ models. The shared inference infrastructure and the CI/CD pipeline were identified as the key areas of risk, enabling attackers to run untrusted models on shared hardware and to mount supply chain attacks. The study recommended using models only from trusted sources and enabling multi-factor authentication to strengthen account security.
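To illustrate why running untrusted models is dangerous, here is a minimal, self-contained sketch of the core problem with pickle-based model checkpoints (the format underlying many PyTorch `.bin` files): deserializing a file is enough to execute attacker-chosen code. The class and function names below are illustrative, not from the research itself, and the payload is a harmless list append standing in for what would be a real command.

```python
import pickle

# Side-effect target so we can observe the payload firing; a real attack
# would invoke something like os.system() instead.
executed = []

def payload():
    # Stand-in for attacker-controlled code run on the victim's machine.
    executed.append("code ran during pickle.loads")

class EvilModelFile:
    """Mimics a malicious model checkpoint: unpickling triggers payload()."""
    def __reduce__(self):
        # pickle calls __reduce__ when serializing; on load, it calls
        # the returned callable with the returned arguments.
        return (payload, ())

blob = pickle.dumps(EvilModelFile())   # what an attacker uploads as a "model"
pickle.loads(blob)                     # merely *loading* it runs payload()
assert executed == ["code ran during pickle.loads"]
```

This is why the recommendation is to avoid pickle files in production and prefer inert serialization formats (such as safetensors) for model weights.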

The research also highlighted the risks associated with generative AI models such as OpenAI’s ChatGPT and Google’s Gemini, which could be abused to distribute malicious code packages to unsuspecting developers. Suggested mitigations include enabling IMDSv2 with a restrictive hop limit and avoiding pickle files in production environments. Overall, the findings underscore the importance of exercising caution when using third-party AI models and of implementing robust security measures around them.
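The IMDSv2 recommendation maps to a standard AWS EC2 setting: require session tokens for metadata requests and keep the token hop limit at 1 so containers on the host cannot reach the instance’s credentials. A hedged sketch using the AWS CLI (the instance ID is a placeholder):

```shell
# Require IMDSv2 tokens and limit token responses to one network hop,
# so a compromised container cannot steal the host instance's role credentials.
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-put-response-hop-limit 1
```

With `--http-tokens required`, legacy IMDSv1 requests are rejected outright; the hop limit of 1 ensures the PUT response carrying the session token never crosses a container network boundary.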