NRI Secure, your AI Red Team, security for generative AI systems.

NRI Secure Technologies, a global provider of cybersecurity services, has launched AI Red Team, a new service to assess security for systems utilising generative AI. The service aims to identify potential vulnerabilities associated with Large Language Models (LLMs), a form of AI technology experiencing growing use across various sectors. AI Red Team will highlight issues such as bias risks, inappropriate content generation and sensitive information disclosure.

  • The service, led by a team of experts from NRI Secure, conducts simulated attacks on real-world systems, evaluating AI-specific vulnerabilities and system-wide problems, including those linked to the AI itself.
  • The assessment also identifies risks in LLM alone as well as analyzes the entire system, including LLM.
  • A report outlining the detected problems and recommended solutions is then produced, helping companies to take the necessary steps in mitigating such security issues.
  • NRI Secure has also developed its own assessment application for automated testing, helping to efficiently detect vulnerabilities.

The AI Red Team service tackles any vulnerabilities linked to the probabilistic nature and the complexities in understanding the internal operations of generative AI. The comprehensive assessment helps in identifying whether the AI-induced vulnerabilities are evident or not, based on the company’s long-accumulated security assessment expertise.

Where AI vulnerabilities are identified, the system evaluates the degree of actual risk from a wider system perspective. It then proposes alternative countermeasures that sidestep addressing vulnerabilities in the AI itself. This approach is expected to help reduce the cost of such countermeasures.

Alongside AI Red Team, NRI Secure is also developing AI Blue Team, a service designed for continuous security measures for generative AI. Regular AI application monitoring is a key feature of this service, which is expected to launch in April 2024. A Proof of Concept (PoC) is planned and NRI Secure is seeking participating companies.

NRI Secure’s dual approach with the AI Red and Blue teams shows their enduring commitment to maintaining safe and secure information systems by delivering products and services to enhance the information security measures of organizations.