This article discusses four ways to address the risks of generative Artificial Intelligence (AI), including ethical concerns, security issues, and hallucinations. Four business leaders share their insights on mitigating risks and harnessing the benefits of this emerging technology.
- Exploit new opportunities in an ethical manner: Birgitte Aga, head of innovation and research at the Munch Museum, emphasizes the importance of ethical considerations when using AI. She encourages involving employees in discussions about AI use and weighing biases, stereotyping, and technical limitations.
- Build a task force to mitigate risks: Avivah Litan, VP analyst at Gartner, suggests building an AI task force that involves experts from across the business and addresses privacy, security, and risk. This helps ensure everyone is on the same page, mitigates risks, and aligns expectations about model performance.
- Restrain your models to reduce hallucinations: Thierry Martin, senior manager for data and analytics strategy at Toyota Motors Europe, points out the problem of hallucinations in generative AI and emphasizes the need for stable Large Language Models (LLMs). He suggests creating models tied specifically to provided data to reduce hallucinations and improve model performance (see the sketch after this list).
- Progress slowly to temper expectations: Bev White, CEO of Nash Squared, advises caution in deploying generative AI and tempering the hype around the technology. She suggests proceeding slowly, taking a step back when necessary, and ensuring regulatory measures keep pace with the technology.
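To illustrate the idea of tying model output to provided data, here is a minimal sketch in Python of one common approach: constraining the prompt to supplied passages and instructing the model to refuse when the answer is not in them. This is an illustration of the general technique, not Toyota's implementation; the `complete` call and the example passages are assumptions for the sketch.

```python
# Minimal sketch: ground an LLM answer in supplied passages only.
# `complete(prompt)` is a hypothetical stand-in for whatever LLM client you use.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that restricts the model to the supplied passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    # Example passages are placeholders, not real company data.
    passages = [
        "Policy A: support requests must be answered within two business days.",
        "Policy B: refunds require a receipt issued in the last 90 days.",
    ]
    prompt = build_grounded_prompt("How quickly must support requests be answered?", passages)
    print(prompt)
    # answer = complete(prompt)  # hypothetical LLM call; swap in your provider's SDK
```

The key design choice is that the model is told it may only use the provided context and must decline otherwise, which is one practical way to reduce hallucinations when the underlying data is authoritative.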
The article also highlights the potential of generative AI and underscores the importance of strategic, responsible implementation to capture the benefits and minimize the pitfalls of this rapidly evolving technology.