In the dynamic landscape of digital transformation, generative AI (GenAI) is set to change how businesses operate and interact with customers. The rise of GenAI has set off a race for competitive advantage, and organizations are exploring different ways to use it.
Privacera, an AI and data security governance company founded by the creators of Apache Ranger, released the findings of its State of AI and Data Security Governance report, which sheds light on the growing interest in GenAI and the associated concerns about data security and governance. The report is based on a survey of 250 US-based Heads of AI, Chief Information Officers (CIOs), Chief Data Officers (CDOs), and Chief Information Security Officers (CISOs).
The findings of the Privacera report show that an overwhelming majority of business leaders (96 percent) have either implemented GenAI for their businesses or are exploring ways to implement it. The report also shows that organizations are investing substantially in GenAI's transformative potential, with nearly half (48 percent) of organizations planning to invest up to $1 million in GenAI over the next two years. While there is enthusiasm about GenAI, there are also some concerns.
Data leakage and breaches are a top concern of business leaders. Almost half (49 percent) of respondents shared that they had concerns about potential vulnerabilities in GenAI usage. Other major concerns included the potential for abuse or data bias (39 percent) and the potential erosion of customer trust (37 percent).
The 2023 State of Unstructured Data Management Survey by Komprise also highlighted similar concerns of business leaders about the data governance risks of AI, including privacy, security, and the lack of data source transparency.
Two-thirds of the respondents (66 percent) in the Privacera report shared that they plan to implement a data security and governance strategy to mitigate the risks of using AI models. The report shows a high preference (57 percent) for using a dedicated data security platform.
The findings reveal a disparity between the share of leaders who consider a consistent, automated approach to data security important (98 percent) and those who intend to use different security tools for individual use cases (64 percent). Because using a variety of AI models poses its own risks, this gap points to the need for a unified data security framework.
“With the emergence of generative AI and public and private Large Language Models (LLMs), organizations are looking for strategies to deploy and apply universal data security and governance as part of the end-to-end lifecycle for modern AI applications,” said Piet Loubser, SVP of Marketing at Privacera.
Loubser also shared that “These broader security considerations must include the secure and compliant use of data for training and fine-tuning AI models in a consistent manner. While businesses of all sizes prioritize security, simply piecing together tools and point solutions for specific use cases will not suffice. Data-driven organizations need a comprehensive, unified data security platform to safeguard a wide range of use cases and data applications effectively and at scale.”
The Privacera report also shares some best practices for businesses adopting GenAI. The top recommendations include investing in employee training, establishing comprehensive security policies, and utilizing unified data security platforms.
According to Privacera, organizations that want to secure sensitive data should implement real-time controls. In addition, Privacera recommends centralizing comprehensive auditing for proactive identification and enforcement of security measures.