Data privacy concerns swirl around generative AI adoption
Technology professionals say data privacy tops their list of ethical worries surrounding the deployment of generative AI in the enterprise, according to a Deloitte report that surveyed 1,848 business and technology professionals.
Nearly three-quarters of professionals ranked data privacy among their top three ethical concerns about the technology's use, according to the report. Two in five respondents flagged data privacy as their No. 1 concern this year, almost double the one in four who cited it in Deloitte's 2023 survey.
Tech leaders are poring over their organizations' infrastructure and talent needs as they guide generative AI adoption. Ethical concerns belong on that checklist as well.
"GenAI collapses the ‘expertise barrier’: more people can get more out of data, with less technical knowledge needed," said Sachin Kulkarni, managing director, risk and brand protection, Deloitte LLP, in the report. "While a benefit, the potential for data leakage may also increase as a result."
Professionals also cited ethical concerns around transparency, data provenance, intellectual property ownership and hallucinations. Job displacement, though often discussed as a top risk, was flagged by just 16% of respondents.
Across emerging technology categories, business and IT professionals identified cognitive technologies — a category that includes large language models, machine learning, neural networks and generative AI, among others — as posing the most severe ethical risks.
The category surpassed digital reality, autonomous vehicles and robotics, among other technology verticals. However, respondents also ranked cognitive technologies as the most likely to drive social good.
Because generative AI tools rely on data, the majority of executives worry the technology can heighten cybersecurity risks by expanding their organizations' attack surface, a Flexential survey published earlier this month found.