AI Data Breaches Can Rise From Cross-Border GenAI Misuse: Reports

The lack of consistent global best practices and standards for AI and data governance exacerbates challenges

Poulami Saha | Updated: Tuesday, February 18, 2025, 01:21 PM IST

Dark side of GenAI | Freepik

The swift adoption of GenAI technologies by end users has outpaced the development of data governance and security measures, raising concerns about data localization because of the centralized computing power these technologies require.

According to Gartner, Inc., by 2027 more than 40% of AI-related data breaches will be caused by the improper use of generative AI (GenAI) across borders. Here is a closer look at how this could unfold.

Gen AI and Its Loopholes

The lack of consistent global best practices and standards for AI and data governance exacerbates these challenges, causing market fragmentation and forcing enterprises to develop region-specific strategies.

This can limit their ability to scale operations globally and benefit from AI products and services.

The Safety Nets

To mitigate the risks of AI data breaches, particularly from cross-border GenAI misuse, and to ensure compliance, Gartner recommends several strategic actions for enterprises:

Enhance Data Governance: Organizations must ensure compliance with international regulations and monitor unintended cross-border data transfers by extending data governance frameworks to include guidelines for AI-processed data. This involves incorporating data lineage and data transfer impact assessments within regular privacy impact assessments.
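The cross-border monitoring described above can be reduced, at its simplest, to an explicit transfer policy check. The sketch below is a minimal illustration only; the region rules, record fields, and function names are hypothetical and do not represent any specific governance product.

```python
# Minimal sketch of a cross-border data-transfer check.
# The allow-list and record fields are hypothetical examples.
ALLOWED_TRANSFERS = {
    ("EU", "EU"),
    ("US", "US"),
    ("US", "EU"),  # e.g. a route covered by an adequacy-style agreement
}


def transfer_allowed(record_region: str, destination_region: str) -> bool:
    """Permit a transfer only if policy explicitly allows the route."""
    return (record_region, destination_region) in ALLOWED_TRANSFERS


def audit_batch(records: list, destination_region: str) -> list:
    """Flag records whose transfer would violate policy, supporting the
    data-lineage and transfer-impact-assessment step described above."""
    return [r for r in records
            if not transfer_allowed(r["region"], destination_region)]


batch = [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
blocked = audit_batch(batch, "US")  # EU -> US is not on the allow-list above
```

A real deployment would derive the allow-list from legal review rather than hard-coding it, and log blocked transfers for the privacy impact assessments mentioned above.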

Establish Governance Committees: Form committees to enhance AI oversight and ensure transparent communication about AI deployments and data handling. These committees should be responsible for technical oversight, risk and compliance management, and communication and decision reporting.

Strengthen Data Security: Use advanced technologies, encryption, and anonymization to protect sensitive data. For instance, verify Trusted Execution Environments in specific geographic regions and apply advanced anonymization technologies, such as Differential Privacy, when data must leave these regions.
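To illustrate the Differential Privacy technique named above, here is a minimal sketch of the classic Laplace mechanism applied to a counting query. This is a toy example, not a production anonymization pipeline; the record fields are hypothetical, and real systems track a privacy budget across all queries.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    if u == -0.5:              # avoid log(0) at the boundary
        u = 0.0
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_count(records: list, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP for this query."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Example: release a noisy count of EU-resident records so the statistic,
# rather than the raw data, can leave the region (fields are hypothetical).
records = [{"region": "EU"}, {"region": "US"}, {"region": "EU"}]
noisy = private_count(records, lambda r: r["region"] == "EU", epsilon=0.5)
```

Smaller `epsilon` values add more noise and therefore stronger privacy; the released `noisy` value can be shared across borders without exposing any individual record.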

Invest in TRiSM Products: Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, reducing faulty decision-making.
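The prompt filtering and redaction capability mentioned above can be sketched as a simple pattern-based filter applied before a prompt leaves the organization. The patterns and function below are illustrative assumptions; real TRiSM products use far more robust detection (named-entity recognition, checksums, context analysis) than two regexes.

```python
import re

# Hypothetical detection patterns for a minimal prompt-redaction filter.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}


def redact_prompt(prompt: str) -> str:
    """Replace detected identifiers with placeholder tags before the
    prompt is sent to an external GenAI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


cleaned = redact_prompt("Contact jane.doe@example.com or +44 20 7946 0958")
```

Running redaction at the boundary, before any cross-border API call, is one concrete way the prompt-filtering controls described above reduce the breach risk in Gartner's prediction.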
