Prediction: Fueled by AI, data privacy regulation will accelerate
It’s no secret that there’s a laundry list of concerns when it comes to generative AI. Perhaps the most pervasive is how a user’s personal data — and even an organization’s proprietary data — is collected and used to train the large language models (LLMs) that power generative AI platforms. Before implementing AI-driven tools, organizations must establish clear practices for what data they feed into an LLM. It’s also up to governments to ensure the safety of their citizens’ data through effective regulation. But it won’t be easy.
Recently, the Biden Administration issued a milestone Executive Order establishing new AI safety, security, and privacy standards, calling on Congress “to pass bipartisan data privacy legislation to protect all Americans” in response to global concerns.
However, Europe will be the first to enact overarching legislation, following the precedent set by the EU’s General Data Protection Regulation (GDPR), which went into effect in 2018. In fact, legislation specifically governing AI and the use of personal data is already being developed in Europe, prompting some companies to consider not launching certain services there.
Meanwhile, the Australian government is also prioritizing its citizens’ data rights in the wake of the 2022 Optus data breach, which compromised the personally identifiable information (PII) of over a third of the country’s population. In Japan, too, data privacy has become a national concern. Around the world, the urgency to regulate AI and the use of personal data has only intensified, producing a confusing patchwork of proposed laws.