AI and Data Privacy: How Regulations Are Shaping the Landscape

Walter Dannemiller - Apr 01 2024
Published in: Technology
This article explores the evolving global landscape of AI regulation, highlights the diverse approaches taken in Europe, North America, and APAC, and emphasizes the importance of proactive engagement with these regulations for businesses operating in the AI space.

This article originally appeared in the Q4 2023 issue of Mobility magazine.

I recently attended a conference on developments in the cybersecurity and data privacy landscape. But in a room full of lawyers, government regulators, and IT practitioners, all anyone could talk about was the staggering rise of artificial intelligence (AI). I suppose it was to be expected; AI is the 800-pound gorilla in the room, a tool that the average person had not heard of a year ago and that individuals and businesses alike are now rapidly adopting.

Early in the conference, an attendee stood and asked one of the panels a broad question: “What do you think of programs like ChatGPT? Are they a fad, or will they become an integral part of business and society moving forward?” As quickly as the question concluded, a well-known legal scholar on the panel quipped, “Stop using it; we’re all gonna die.” Certainly a dramatic response, but one that is striking a chord with government regulators. History shows that regulation almost always follows innovation, and the rapid deployment and adoption of AI tools is no exception. So let’s look at how regions around the world are addressing these concerns.

Europe

The European Union (EU) continues to solidify its position as the global leader in the regulation of consumer data and its interaction with rapidly changing technology. From the renowned General Data Protection Regulation (GDPR), which took effect in 2018, to the recently proposed EU AI Act, the European Parliament has shown its preference for taking early and prescriptive action in these matters. The first of its kind, the EU AI Act, in its draft form, will apply to all entities involved in the creation or dissemination of AI tools within the EU.

The act aims to restrict or prohibit certain uses of these tools, including social scoring, predictive policing, and biometric identification. It will also require generative AI systems to disclose the copyrighted materials used to train their models and to label outputs as AI-generated. Penalties for violating the EU AI Act are more substantial than those imposed under the GDPR, and European citizens will have the ability to file complaints directly with local regulators. The EU AI Act is expected to come into force sometime in 2025.

In contrast to the EU’s prescriptive approach to the regulation of AI, the United Kingdom plans to leverage its existing privacy laws and regulators to oversee AI, with a dedicated task force collaborating with generative AI companies to set safety and security standards. Although the precise mechanisms for accomplishing this remain unclear, the overall approach aligns with the UK’s pro-growth, innovation-oriented policy framework announced in March 2023, which emphasizes fundamental principles for AI regulation, including security, transparency, and redress.

North America

In the United States, AI regulation is still evolving, with various state and federal initiatives offering insights into future regulation. Although no comprehensive stance has yet materialized in Congress, several bipartisan bills have been introduced, each focusing on different policy issues. In September 2023, the U.S. Senate began holding a series of forums to educate lawmakers on these tools. Simultaneously, the White House has introduced a “Blueprint for an AI Bill of Rights” that outlines five voluntary principles to guide the design and use of AI, which has already been adopted by seven major U.S.-based tech companies. The National Institute of Standards and Technology and the National Artificial Intelligence Advisory Committee have each issued guidance on managing AI-related risks to individuals, organizations, and society. Such frameworks are expected to receive deference from the federal government, paving the way for a national standard if approved.

In the absence of federal action, six U.S. states have enacted, or will enact by the end of 2023, laws on the topic of AI. California Governor Gavin Newsom signed an executive order directing all state agencies to develop reports and guidance on the use of such technologies, aiming to cement California’s position as a leader in AI innovation and regulation.

In Canada, Parliament is considering the Artificial Intelligence and Data Act (AIDA), which is designed to hold entities responsible for the AI systems they develop or deploy. If enacted, AIDA will require businesses to identify, address, and document potential risks and biases in their AI systems; assess the intended uses and limitations of those systems and ensure that users understand them; and implement effective risk mitigation strategies with ongoing system monitoring.

APAC

The regulatory landscape for AI in the APAC region remains largely uncharted. However, China has recently emerged as the region’s leader in AI regulation, implementing its initial set of rules governing such tools. These regulations, which came into force in August 2023, apply to companies offering generative AI services to the public, requiring them to obtain a license from Chinese authorities before operating within the country. Once licensed, these providers must conduct routine security assessments of their platforms, register with the government all tools and systems capable of influencing public sentiment, and ensure the protection of user data, all in the name of safeguarding China’s “core values of socialism.”

Other countries in APAC like Australia and Taiwan are leaning toward more prescriptive regulations, while Singapore and Hong Kong opt for voluntary guidelines. South Korea and Japan plan to combine government guidance with sector-specific restrictions.

Impact on Global Mobility

For businesses operating in the global mobility industry, it is important to take note of the regulatory patchwork that is certain to emerge as the use of AI becomes more mainstream. Businesses can best prepare by understanding how regulatory authorities view and govern the various AI tools they would like to deploy. This allows compliance measures to be integrated into AI tools from the start, resulting in a better-quality product.

It is also important for businesses to consult with their clients and transferring employees to understand whether the use of AI in the provision of services is acceptable. Remember that AI tools use large amounts of data, including personally identifiable information, to produce outcomes. Even if a business is complying fully with AI and data privacy regulations, it may be the policy of the client or the preference of the employee not to use such AI-based tools due to security and ethical considerations. In this case, disclosure and transparency are key.

Conclusion

In the rapidly evolving landscape of AI regulation, it’s evident that AI is no longer just an emerging technology evoking curiosity but a critical consideration for governments, businesses, and society. While we won’t know for some time whether the conference panelist was correct or whether the alarm over unbridled AI adoption is just another Y2K moment, proactive engagement with evolving regulations will be vital for businesses to thrive in this dynamic, interconnected landscape.

Walter Dannemiller is the vice president of legal for Dwellworks LLC with responsibility for all aspects of the company’s global legal and compliance affairs. Prior to joining Dwellworks in January 2019, he served as legal counsel for a large commercial laundry retailer.