
AI Regulation Paradigms and the Struggle over Control Rights

Who has the power to shape AI systems, and for whose benefit? Scott Timcke explores regulatory paradigms of AI control rights in China, the EU, and the US, and problematizes the exclusion of the Global South from shaping these evolving paradigms.

Published on May 15, 2024

Scott Timcke

1. Introduction

AI systems will likely reshape global social and economic relations. Sensing this, several AI governance paradigms have emerged from the West and China as governments become more assertive in regulating their information and communication technology sectors. These different paradigms reflect not just disagreements over technical standards, but ascendant geopolitical realities. Specifically, they reflect a deeper struggle between different visions of control rights. Here, control rights mean the power to determine who gets to shape AI, for what purposes, and for whose benefit. This matters because the creation of AI systems has mostly been controlled by a handful of firms in the West and China, leaving most of the Global South as net importers of AI systems with little clout or regulatory capacity to shape the rollout of AI.

Still, these struggles over control rights cannot be understood in isolation or without an appreciation of recent shifts in international politics. These shifts are partly driven by the consequences of globalization, like the vulnerability of supply chains, the realization that the West’s foreign direct investments have contributed to the rise of China as a formidable global competitor, and the provincialization of the West as the main driver of world-historical change. In an attempt to manage some of these shifts, the African Union (AU) has been invited to join the G20, ostensibly to counter the expansion of the BRICS bloc (Brazil, Russia, India, China, and South Africa). Nevertheless, the shifts are a profound shock to imaginations shaped by the now-passing unipolar moment.

While acknowledging the complexity of political and social contexts – each rife with multiple interests, factional contests, and competing values – this paper explores the competing visions of control rights that may come to shape the development and deployment of AI systems across the world.1

2. China’s Regulatory Paradigm

China is perhaps the most influential actor in the global AI landscape. This is partly because the country is one of the leading national investors in AI research and development, and partly because it is the biggest data market in the world. Through sequential and interactive regulation, Chinese agencies have built up knowledge of AI systems, their harms, risks, capabilities, and affordances. China is a very fast mover in AI regulation, and its acquisition of regulatory expertise can be traced through the Cybersecurity Law of 2017, rules for recommendation algorithms in 2021, rules for deep synthesis in 2022, and rules for generative AI in 2023. Algorithms are the fundamental unit of regulation in this paradigm, with secondary emphasis on training data. Thus far, regulation is sector-specific rather than systematic (e.g., rules addressing food delivery workers rather than GPS tracking in general). This approach is based on a vision of AI that appeals to ideals like human dignity, social harmony, national sovereignty, and global peace.

Chinese regulators are building policies and technical tools to fulfill this mandate. The goal is a comprehensive regulatory paradigm for AI, which is currently being drafted. Stakeholders like researchers, academics, companies, and other government agencies provide input, feedback, and recommendations on aspects of the regulatory paradigm such as risk assessment, impact assessment, codes of conduct, data governance, and international cooperation.

Facing the inherent tension of controlled innovation, China seeks to regulate the infrastructure that supports or enables AI systems to ensure their security, reliability, and efficiency. In March 2023, the Central Science and Technology Commission and the National Data Administration were established. It appears that these two bodies are intended to link governance, strategic planning and technology development around data and AI infrastructure to ensure its alignment with the state’s interests and goals.

It seems that the Chinese government aims to balance the promotion of innovation and competitiveness with the protection of security and stability. China’s approach to AI takes its cue from state-led developmental industrial policy. The Chinese government treats AI as a strategic resource that can bring significant benefits like economic growth, social development, and international influence.

In summary, the main objective of China’s approach to AI is to ensure state information control, in principle and in practice. While it is an older, encumbered term, ‘the Great Firewall’ speaks to the state’s longstanding agenda to control data flows and set standards, a temperament also present in the AI policy space.

3. The EU’s Regulatory Paradigm

The EU is developing a comprehensive regulatory paradigm for AI that aims to weigh economic innovation against the protection of fundamental rights. The European Commission (EC) characterizes its project as an “approach to artificial intelligence [that] centers on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.” This paradigm is based on a vision of “trustworthy AI” that respects human dignity, autonomy, democracy, equality, the rule of law, and human rights.

The EU’s paradigm hinges on the principle of “regulatory innovation.” Drawing on its high-capacity and effective administrative states, the EU is able to update laws to address specific challenges that arise across the lifecycle of AI systems, from design to deployment to use. The EU also acknowledges that AI is a cross-cutting, horizontal technology that affects multiple sectors and domains, like health, education, transport, security, justice, and the environment. For this reason, the EU places considerable emphasis on safety, rights-based regulations, and civil liability.

The main instrument of the EU’s paradigm is the Artificial Intelligence Act (AIA), which was proposed by the EC in April 2021 as part of a package that included a review of the Coordinated Plan on AI and an updated digital strategy. The AIA is a legislative proposal that establishes “horizontal rules for development, commodification and use of AI driven products, services and systems within the territory of the EU.”

The AIA classifies AI systems into four risk categories, each carrying different requirements and enforcement mechanisms for non-compliance. The AIA prohibits AI systems that pose an unacceptable risk to fundamental rights or public safety, like those that manipulate human behavior or exploit vulnerabilities; a possible example is a voice-assisted children’s toy that encourages dangerous behavior. Other AI systems categorized as unacceptable risks are those that implement social scoring or rating systems, or those that use real-time or remote biometric identification systems in public spaces for law enforcement purposes, with some exceptions. These AI systems are deemed incompatible with the EU’s values and are banned from being developed or used in the region.

The AIA imposes strict obligations for high-risk AI systems that have a significant impact on people’s life chances or access to essential services, like those used for recruitment, education, health care, justice, law enforcement, migration, or public administration. These AI systems must undergo a prior conformity assessment before being placed on the market or put into service in the EU.2 They must also comply with certain quality criteria throughout their lifecycle, like accuracy, robustness, security, human oversight, and transparency. High-risk AI systems must be registered in a dedicated EU database and carry the CE marking to indicate their compliance with the AIA.

The AIA imposes transparency obligations for limited-risk AI systems that interact with humans, detect emotions, or generate content. Examples include chatbots, virtual assistants, AI-generated articles or audio, and social robots. These AI systems must inform users of their artificial nature, help users understand the nature of the interaction, and enable users to opt out, if desired. They must also ensure that their output is not misleading or harmful to users or third parties. This lifecycle-spanning oversight, often described as regulation “from lab to market,” is encapsulated in terms like “trustworthy AI.”

The AIA establishes a governance structure for overseeing and monitoring the implementation and enforcement of its provisions. One body is the European Artificial Intelligence Board (EAIB), composed of representatives from national authorities and experts from various fields. The EAIB is responsible for providing guidance, advice, and recommendations on aspects of AI regulation like risk assessment, conformity assessment, standards development, codes of conduct, data governance, and international cooperation. The AIA also sets out administrative fines for infringements of its rules that can reach up to 6% of global turnover.

The main goal of the EU’s AI regulation is to achieve economic innovation while also protecting fundamental rights from known and potential harms. The EU seeks to achieve this goal through a comprehensive regulatory framework supported by effective administrative states.

4. The US’s Regulatory Paradigm

AI policy in the US has developed iteratively, albeit in different areas depending on the party in power and the sway of various federal agencies, many of which have put forth AI-related initiatives. The regulatory environment is also shaped by US-headquartered firms like Alphabet, Microsoft, Amazon, and Meta through their lobbying of politicians. But while the private sector holds considerable sway in the policy process, the rudder for AI policy is the executive branch of the US government. Accordingly, it is worth briefly looking at recent administrations.

The Obama administration focused on the management of AI risk. This administration released two reports outlining its plans for the future of AI, plans characterized more by sensibility than speculation. There was focused attention to security, privacy, and safety matters, many of which also had immediate economic implications. This groundwork was later built upon: the Select Committee on AI within the National Science and Technology Council, established in June 2018, included representatives from defense, intelligence, commerce, treasury, transportation, energy, and labor, and sought “to prioritize and promote AI R&D, leverage Federal data and computing resources for the AI community, and train the AI-ready workforce.”

The Trump administration emphasized AI’s role in economic growth and competitiveness. National AI Research Institutes were established in 2020, focused on a range of AI research and corporate applications like machine learning, synthetic manufacturing, and precision agriculture. The Biden administration has sought to return to the policy course established by the Obama administration, focusing on protecting the public from algorithmic discrimination and ensuring privacy protections while adding more focused attention to industrial policy.

In October 2022, the Biden administration released the Blueprint for an AI Bill of Rights, intended to serve as a framework for the use of AI technology by both the public and private sectors, encouraging anti-discrimination and privacy protections. In February 2023, President Biden signed an Executive Order directing federal agencies to eliminate bias in their design and use of new technologies, including AI, with the aim of protecting the public from algorithmic discrimination. These measures, along with more recent regulatory developments, suggest a focus on giving federal agencies the green light to address AI issues within their existing scope of rulemaking.

US AI regulation is in a state of flux, shaped by competing considerations around national security, individual rights, and the interests of various stakeholders. The overarching aim appears to be minimizing algorithmic discrimination and protecting individual privacy, while also leveraging AI as part of an emerging industrial policy to bolster the country’s global economic competitiveness.

5. The Exclusion of the Global South

Considering the enormous amounts of capital, computing power, and capacity that AI demands, early movers have consolidated their power and their ability to set the policy agenda. While AI systems may be widely adopted over time, the gap between those on the frontier and new entrants is likely to widen, further disadvantaging the Global South. If the gap in technical production is mirrored in global policy, the current pattern risks creating global standards blind to dynamics in the Majority World. Furthermore, the Global South lacks a proportional voice and presence in the venues that debate the path forward on AI governance: Africa has a single member country in the prestigious Global Partnership on AI, which has 29 members, mostly from North America, Europe, and Asia.

Groups like the African Union (AU) have sought to formulate common positions on AI governance centered on continental needs. The AU’s Data Policy Framework asserts collective data rights beyond individual privacy, prioritizing increased data access, skill development, and responsible innovation around shared African challenges. The Framework advocates data governance for the common good as a pathway to realizing economic rights and to decentralizing and redistributing capital and power currently concentrated in a few countries and companies. Such viewpoints need far greater prominence in global debates, and more resources are needed to boost the research, advocacy, and clout of AI policy actors from the Global South.

Meanwhile, as net importers of AI systems, countries in the Global South are susceptible to having their policy decisions overdetermined by the paradigms developed in the West and China, especially as Chinese-headquartered firms seek to enter foreign markets and acquire assets abroad. Dependence on China for finance, expertise, and infrastructure also leaves countries in the Global South vulnerable to coercion. China’s approach to AI, embedded within its muscular industrial policy and national planning in strategic sectors, resonates with the statist developmental aspirations of many Global South countries. This is especially true where nationalist ruling elites see themselves as the primary agents of development, as is the case in Ethiopia, Rwanda, and South Africa. Similar general concerns apply to Western-headquartered firms, although their methods of engagement are somewhat different.

The consolidation of effective control rights over AI in the hands of a few Chinese, EU, and US-headquartered corporations will likely further concentrate wealth and influence to the detriment of the Global South. Such concentration provides limited guardrails against potentially harmful applications, as profit maximization trumps human rights. For some in the Global South, the prevailing AI governance paradigms will simply deepen dependence on foreign corporations. These relationships bear an uncanny resemblance to the discussions in the 1970s and 1980s around “dependency by design” or “dependent development by necessity,” depending on perspective. These underlying power relations are one reason for the emerging rhetoric about data sovereignty as a means to assert national control over AI systems.

Recognizing that AI governance is a complex interplay of laws, policies, and strategies implemented by numerous actors, what seems to be missing in the policy discourse – notwithstanding the efforts of scholars and researchers – is attention to how financialization, dominant trade and IP regimes, and the technocratic capture of democratic decision-making have together helped to initiate a neo-colonial AI order. A deeper discussion of these issues requires a separate essay.

6. Efforts at the United Nations

Allowing AI governance frameworks to be set by the dominant providers of AI systems would simply reproduce existing global inequalities. The UN Secretary-General has called for global coordination on AI governance “to harness AI for humanity while addressing its risks and uncertainties as the AI-related services, algorithms, computing capacity, and expertise become more widespread internationally.” The UN in its entirety has important contributions to make toward AI governance. A multi-disciplinary, multi-stakeholder advisory panel has been announced to help chart a way ahead on global governance in the lead-up to the Summit of the Future scheduled for September 2024. This panel has sought comments on its interim report, which places considerable emphasis on the global inequalities that arise from power concentration in and around AI, even if it offers fewer details about mechanisms of redress.

The UN seeks to promote initiatives for ethical AI, as exemplified by UNESCO’s work and activities like the UN Common Agenda and the Global Digital Compact. As the most representative global organization, the UN and its agencies have the standing and perspective to monitor AI development. However, it is unlikely that the UN’s involvement will evolve beyond soft law initiatives. There is much discussion about the formation of a new UN secretariat on AI governance. To be useful, such a secretariat would need to be complementary, not duplicative, while leveraging the strengths of the UN system.

The UN and its agencies can play an important role in articulating universal norms and principles, advocating for global public goods over narrow national or corporate agendas. Efforts like this can help rally universal consensus and spur the adoption of human rights norms into domestic policies. However, without any enforcement mechanisms, the UN is limited to moral persuasion to coordinate actors who have little interest in finding common ground.

7. The Struggle Over Control Rights

These competing paradigms leave ample room for conflict. Governance of transformative technologies like AI ultimately reflects the struggle over control rights and the political economy of who influences design, development, and deployment. Absent meaningful participation from the Global South, rules and norms imposed from above risk reinforcing existing power relations. Furthermore, a governance paradigm built on transparency and accountability alone will not suffice to shift the unjust outcomes of current AI business practices and self-regulation. Transparent exploitation is still exploitation. Treating the inequalities and injustices embedded in the millions of automated decisions made each day as isolated, separable harms perpetuates the status quo – negatively impacting those at the intersections of multiple inequalities. While the growing literature on AI ethics has focused on the need for social justice and the protection of human rights, less attention has been paid to the economic injustice arising from the uneven distribution of opportunities related to data value creation, including through AI.

Additionally, we may be witnessing the end of human rights language as a consensus rhetoric. The right to protest government, for example, is being constrained by burdensome regulations or surveillance. “As a result, thousands of people are being unlawfully dispersed, arrested, beaten and even killed during demonstrations,” according to Amnesty International’s analysis of its data. Previously, even actors who did not embrace the concepts and commitments of that language at least had to engage with it and frame their actions in its terms.

Notwithstanding the appeal to human rights, US-headquartered corporations with unprecedented levels of market capitalization have an unmatched capacity to invest in the computing and physical infrastructures essential for AI innovation. Current trade and IP regimes enable cross-border data flows that lead to the unrestricted commodification and enclosure of public knowledge. Despite numerous AI ethics initiatives, the deployment of AI – whether in the market, state, or society – is currently driven by the profit imperative and utility maximization. Sadly, these conversations are often conducted without substantive attention to the exploitation of the labor that creates data, nor do they conclude with a call for radical redistribution to offset the social costs that come from the concentration of capital derived from intense exploitation.

Taking all these factors into account, the Global South has little to no control rights around AI, and few prospects of establishing any. A new conception of AI governance is needed, one that rebalances control rights more equitably and is not biased towards statist or corporate interests. Such a paradigm would incorporate measures for the redress of longstanding social and global inequalities, regardless of whether they were caused by AI. This requires the UN and other entities to foster the inclusion of the Global South, challenge entrenched power hierarchies, and build consensus around the full range of political, civic, and economic human rights, even as their rhetorical force wanes. AI governance should also address the market dominance and corporate concentration of the AI industry. These considerations about economic justice emerge from research on AI (in)justices being undertaken across the world. Only by democratizing global AI debates can the outcomes serve the interests of populations that have been marginalized from governance debates.

Acknowledgements

I wish to thank Theresa Schültken and Andrew Rens for conversations on comparative AI governance. I also wish to thank Alice Marwick and the team at BITAP for their editorial support.
