With more than 42 artworks sold through weekly auctions, this artist has generated more than 2.47 million USD in revenue, making them the 17th best-selling artist on SuperRare, a digital auction site for NFTs (non-fungible tokens). These impressive numbers, however, come with a surprising truth: the artist in question is not human. Known as Botto, this generative AI (hereafter abbreviated as GenAI) is an experiment designed to observe how machines can create artworks autonomously, with no direct human intervention.
In recent years, GenAI, typically used to generate images, texts, and audiovisual files from human-written prompts, has risen in significance. As much as it is a marvel, GenAI's ability to create works so similar to those made by human hands also alarms many people, largely because of the ubiquitous fear that these machines may one day replace humans. This has given rise to new dilemmas over its legal implications, especially in interstate disputes. Under the umbrella of international law, the polemics of GenAI's ethics of usage, including how to overcome the challenges of a holistic and just implementation, remain blurred.
Perhaps the most notorious feature of international law is that, being highly dynamic and debatable, it is never a "one-size-fits-all" concept. Take international copyright law as an example: legal solutions to international copyright disputes may differ according to who created what, and why. One example is the 2018 "Paintings Generated by Artificial Intelligence" case, brought before the High Court of England and Wales. The case involved a group of artists who used GenAI to create paintings, which were then sold in exhibitions. The court held that the copyright in these paintings belonged to the artists who created the AI algorithms, as they were deemed the true people behind the machines' creative processes. Yet, just two years prior, in the "Next Rembrandt Project", which aimed to recreate the titular Dutch painter's style through GenAI, the court left the case undecided because the paintings were produced for marketing purposes only and were never sold or auctioned for commercial profit.
The lack of universal standards on GenAI has brought into focus the newfound urgency of analyzing the regulatory aspects of this matter under international law. To pave the way for international digital governance, a standardized set of rules and objectives regulating the use of GenAI is urgently required. The European Union (EU) is the trendsetter here, being the first international institution to adopt a widely known set of regulations on GenAI through its AI Act, first proposed in 2021 (Zhuk, 2023). The organization's so-called "Brussels effect" has the potential to widen the Act's scope, pushing other actors to incorporate it into their domestic law and fostering regional and international cooperation in AI governance. Yet numerous challenges remain, and there is growing skepticism about whether the EU can truly ride the wave of globalization and leverage its "Brussels effect" to the fullest. This essay addresses these challenges and their implications for how contemporary actors in international relations may navigate these uncertainties.
Getting to Know the European Union's AI Act
At the time of writing, there are no universal norms regulating GenAI, as it is a relatively new form of technology. Before the AI Act was proposed in 2021, the EU relied on existing legal provisions on the matter, though none concentrated specifically on GenAI as a distinct form of artificial intelligence. Even after the AI Act, GenAI tends to be treated as sui generis: an "exception" in international law due to its special reliance on nonhuman data algorithms, making its legal status highly debatable (Zhuk, 2023).
Two main regulations on digital data that the EU refers to on a regular basis are the EUCD (European Union Copyright Directive) and the CDSM (Copyright in the Digital Single Market directive). The EUCD, amended in 2019, assigns copyright to the owner of any type of original work as long as it fulfills two main criteria, "originality" and "creativity". Yet what these two criteria truly mean is left unexplained; works generated through data algorithms, such as those created through GenAI mechanisms, can be read into this definition. The CDSM, adopted in the same year, also implicitly regulates GenAI through an article stating that data used for academic research or TDM (text and data mining) is free to use, while data used to generate GenAI prompts must have its licenses paid and its usage limited to a certain extent.
Another essential regulation to be addressed is the GDPR (General Data Protection Regulation), which aims to facilitate a safe and accessible data-sharing environment for all layers of EU society. Norms of data protection as stated in the GDPR are widely incorporated by other actors, such as India and ASEAN, into their domestic or regional laws. Even the U.S. state of California has incorporated parts of the GDPR into the CCPA (California Consumer Privacy Act). This phenomenon is called the Brussels effect. Named after the EU's headquarters in Brussels, Belgium, where the official seats of the European Council, the European Commission, and the Council of the European Union are located, the term refers to the seemingly automatic diffusion of norms from the EU to other states, regions, and institutions. Given the EU's prime position as the world's largest trading bloc, norms and regulations implemented there act like "contagions", diffusing to the EU's trading partners solely by virtue of its market size. The EU's stringent regulations are also thought to make trade easier for regulated institutions, such as corporations, leading them to conform to the EU's standards for the sake of practicality (Bach & Newman, 2007).
Proposed in 2021 in response to the regulatory deficiencies of the EUCD and CDSM, the GDPR-inspired AI Act was billed as the world's first formal regulation on AI, including GenAI. In July 2024, the Act was published in the Official Journal (OJ) of the EU and will consequently apply from August 2026, exactly 24 months after its entry into force. The Act is known for its newfangled categorization of AI into four types based on risk: unacceptable risk, high risk, limited risk, and minimal risk. GenAI is considered a high-risk type of AI and therefore requires fixed procedures and a high degree of transparency in its management (EU, 2024).
The challenge remains, though, that despite the Act's large-scale formal acceptance, domestic implementation varies greatly according to each country's capacity (Tarafder & Vadlamani, 2023). Because AI is a relatively new form of technology, many countries still face technical challenges in implementing the AI Act: lack of funding, lack of trained personnel, and lack of national consensus in laws regarding AI. This brings another question into the spotlight: can the so-called "GDPR of AI" produce the same Brussels effect as its predecessor?
Challenges Behind the AI Act
Three points highlight the AI Act's weakness in fostering legitimate digital governance of GenAI. The first relates to the obscurity of intellectual property laws. Despite the existence of institutions (organizations and treaties that lead to the creation of collective norms) focused on intellectual property, such as WIPO or the WTO's TRIPS Agreement, the implementation of these laws relies heavily on the domestic capacities of each state. Existing treaties may not be ratified by a number of states, and most organizations have limited jurisdictions; the TRIPS Agreement, for instance, binds only WTO member states. Another hindrance is that intellectual property laws risk being seen as illegitimate, and therefore not urgent to implement.
Existing regulations on intellectual property also get fuzzy when it comes to AI. When the creator of an artwork is not even human in the first place, it is no wonder that its copyright status is questioned. A prominent debate concerns who rightfully owns an artwork created by GenAI: the creator of the algorithm, the artist who utilised the program, or the individual who commercialized it. Some even argue that in the case of AI-generated artworks there is no such thing as intellectual property at all, as no direct creative process occurs. In other words, AI-generated artworks are not protected by copyright laws because they lack the bedrock requirement: human authorship. This interpretation has been adopted by U.S. courts (Hung, 2023).
Secondly, there are ethical aspects to the utilization of AI-generated content. GenAI can be used to plagiarize content, falsify personal data, and spread false or discriminatory information. Ironically, the "free", unregulated use of GenAI is at times justified by the notion of freedom of expression found in the Universal Declaration of Human Rights. One example of the ethical controversies surrounding GenAI in the European context took place in the Russo-Ukrainian War, when Russia used a deepfake to produce a fake video of Ukrainian president Volodymyr Zelenskyy reading out a statement that the war had ended and that the Ukrainian people should surrender. Such content may deepen public skepticism, especially amid the anomie characteristic of conflicts and wars, and destabilize states and regions altogether through social distrust and civil unrest (Irish, 2024). The EU's response, which focused mostly on the general aspects of Russia's invasion through economic sanctions, was not accompanied by measures specifically targeting the use of GenAI in Russia's propaganda. This indicates that even in contexts of conflict, the ethical aspects surrounding GenAI remain marginal, further blurring the lines between international law and state responsibility.
The third obstacle is the suboptimal implementation of the AI Act's regulations, in both substantive and bureaucratic respects. The Act is often deemed "radical" for its unique choice to categorize AI systems by risk rather than by purpose, which makes the scheme subjective, highly interpretative, and debatable. This can render the Act counterproductive, as the risk level of a given type of AI may vary with the context in which it is utilised. For example, deepfakes, categorised as limited-risk AI in the Act, may be considered high-risk in the Russo-Ukrainian context mentioned above (EU, 2024).
The AI Act also omits regulations on intellectual property and other technical matters such as data confidentiality. Even though it is billed as the continuation of the GDPR, the two vastly differ: the GDPR contains guidelines on protecting confidential private data and on how stakeholders come together to solve international disputes in digital governance affairs, while the AI Act does not (Tarafder & Vadlamani, 2023).
These challenges may cause the EU's interests to clash with those of other actors. For example, the EU's focus on risk mitigation may contradict U.S. regulations, which concentrate more on post-hoc resolution, including system repairs after the damage is done (Tarafder & Vadlamani, 2023). Such differences may slow down the process of norm diffusion the Brussels effect aims to create. Notably, Silicon Valley corporations engaged in producing AI technologies, such as Google, Meta, and OpenAI, may not see the Act as a legitimate regulation because it goes against their commercial interests. A study by Bommasani et al. (2023) from Stanford found that these U.S.-based companies have low rates of compliance with the AI Act in terms of intellectual property, risks and mitigations, data governance, energy, and many other aspects.
New Doubts, New Urgencies
The EU's "unique" take on AI through the Act signals a potential start to the widespread diffusion of norms on digital governance (Zhuk, 2023). However, a renewal of the Act is urgently required: by categorizing AI by purpose rather than by risk, the Act could gain legitimacy through more objective standards. Through holistic research and development leading up to international consensus, the Act's status may change from soft law to hard law, increasing its binding force and legitimacy. Such research and development efforts may yield clearer rules on data sharing, data security, and intellectual property. Also, learning from successful treaties on nontraditional issues, such as the Montreal Protocol, the Act should distinguish among nations according to their capacity to comply. Providing technical and financial assistance to less developed nations, for example, may lay the groundwork for a just transition to digital governance in the context of AI.
Cooperation with institutions like the UN, the ITU (International Telecommunication Union), and ENISA (European Union Agency for Cybersecurity) may also prove beneficial in socializing the significance of the AI Act. By collaborating with international organizations, academic institutions, mass media, and civil society organizations, the EU can build wider networks for promoting its standards, facilitating the much-desired Brussels effect and thus leveraging the Act's legitimacy worldwide. The Act may even integrate other forms of technology, such as blockchain, which stores data in unmodifiable chains through its system of decentralized distributed ledgers, in order to guarantee data confidentiality (Ramos & Ellul, 2024). Balancing freedom of expression against the legal and ethical implications of the widespread use of AI requires legitimacy and transparency. Viewed from a neoliberal perspective, the EU can act as a fulcrum to foster legitimacy in the realm of artificial intelligence, thus forming a newfound digital governance.
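The "unmodifiable chain" property attributed to blockchain above comes from linking each record to a cryptographic hash of the one before it, so that altering any record invalidates every later link. The following is a minimal, illustrative sketch of that mechanism only (the record contents are hypothetical, and it omits the networking and consensus layers of a real distributed ledger):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash covers its own data and the previous block's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    block_hash = hashlib.sha256(payload.encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": block_hash}

def is_valid_chain(chain):
    """Recompute every hash and check each block's link to its predecessor."""
    prev_hash = "0" * 64  # genesis sentinel
    for block in chain:
        payload = json.dumps({"data": block["data"], "prev": block["prev"]}, sort_keys=True)
        if block["prev"] != prev_hash:
            return False  # broken link to the previous block
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False  # block contents were altered after hashing
        prev_hash = block["hash"]
    return True

# Build a three-block chain of (hypothetical) data-consent records.
chain = []
prev = "0" * 64
for record in ["consent: user A", "consent: user B", "consent: user C"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

assert is_valid_chain(chain)
chain[1]["data"] = "consent: forged"  # any edit breaks verification
assert not is_valid_chain(chain)
```

Tampering is thus detectable rather than impossible; it is the decentralized replication of the ledger, not shown here, that makes quietly rewriting the chain impractical.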