Deciphering the AIGC Compliance Blueprint (Part I): Regulatory Rationale Behind the AIGC Measures By Cai Peng 2023-07-31




On July 13, 2023, the Cyberspace Administration of China (“CAC”), in concert with six other ministries, issued the Interim Measures for the Management of Generative Artificial Intelligence Services (the “AIGC Measures”). Drafted with consideration of feedback gathered from a broad spectrum of stakeholders on the previous draft for comment, the AIGC Measures stresses the principles of harmonizing technological progress with security and fostering innovation within a lawful governance framework. Generative artificial intelligence, a technology that applies algorithms and datasets to autonomously generate content, will remain in the limelight of the tech sphere for the foreseeable future, and discussions surrounding data compliance and related issues will intensify accordingly. As Part I in a series of articles intended to chart the regulatory course for AIGC and explore its potential trajectory amid the evolving legal landscape, this article seeks to decode the regulatory intent embedded in the AIGC Measures.





The previous draft of the AIGC Measures focused heavily on imposing legal obligations on enterprises concerning the content security of AIGC. This approach overlooked the practical challenges that businesses may encounter in fulfilling such obligations and, in some respects, ran counter to the underlying logic of AIGC technology. The previous draft therefore provoked widespread discussion upon its release.


In contrast, the current AIGC Measures has prudently refined the compliance obligations of providers of AIGC services (“Providers”), establishing a buffer zone that allows enterprises to prepare their compliance measures without hindering their growth momentum. This calibrated approach demonstrates legislative restraint. The AIGC Measures has also alleviated some of the compliance burdens on enterprises. For instance, the draft’s mandatory duty requiring enterprises to validate the truthfulness and accuracy of training data has been moderated to a commitment based on due diligence. Furthermore, the AIGC Measures has taken into account the limited control that Providers have over user-generated content and removed certain prohibitive provisions.


It is also noteworthy that the AIGC Measures affords Providers a degree of flexibility in meeting compliance obligations. Providers may now use service agreements to shift part of the compliance obligations to users, thereby controlling and reasonably transferring compliance risks. The previous draft, which required Providers to suspend or terminate AIGC services upon detecting any online hype, malicious postings and comments, spam, or malicious software, has been revised: Providers now have discretion to take a variety of responsive measures, such as issuing warnings, restricting certain functionalities, or suspending or terminating services in accordance with the law and the agreements between Providers and their users. The draft’s requirement that Providers optimize training models within three months of discovering any illegal content has been removed, safeguarding Providers’ autonomy in improving algorithm model performance and managing content.





The AIGC Measures reflects a national commitment to fostering the growth of the AIGC industry. These provisions not only provide strategic guidance and legal safeguards for industrial innovation but also establish a balanced, rational, and scientifically based framework for industry regulation:


* Principle of Balance. The AIGC Measures introduces principles that underscore the necessity of harmonizing development with security and boosting innovation within the framework of law, setting the tone for the regulation of the AIGC industry.

* Legislation Fostering Technological Advancement. The AIGC Measures regards the Law on the Progress of Science and Technology as one of its superordinate laws, emphasizing that fostering technological advancement in AIGC services is among its primary purposes.

* Encouraging Multi-level Development. The AIGC Measures puts forth a diverse, multi-level approach to advocate the development and innovation within the AIGC sector.


a) On the technical level, the AIGC Measures encourages innovation across the supply chain and at all production stages, encompassing areas such as algorithm design, framework construction, chip manufacturing, and the development of supporting software platforms.

b) Regarding infrastructure, the AIGC Measures fosters collaborative efforts between all tiers of government and private enterprises to build robust AIGC infrastructure and public platforms for training data resources, aiming to enhance the sharing of computational resources.

c) In terms of market engagement, the AIGC Measures calls for cooperation among a broad array of organizations, both profit and non-profit, in areas such as technological innovation, data resource development, commercialization and application, and risk prevention in the AIGC field.

d) Concerning resource investment, the AIGC Measures supports the use of public data for algorithm training.

e) For industrial support, the AIGC Measures promotes enterprises’ procurement of secure, reliable chips, software, tools, computational power, and data resources.


These provisions will provide consistent support and bolster China’s competitiveness in the global AIGC technology race in the upcoming years.





Compared to the draft, the AIGC Measures puts more emphasis on the distinctive features of industry-specific regulation. As outlined in Article 16 of the AIGC Measures, regulatory bodies including the CAC, the National Development and Reform Commission (NDRC), the Ministry of Education (MOE), the Ministry of Science and Technology (MOST), the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS), and the National Radio and Television Administration (NRTA) are each assigned to fortify the supervision of AIGC services within their respective remits. The relevant supervisory authorities are expected to adapt and refine their regulatory methods to keep abreast of the innovative development of AIGC and to craft appropriate rules or guidelines tailored to the various categories and tiers of AIGC technology. This signals a potential future shift towards more nuanced, industry-specific, and targeted supervision of AIGC services.


The industry-specific regulatory approach aligns with the technologically intensive nature of AIGC, allowing a diverse range of sectors to develop more precise and effective regulations, measures, and standards based on their specific needs. For instance, preventing the generation and propagation of false news by AI is potentially one of the key regulatory objectives for AIGC applied in the news industry. In the financial industry, it becomes imperative to maintain the objectivity and fairness of user profiling and to ensure business continuity in the event of system attacks.


This industry-specific approach puts regulatory authorities in a better position to understand and manage AIGC services within their respective remit, paving the way for the development of specific regulatory measures and guidelines for different industries. By avoiding the pitfalls of a one-size-fits-all regulatory methodology, this tailored approach also helps prevent potential roadblocks in the overall development of AIGC services.





Globally, the regulation of artificial intelligence is attracting considerable attention, with numerous countries actively exploring its possibilities and potential applications. In the realm of AIGC-related laws and regulations, two primary themes have come to the fore: algorithmic transparency and the categorization and tiering of AIGC.


Algorithmic transparency refers to the ability to reveal or explain the principles, logic, data, results, and other information involved in the design, training, optimization, and operation of an algorithm, allowing for effective supervision and evaluation. Algorithmic transparency is pivotal for regulating AIGC and promoting the sustainability of society at large. By facilitating a better understanding of the system’s decision-making basis, transparent algorithms bolster trust in and acceptance of AIGC technology among users and other stakeholders. Furthermore, algorithmic transparency empowers regulatory bodies to verify and assess the compliance and fairness of AIGC technology, safeguarding against discriminatory, biased, or inappropriate practices. Transparent algorithms also stimulate innovation and advancement in AIGC technology, promoting system improvement through learning and error correction. The emphasis on algorithmic transparency is embodied in Article 4 of the AIGC Measures, which instructs Providers to adopt effective measures to improve transparency and to increase the accuracy and reliability of AIGC services. The stipulation for algorithmic registration in the AIGC Measures, a provision that can also be found in the Internet Information Service Algorithmic Recommendation Management Provisions, further underscores this commitment to ensuring transparency.


The categorization and tiering of AIGC, an approach that addresses the diverse potential risks and impacts associated with AIGC technology and its applications, stratifies these into different levels and applies corresponding regulatory measures thereto. A regulatory framework built on this approach allows for the tailoring of requirements and measures to suit specific application areas, risk levels, and technological maturity of AIGC systems. This ensures precision in regulatory action, avoiding the pitfalls of a ‘one-size-fits-all’ approach. Moreover, a categorized and tiered framework fosters innovation and development, as it enables businesses and research institutions to plan and manage technological research and development, thereby reducing compliance costs and risks more effectively. The AIGC Measures echoes this approach, explicitly advocating for AIGC categorization and tiering, aligning with the Data Security Law’s requirement for data categorization and tiering.





While AIGC services carry enormous potential and promising prospects, they have also brought about a multitude of challenges and risks. The AIGC Measures, the first piece of legislation in the People’s Republic of China explicitly tailored to govern AIGC services, espouses an approach of inclusivity and caution. It strikes a considerate balance between promoting AIGC innovation and curtailing potential misuse. The rollout of the AIGC Measures not only sets a robust legal foundation and provides safeguards for enterprises, thereby encouraging the sturdy growth and regulated use of AIGC, but also offers valuable insights and benchmarks for the global regulation of AIGC.