Introduction
The wave of artificial intelligence is profoundly reshaping the production and dissemination of information content, raising the question of how to effectively utilize and govern it. One of the thematic forums at the 2026 China Internet Media Forum, titled “Effective Use and Governance: AI Content Regulation Development,” was recently held in Zhengzhou, Henan. The forum focused on the standardized development of AI content, showcasing governance achievements, exchanging practical experiences, and discussing long-term strategies for internet ecological governance.
AI Content Innovations
The short film “100 Seconds to Honor the Palace Museum’s Century of Guardianship” utilized AI technology to restore a wealth of historical materials, creating stunning visuals that blend ancient and modern scenes. Another project, “AI Creative Video: Nezha, Ao Bing, and Wukong Are Here! Myths Come to Reality with a ‘Billion’ Cool Factor,” showcased a creative interpretation of Chinese mythology through advanced technologies. Additionally, the piece “Waking Up in 2025 with Su Shi” cleverly juxtaposed the Song Dynasty poet Su Shi with the vibrant development of Sichuan in 2025, illustrating the province’s dynamic growth. These works, all drawn from the 2025 China Positive Energy Network Communication AI showcase, demonstrate AI’s role in enhancing content creators’ efficiency and expanding their imaginative horizons.
Risks of AI Misuse
However, the risks of AI misuse have also become prominent, presenting new challenges for internet ecological governance. Professor Shi Jianzhong of China University of Political Science and Law stated, “When technology is misused, what we see is no longer credible, and what we hear is no longer trustworthy.” Deepfakes erode the foundation of trust. In practice, some businesses have used AI to synthesize the likenesses and voices of celebrities, hosts, and professionals without authorization, impersonating them to promote products; this not only infringes on personal rights but also constitutes commercial fraud.
Challenges in AI Content Governance
The values and output quality of large AI models heavily depend on their training data. Zhang Peng, CEO of Beijing Zhipu Huazhang Technology Co., Ltd., noted that biases and errors in training data could subtly influence audience perceptions through continuous output. Academician Zheng Zhiming from the Chinese Academy of Sciences emphasized that issues such as unverifiable sources, uncontrollable boundaries, unassignable responsibilities, and untraceable processes are deep-rooted challenges in AI content governance.
The Impact of AI on Content Production
AI technology has significantly lowered the barriers to content production, leading to an explosive growth of homogenized and low-quality information on the internet. The personalized information distribution mechanism enabled by algorithms may exacerbate the “information cocoon” effect. Lei Binyi, founder and CEO of Wuyou Media Group, remarked, “The more powerful the technology, the more content producers must maintain a sense of reverence and consistently adhere to a positive value orientation.”
Strengthening Internet Ecological Governance
On November 28, 2025, the Political Bureau of the CPC Central Committee conducted its 23rd collective study on strengthening internet ecological governance. This has significant and far-reaching implications for promoting high-quality development in the internet sector and accelerating the construction of a strong internet nation. Under top-level design, governance practices are continuously deepening. The Central Cyberspace Affairs Commission has launched the “Clear” series of special actions to address prominent issues such as the use of AI technology to produce and disseminate false information and obscene content, advancing the standardized management of AI-generated content.
Legal Framework and Standards
At the same time, the legal foundation is being solidified. Last year, the Cyberspace Administration of China, together with three other departments, jointly issued the “Identification Measures for AI-Generated Synthetic Content,” along with the mandatory national standard “Cybersecurity Technology - Identification Methods for AI-Generated Synthetic Content,” both of which came into effect on September 1, 2025. Together, these measures construct a collaborative governance loop of “source identification - distribution review - dissemination verification - user declaration,” transforming principled requirements into executable governance practices. Regulatory documents in the field of information content are being formulated, the governance system is becoming increasingly refined, and governance effectiveness is becoming more apparent.
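At the source-identification stage, the rules require AI-generated content to carry both explicit labels (visible to users) and implicit labels (embedded in metadata). As a rough illustration of the implicit side, the sketch below builds a machine-readable metadata label for a piece of generated content. The field names (`AIGC`, `producer`, `content_hash`) are illustrative assumptions, not the actual schema defined in the national standard.

```python
import hashlib
import json

def make_implicit_label(producer: str, content_bytes: bytes) -> str:
    """Build a JSON metadata label marking content as AI-generated.

    Field names are illustrative only; the mandatory national standard
    defines its own label format and placement rules.
    """
    label = {
        "AIGC": True,  # flag: this is AI-generated synthetic content
        "producer": producer,  # the service provider that generated it
        # hash of the content, so a platform can check the label
        # still matches the file it travels with
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),
    }
    return json.dumps(label, sort_keys=True)
```

A distribution platform could then parse such a label during review and verify that the hash still matches the accompanying content before dissemination.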
Technological Solutions to Governance Issues
The problems brought about by technology ultimately need to be solved with technology. Zheng Zhiming elaborated on the technical framework of “Trustworthy Intelligence,” which aims to achieve rights confirmation, evidence preservation, traceability, and accountability through blockchain technology. Privacy computing ensures that high-value data is “usable but invisible, computable but not leakable,” while content governance shifts from being “large and comprehensive” to being “precise, specialized, and controllable,” embedding governance into the “before, during, and after” stages of content generation.
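The evidence-preservation and traceability goals that Zheng attributes to blockchain rest on a simple primitive: an append-only chain of records in which each entry commits to the hash of the one before it, so later tampering is detectable. The toy class below sketches that idea; it is a simplified stand-in for a real blockchain deployment, and the class and field names are invented for illustration.

```python
import hashlib
import json

class EvidenceChain:
    """Append-only hash chain: each record commits to its predecessor,
    so altering any earlier record invalidates every hash after it.
    A minimal sketch, not a production evidence-preservation system."""

    def __init__(self):
        self.records = []

    def append(self, payload: dict) -> str:
        """Add a record (e.g. a content-generation event) and return its hash."""
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
        h = hashlib.sha256(body.encode()).hexdigest()
        self.records.append({"prev": prev, "payload": payload, "hash": h})
        return h

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for rec in self.records:
            body = json.dumps(
                {"prev": rec["prev"], "payload": rec["payload"]}, sort_keys=True
            )
            if rec["prev"] != prev:
                return False
            if hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

In this model, each generation or edit of a piece of content appends a record; an auditor can later replay the chain to trace provenance and detect any after-the-fact alteration.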
Corporate Innovations in AI Governance
Many enterprises are exploring how to turn AI technology itself into a governance tool. Tencent launched “Qinghe Guardian,” which uses reinforcement learning to inject large numbers of training samples into its large model, enabling timely risk screening and building a proactive defense system for social platforms and information feeds. Douyin initiated the “Smart Shield Plan,” leveraging large model technology to assess online-violence risks from a global perspective and shift from passive response to proactive defense. Baidu developed the “Qingliu Jian” product, which relies on the Wenxin large model to implement three lines of defense through multimodal intelligent detection, content traceability, and AI-plus-human collaborative verification, helping the public identify deepfake content and online rumors. The People’s Daily’s “Tianmu” intelligent recognition system is exploring a new model of content risk control that uses AI to govern AI, detecting deepfake content and tracing the synthesis methods behind it.
Building a Healthy Ecological Foundation
The foundation of high-quality professional corpus and industry self-discipline is essential for constructing a healthy ecosystem. Shi Qiming, founder and CEO of Wuhan University of Technology Digital Communication Engineering Co., Ltd., emphasized that high-quality professional corpus will determine the upper limit of model capabilities, becoming a watershed and important variable in the international AI competitive landscape. He suggested leveraging the publishing industry, with its strict review processes, complete texts, and well-established systems, to build a self-controllable, healthy, and orderly supply system for high-quality Chinese AI corpus. At the forum, 52 enterprises related to AI content development collectively signed the “Self-Regulatory Convention for AI-Generated Synthetic Content,” working together to uphold standards and promote collaborative governance.