The AI-Generated "Infringement Minefield" on Live-Streaming Platforms
January 30, 2026
Hotspots · Analysis
Behind the bustle, undercurrents are surging. According to the "2025 Report on the Development of China's AI Live-Streaming Industry", the domestic AI live-streaming market has surpassed 10 billion yuan, with more than a million related accounts. AI is no longer a gimmick but a genuine growth engine. Yet disputes keep erupting: AI impersonation of celebrities for sales, script plagiarism and "content washing" (rehashing others' scripts), and the unauthorized use of virtual scenes and images. As the hubs of this traffic, live-streaming platforms may bear legal liability for indirect infringement due to inadequate oversight, even when they do not create the content themselves.
I. The Legal Logic of Indirect Infringement Liability for Live-Streaming Platforms
The determination of a live-streaming platform's indirect infringement liability turns on two core elements, fault and obligation, which together form a clear logic of legal application. This logic balances the platform's operational realities against rights holders' need for protection and gives clear guidance for assigning liability.
The core attribution principle is "fault plus necessary measures": a platform bears joint and several liability with the directly infringing user only if it knew or should have known that users were using its services to generate and disseminate infringing content, and it failed to take effective measures to stop them. This principle draws the boundary of platform responsibility. It does not require the platform to review all AI-generated content indiscriminately; instead, the presence of fault is the key trigger for liability, which avoids imposing an excessive operational burden on the platform.
On fault determination and the duty of care, the law draws a clear gradient. The core criterion for fault is "knew or should have known"; a platform's fault may not be found simply because it did not proactively review content. But if the platform fails to take reasonable technical measures to guard against infringement risks, fault may be presumed. Moreover, where the platform derives direct economic benefit from AI-generated live content, for example through advertising revenue sharing or paid live-streaming income, its duty of care rises accordingly and the standard for finding fault becomes stricter, reflecting the basic legal principle that rights and obligations should match.
II. Criteria for Determining Platforms' Indirect Infringement Liability in Judicial Practice
The "Measures for the Supervision and Administration of Live E-commerce" and related judicial interpretations have further refined the standards. In judicial practice, courts assess a platform's indirect infringement liability mainly along two dimensions, fault determination and fulfillment of obligations, and have developed concrete rules suited to industry realities.
"Knew or should have known" is the core of fault determination, and judicial practice draws a clear line between the two. A finding of actual knowledge rests mainly on a clear notice from the rights holder: if the rights holder supplies specific links to the infringing content, proof of rights, and other valid materials sufficient for the platform to identify the infringement, and the platform fails to act in a timely manner, fault at the "knew" level can be found directly. "Should have known" is presumed fault, and courts usually weigh three factors:
First, the reach of the infringing content. If an AI live stream obtains heavy traffic recommendations on the strength of obviously infringing elements, and those infringing features are conspicuous enough, the platform may be presumed to have known even without actual notice;
Second, the profit link between the platform and the infringing content. If the platform runs profit-seeking operations on the infringing broadcast, such as steering traffic toward it or placing advertising in it, the likelihood of a fault presumption rises significantly;
Third, the feasibility of technical identification. If the platform possesses the relevant AI-content infringement-detection capability but fails to use it effectively, allowing infringing content to spread, it may likewise be found to have "should have known".
Whether the platform has fulfilled its obligations is judged mainly from two aspects: the timeliness and effectiveness of its measures, and the degree to which it discharged its duty of care. After a rights holder issues a valid infringement notice, the platform must take substantive measures within a period industry practice regards as reasonable, usually 24 to 48 hours. If the platform takes only token action and fails to fully remove the infringing material, or its delay allows the damage to grow, it will be found to have fallen short of its obligations.
Where no notice has been received, the duty of care is measured against the platform's actual circumstances. If the platform profits directly from AI-generated live content or uses algorithms for targeted recommendation, the court will hold it to a higher duty of care and expect it to use technical means proactively to screen for infringement risks. Conversely, if the platform can prove that it adopted infringement-detection technology meeting industry standards and the infringing content was still difficult to detect, it may be found not at fault.
III. Building a Compliance Path for AI-Generated Content in Platforms' Live-Streaming Scenarios
1. Clarifying the division of responsibility among partners is the key to prevention at the source. When a platform does business with MCN agencies, AI technology providers, and other partners, it should add dedicated intellectual property compliance clauses to the cooperation agreement that define each party's rights and duties. The clauses should stipulate that the partner guarantees all elements of the AI-generated live content it supplies, including digital human likenesses, scripts, and scenes, carry complete and lawful authorization and will not infringe any third party's legitimate rights and interests; and that if an infringement dispute arises from content the partner supplied, the partner alone bears all legal responsibility and indemnifies the platform for its economic losses, attorney fees, litigation costs, and other related expenses. Passing compliance obligations upstream by contract both pushes partners to strengthen their own compliance management and preserves a recourse channel for the platform, reducing its exposure to passive liability.
2. Strengthening the review mechanism for AI-generated content is the core of process control. At the technical level, the platform should build a dedicated AI-content compliance detection system, connect it to libraries of licensed material, and apply technologies such as portrait comparison, copyright fingerprinting, and AI-generation detection to pre-review core content, including digital human likenesses, live-streaming scripts, and virtual backgrounds, so that infringing content is intercepted at the source. The detection algorithms and databases should also be updated regularly as AI technology iterates, so that detection capability keeps pace with the industry.
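As a rough illustration of how such a pre-review gate might be wired together, the sketch below chains several detectors and blocks publication on any hit. Everything here is hypothetical: the detector names, the content fields (`face_id`, `script_hash`), and the stub logic stand in for real portrait-comparison, copyright-fingerprint, and AI-generation-detection services.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    passed: bool = True
    reasons: list[str] = field(default_factory=list)

def pre_review(content: dict, detectors) -> ReviewResult:
    """Run every detector over the submitted content; any positive hit
    blocks publication and records which check fired."""
    result = ReviewResult()
    for name, detect in detectors:
        if detect(content):
            result.passed = False
            result.reasons.append(name)
    return result

# Stub detectors; real ones would query a licensed-material library and
# dedicated recognition services rather than hard-coded sets.
DETECTORS = [
    ("portrait_match", lambda c: c.get("face_id") in {"celebrity_001"}),
    ("copyright_fingerprint", lambda c: c.get("script_hash") in {"known_script"}),
    ("ai_generated_unlabeled", lambda c: c.get("ai_generated") and not c.get("labeled")),
]
```

Under these assumptions, a digital-human stream that reuses a protected script would come back with `passed=False` and `"copyright_fingerprint"` among the recorded reasons, and the platform could refuse to publish it.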
At the operational level, the platform needs a sound infringement-complaint mechanism and long-term constraints. It should open a dedicated complaint channel for AI-generated content; specify the materials a rights holder must submit, including proof of rights, screenshots of the infringing content, and the basis for identifying it as AI-generated; streamline the complaint process; and improve response speed. It should also publicly commit to a processing window of no more than 48 hours, promptly report the outcome to the rights holder, and keep complete processing records for verification. In addition, it should maintain a blacklist of infringing entities and apply graded sanctions, such as restricting upload permissions, suspending accounts, and terminating cooperation, to users and partners who repeatedly upload infringing AI-generated content, creating continuous compliance pressure that steers industry players toward lawful content production and dissemination.
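The graded-punishment idea, escalating from restricted uploads to account suspension to terminated cooperation as strikes accumulate, can be sketched as a simple strike ledger. The thresholds and measure names below are assumptions for illustration; the article does not prescribe specific trigger counts.

```python
from collections import Counter

# Assumed escalation ladder mirroring the measures named in the text.
MEASURES = ["restrict_uploads", "suspend_account", "terminate_cooperation"]

class InfringementBlacklist:
    """Tracks repeat AI-content infringers and returns the next sanction."""

    def __init__(self):
        self._strikes = Counter()

    def record(self, entity_id: str) -> str:
        """Log one confirmed infringement and return the sanction to apply;
        repeat offenders climb the ladder, then stay at the top rung."""
        self._strikes[entity_id] += 1
        rung = min(self._strikes[entity_id], len(MEASURES)) - 1
        return MEASURES[rung]

    def strikes(self, entity_id: str) -> int:
        return self._strikes[entity_id]
```

A first confirmed infringement by an MCN account would trigger an upload restriction; a third would end the cooperation, and the strike count is retained as the "processing record for verification" the text calls for.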
The dividends AI brings to the live-streaming industry are real, and so is the infringement risk. Too many platforms neglected compliance, found that what they earned was not enough to cover what they had to pay out in damages, and exited the stage in defeat. In live streaming, compliance is not a cost but a lifeline. The more the industry prospers, the stricter regulation becomes; only by holding the legal red line can a platform last.
Industry · New Policies
Effective January 1, 2026: the Law of the People's Republic of China on the Standard Spoken and Written Chinese Language (revised 2025)
Article 14: In the following situations, the standard national spoken and written language shall be used as the basic language: ... (2) the language used in online literary and artistic programs, online dramas, online films, and other online audiovisual programs, as well as in online publications such as online games. In other words, the revision newly requires online audiovisual programs, online games, and the like to use the standard national language as their basic language.
Effective January 1, 2026: the Law of the People's Republic of China on Public Security Administration Punishments (revised 2025)
Article 35: Whoever commits any of the following acts shall be detained for not less than 5 and not more than 10 days, or fined not less than 1,000 and not more than 3,000 yuan; where the circumstances are serious, detained for not less than 10 and not more than 15 days, with a possible additional fine of up to 5,000 yuan: ... (3) insulting, defaming, or otherwise infringing the names, portraits, reputation, or honor of heroes and martyrs, harming the public interest; (4) desecrating or denying the deeds and spirit of heroes and martyrs, or producing or disseminating articles, pictures, audio, video, or other items that publicize or beautify wars or acts of aggression, disturbing public order. Article 80: Whoever produces, transports, reproduces, sells, or rents obscene books, periodicals, pictures, films, audiovisual products, or other obscene materials, or uses information networks, telephones, or other communication tools to disseminate obscene information, shall be detained for not less than 10 and not more than 15 days and may additionally be fined up to 5,000 yuan; where the circumstances are minor, detained for up to 5 days or fined not less than 1,000 and not more than 3,000 yuan.
Where the obscene materials or information described above involve minors, the punishment shall be heavier. In short, the revision adds penalty clauses covering harm to the image of heroes and martyrs, online rumors, and vulgar content involving minors.
Starting January 1, 2026, the National Radio and Television Administration will run a one-month nationwide special campaign against the spread of "AI-altered" videos.
The campaign focuses on cleaning up videos that apply "AI magic edits" to TV dramas based on the Four Great Classical Novels, historical themes, revolutionary themes, and heroic figures, where the video:
First, seriously distorts the core spirit and characters of the original work, subverting basic understanding and deconstructing shared consensus;
Second, dwells on gore and violence or sensational, vulgar content, promotes mistaken values, and offends public order and good morals;
Third, misappropriates or tampers with Chinese culture in ways that badly dislocate understanding of the real historical time, space, and symbols of Chinese civilization and undermine cultural identity.
The campaign will simultaneously clean up "cult-style" animations adapted from animated characters well known and loved by children and adolescents.
On January 8, 2026, the Network Audiovisual Department of the National Radio and Television Administration issued the "Administrative Guidance (Children's Micro-Dramas)".
It offers the following guidance for creating micro-dramas told from children's perspectives or with children as the central characters:
1. Root stories in children's lives, respect the laws of their growth, and curb the "adultification" tendency. Implement the relevant requirements of the "Regulations on the Management of Minors' Programs", follow the laws and characteristics of children's physical and mental development at each stage, and make sure characters' words and deeds and the plot match the cognition, life experience, and moral understanding of the corresponding age group. Do not, for the sake of dramatic conflict or under cover of time-travel and rebirth premises, deliberately portray cunning, scheming children or promote notions such as fighting evil with evil or scheming for power. Children must not be made to perform adult-oriented plotlines such as "domineering CEO" romances, take part in depictions of campus bullying, or act out content that incites antagonism.
2. Protect rights and interests, stop profiteering from children, and correct the "instrumentalization" tendency. Producers should consciously abide by national laws and regulations; when inviting children to appear in micro-dramas, they must obtain written informed consent from legal guardians as the law requires and effectively safeguard child actors' personal safety, mental health, and right to compulsory education. Do not, under the banner of star-making, engage in "living off the child" commercial hype, push the ideas that fame must come early or that looks are everything, or induce families to pay steep training and packaging fees. Do not arrange for child actors to shoot violent, frightening, or romantically entangled scenes beyond their physical and mental capacity. Resolutely prevent children from being treated as tools to satisfy adults' fantasies of overnight wealth, emotional compensation, or traffic harvesting.
3. Review and control strictly, strengthen value guidance, and resist the "entertainment-above-all" tendency. Taking the healthy growth and all-round development of minors as the starting point, encourage excellent micro-dramas that stay close to real life, combine education with entertainment, cultivate patriotism, foster moral character, broaden knowledge, and vividly present the positive, upward spirit of children in the new era. Strengthen the management of children's micro-dramas, regulate their total volume, improve their quality, and preserve their foundation of childlike innocence and fun. Do not, under the guise of comedy and entertainment, produce vulgar, low-brow content that lacks basic logic and is detached from children's cognition. Do not, in the name of artistic imagination, promote utilitarian notions of growth.