Features
Development of AI Tools on Social Media Platforms and Emerging Policies
Published: February 11, 2026
Cecilia Borgenstam SILKA Stockholm, Sweden Enforcement Committee
Emily Burns Santa Clara University School of Law Santa Clara, California, USA Enforcement Committee
Over the past few years, the rapid deployment of artificial intelligence (AI) has reshaped how digital platforms are used and regulated. Social media networks, creative platforms, and content-sharing services have adopted AI in multiple ways, ranging from enhancing user experiences to policing their ecosystems. Many platforms now provide AI-powered creative tools to users (for example, generative text, image, audio, and video features) which significantly expand the scope of user-generated content but also introduce new risks around intellectual property (IP) and authenticity. Additionally, platforms themselves have integrated AI tools to detect harmful material, identify infringing content, and enforce community guidelines at a larger scale than purely manual moderation allows.
This dual role of AI, as both an enforcement mechanism and a content-creation tool, has prompted platforms to establish policies governing its use. Some platforms explicitly address how AI-generated content should be labeled or disclosed, while others emphasize user responsibility to avoid infringement or impersonation when using AI. An added challenge is ensuring transparency and accountability. Both brands and users increasingly expect clarity on (1) whether their content can be used to train AI systems; (2) what rights apply to AI-generated material; and (3) how platforms manage the rising volume of AI-created content.
What follows is a discussion of the AI-related policies of several major online platforms and/or their specific products, that is, Google Play, Meta (owner of Facebook and Instagram), Midjourney, Snapchat (owned by Snap Inc.), SoundCloud, TikTok, and YouTube (owned by Google).
These platforms lead the way in constructing individualized policies on topics like the following:
- Ownership and permissible uses of user-generated AI content;
- Labeling and other identification of generative AI content on online platforms; and
- Content identification and moderation on IP and community guidelines or policy grounds.
In addition to these individualized AI-related policies, many platforms have joined coalitions to standardize treatment of these subjects through industry-wide initiatives. One such initiative is the Coalition for Content Provenance and Authenticity (C2PA).
The C2PA was formed to create and publish an open-source technical standard that stakeholders can use to identify the provenance and authenticity of digital content, including generative AI content. Called “Content Credentials,” this standard is designed to label content and provide key details such as: the creator or producer of the content, the tools used to create the content, whether AI was used in the creation of the content, and other information that helps users understand the content in context.
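The kinds of details a Content Credentials record carries can be illustrated with a short sketch. This is a deliberate simplification for illustration only: the field names below are hypothetical and do not reproduce the actual C2PA manifest schema, and a real implementation would cryptographically sign the record and embed it in the asset rather than build a plain dictionary.

```python
# Illustrative sketch only: these field names approximate the kinds of
# provenance details Content Credentials convey; they are NOT the exact
# C2PA manifest schema. A conforming implementation would serialize,
# cryptographically sign, and bind this data to the content file.

def build_content_credentials(creator, tool, ai_generated, edits=None):
    """Assemble a simplified provenance record for a piece of content."""
    return {
        "creator": creator,            # who produced the content
        "generator_tool": tool,        # software used to create it
        "ai_generated": ai_generated,  # whether generative AI was involved
        "edit_history": edits or [],   # subsequent modifications, if any
    }

record = build_content_credentials(
    creator="Example Studio",          # hypothetical creator name
    tool="ExampleGen 2.0",             # hypothetical creative tool
    ai_generated=True,
    edits=["cropped", "color-adjusted"],
)
print(record["ai_generated"])  # True
```

In the standard itself, this information travels with the asset, so any platform that reads the embedded credentials can surface the same provenance details to its users.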
These measures, both on an individual platform basis as well as on an industry-wide basis, reflect a growing recognition that AI-generated content is no longer a novelty but a mainstream element of digital ecosystems, requiring thoughtful governance to balance creativity, user safety, and protection of IP.
Using User Content for AI Training Purposes
Meta
Meta has been explicit that it trains its generative AI models on a combination of publicly available data, licensed material, and user interactions with its AI features. According to the company’s Generative AI Privacy page, this includes public posts and comments shared on Facebook and Instagram, but not private messages—unless a user or someone in the conversation chooses to share those messages with Meta’s AI tools. All individuals, whether or not they have a Meta account, can object to the use of their personal information in AI training, with opt-out processes most clearly formalized for EU and UK users under the General Data Protection Regulation (GDPR). Meta frames these measures as part of its broader commitment to transparency, accountability, and the responsible use of AI.
Midjourney
Midjourney’s Terms of Service grant the platform a broad license over both user prompts and the AI-generated outputs they produce. This license permits Midjourney to use inputs and outputs for purposes such as operating and improving its services. It does not offer any opt-out, so agreeing to this license is a condition of using the platform.
Snapchat
Snap Inc.’s Terms of Service specifically identify AI inputs and outputs as data that the user agrees to license to Snap while using its services. Snap further notes that it may use users’ publicly shared content to develop, train, and improve its generative AI products. A Snapchat user may opt out of this by changing sharing permissions in Settings.
TikTok
TikTok’s public guidance on AI centers on labeling and responsible use, not on data-use controls for training. Its AI-generated content policy lays out disclosure rules but does not address whether creators can opt out of model training. Separately, TikTok’s EEA/UK Privacy Policy states it uses information to train, test, and improve its technology—including machine-learning models and algorithms—to operate and develop the platform. TikTok provides no dedicated in-app opt-out mechanism for such training, though users may exercise GDPR rights (for example, object to processing based on legitimate interests). TikTok also participates in C2PA Content Credentials to help label and verify AI-generated media.
YouTube
Rights holders and YouTube creators can choose whether to permit the use of their content for AI data training purposes through opt-in settings in YouTube Studio. By default, this setting is turned off, and YouTube’s Terms of Service prohibit the scraping or unauthorized downloading of YouTube content for this purpose. If rights holders or YouTube creators choose to permit their content to be used for AI training purposes, they can choose to share this data with specific companies, or with all third parties.
Google may collect information from YouTube AI Features (such as prompts, outputs, and content shared with YouTube AI Features) and use this data to improve and develop Google’s AI technologies. YouTube’s AI Features include Dream Screen, Dream Track, Photo-to-Video in YouTube Shorts, Posts, or AI Stickers. Users can delete a prompt, output, or related content from their YouTube AI Feature activity.
Ownership of User-Generated AI Content
Meta
Meta treats AI-generated outputs in much the same way as other user content: users retain rights over what they create, but grant the platform a broad license to host, share, and use the content in connection with its services. Its Generative AI Privacy page notes that it may process user interactions with AI features to improve models, but does not suggest that Meta claims ownership of outputs. Instead, users own what they produce while Meta maintains platform rights typical of most user-generated content.
Midjourney
Midjourney’s Terms of Service state that users own the images and other outputs they create, subject to certain conditions. For instance, companies with more than $1 million in annual revenue must hold a Pro or Mega plan to retain ownership, and the original creator of upscaled images retains ownership of them. At the same time, by using the service, users grant Midjourney a broad license over both prompts and outputs, allowing the platform to reproduce, distribute, and otherwise use this material, including for service improvement.
Snapchat
Snap’s Terms of Service confirm that users retain pre-existing ownership of content that they upload, post, send, receive, and store. The Snap Terms of Service suggest that users own the content that they create with the service, but Snapchat’s Generative AI page is silent on this question.
Snap’s Terms of Service include a broad license for Snap to use the content that users create, submit, or make available to Snap’s services.
TikTok
TikTok’s Terms of Service confirm that users retain ownership of their creations, including AI-generated videos, but grant TikTok a broad, worldwide license to host, distribute, adapt, and otherwise use that content as needed to operate and develop the platform. TikTok’s AI-generated content policy makes clear that AI-generated media is treated as user-submitted content and therefore subject to the same rules and responsibilities as other uploads. TikTok does not distinguish ownership of AI-generated content from other user contributions but emphasizes that the platform may use uploaded material in the ways necessary to provide its services.
YouTube
Without mentioning AI-generated content specifically, YouTube’s Terms of Service confirm that users retain ownership rights over their content. Similar to other platforms, users grant YouTube a license to use their content for the purposes of providing or promoting the YouTube Service and the services of YouTube’s affiliates.
Labeling of AI Content
Meta
Meta has adopted a “label-first” strategy for synthetic content, especially photorealistic images and realistic media. According to its February 2024 announcement and its Help Center guidance, Meta already visibly marks its own AI-generated images and embeds them with metadata for transparency. The platform is now rolling out detection of industry-standard signals (such as C2PA/IPTC metadata) to label images created with other tools. For photorealistic videos and realistic-sounding audio, content creators must self-disclose their use of AI or face possible penalties. Meta emphasizes that this labeling approach is designed to inform users, not to remove content, unless the content violates policy.
Midjourney
Midjourney’s policies do not appear to impose any requirement for users to label AI-generated outputs when they are posted or shared outside the platform.
Snapchat
Several features on Snapchat use generative AI, including My AI, AI Lenses, and AI Snaps. Snapchat informs users that these tools use generative AI in several ways, including using a “sparkle” icon, Context Cards, tool tips, and/or specific disclaimers. If a user creates an image using Snapchat’s generative AI tools and saves or exports this image, Snap marks the image with a “Snap Ghost with sparkles” watermark. It is a violation of Snap’s Terms of Service to remove this watermark.
TikTok
TikTok has introduced one of the most detailed AI labeling frameworks. Its AI-generated content policy requires users to disclose when they upload synthetic or AI-generated media, especially where it contains realistic images, audio, or video. TikTok also applies automatic “AI-generated” labels, including when creators use TikTok AI effects or when content carries Content Credentials metadata from the C2PA. Together, user disclosure and automated labeling improve transparency and reduce risks of impersonation or misinformation.
YouTube
YouTube requires the disclosure of altered or synthetic content, including AI-generated content, when that content appears realistic. This includes content that makes real people appear to say or do something they did not say or do, that alters footage of real events or places, or that generates realistic-looking scenes that did not occur.
A YouTube creator identifies such content by using the Altered Content setting in YouTube Studio. YouTube may also apply these labels itself to reduce the risk of harm to viewers. The platform may penalize creators who fail to label content as this policy requires; penalties may include account termination.
YouTube will automatically label content created using the platform’s generative AI tools Dream Screen and Dream Track to disclose the use of AI. YouTube may also leverage Content Credentials (C2PA) to disclose other details about how the YouTube content was created.
User Obligations Not to Infringe IP or Violate Content Guidelines
Meta
Meta applies its Community Standards and Terms of Service equally to AI-generated and human-created content. It prohibits users from posting material that infringes third-party rights, spreads misinformation, or impersonates others, regardless of whether users made it with AI or not. The introduction of AI tools has not changed these core responsibilities—synthetic content must still comply with the same rules—and Meta may remove or restrict content that violates its policies. This approach emphasizes consistency and fairness across all content types.
Midjourney
Midjourney’s Terms of Service and Community Guidelines prohibit unlawful, harmful, or infringing use of its generative tools. Users must keep content safe for work, avoid prompts or outputs that are abusive, misleading, or that violate rights, and respect others’ creations. Midjourney only permits commercial use of generated assets within the limits of the user’s subscription tier. Violations may lead to suspension or banning from the platform.
Snapchat
Snap’s Terms of Service specifically mention the use of AI features and prohibit the use of the service to violate the content rights of others, to violate the Community Guidelines, to obscure or remove watermarks, or to misrepresent that AI-generated content was made solely by humans.
Snap’s Community Guidelines also specifically state that these policies apply to content created with its AI generation tools.
TikTok
TikTok’s Community Guidelines include a dedicated section on “Edited Media and AI-Generated Content (AIGC).” The Guidelines prohibit users from uploading AI-generated content that could mislead viewers into thinking it depicts real people or events without proper disclosure. In addition, TikTok’s Intellectual Property Policy makes clear that users are responsible for ensuring that their uploads—whether AI-created or otherwise—do not infringe copyrights or other rights. Violations can result in content removal or account-level penalties.
YouTube
YouTube’s Terms of Service prohibit the uploading of content that infringes the IP of others or violates YouTube’s Community Guidelines. Although these policies generally apply to all content uploaded to YouTube and mostly are not AI-specific, YouTube’s Privacy Policy explicitly addresses altered or synthetic content that generates a voice or image of an individual without that individual’s permission.
Content Moderation Practices Relating to AI-generated Content
Meta
Meta has extended its moderation framework to cover AI-generated content, but its approach is disclosure-first rather than automatic removal. It generally allows labeled synthetic media to remain on the platform unless it breaches existing rules. It may remove or restrict content that violates the Community Standards, for example, non-consensual sexual imagery, coordinated inauthentic behavior, or harmful misinformation. This model reflects Meta’s view that much AI-generated content is legitimate or creative, but that transparency and enforcement are necessary when synthetic media risks harm or deception.
Midjourney
Midjourney enforces its Community Guidelines through a mix of automated filters, moderator review, and community reporting. It blocks certain prompts and outputs automatically, and users can flag content for investigation. The platform may suspend or ban accounts that repeatedly breach the rules. While the moderation system is less formalized than those of larger social platforms, it is intended to prevent AI generation from being used for abuse, deception, or illegal activity.
Snapchat
Snap’s Terms of Service indicate that Snap has put safeguards in place for AI-generated output. Its Staying Safe with AI page provides more detail on how its My AI chatbot integrates the same safeguards and tools in place across Snapchat. Snap has also trained My AI to avoid amplifying harmful or inaccurate information and fine-tuned it to reduce biases in language and to prioritize factual information.
Snapchat uses a combination of automated and human review to moderate content on its public surfaces. On some feeds, like Spotlight, where user-generated content may be amplified, Snapchat uses an initial automated review, followed by a human review, as the content gets more viewership. Snap also uses a combination of automated and human review to determine whether content is suitable for recommendation to teens.
TikTok
TikTok sets out detailed enforcement mechanisms for AI-generated media in its AI-generated content policy, which treats undisclosed realistic AI content as misleading and subject to removal and prohibits uses such as impersonation, crisis misinformation, or depictions of minors, all of which may trigger stricter action. TikTok combines user reporting, automated detection, and Content Credentials metadata to identify and moderate AI-generated content at scale. As its Terms of Service and Privacy Policy confirm, moderation involves both automated systems and human review. Repeat or severe violations can result in suspension or account termination.
YouTube
All content on YouTube, whether AI-generated or not, is subject to its content moderation practices, including specific practices for violations of Community Guidelines, copyright infringement, and failure to disclose altered or synthetic content.
YouTube uses a combination of human review and AI detection to review and remove content in violation of its Community Guidelines.
Reporting of Violative AI-Generated Content
Meta
Meta offers multiple channels for reporting content, which apply equally to AI-generated and conventional material. Rights holders can file takedown requests through the Intellectual Property Reporting Center, while anyone can report impersonation or fake-account concerns via the impersonation help page. Users can also flag harmful or misleading synthetic media under the Community Standards. These pathways give brands and individuals several options to act when AI-generated content infringes rights or undermines trust.
Midjourney
Midjourney provides reporting channels for both IP and community-related violations. Copyright and trademark complaints can be submitted under its takedown policy, while harmful prompts or outputs can be flagged through the reporting tools described in its Community Guidelines. The platform may remove reported content and suspend or ban accounts in line with its enforcement practices.
Snapchat
Snapchat encourages users to report violations of its Community Guidelines using in-app reporting features. Users can use dedicated web forms for copyright infringement, trademark infringement, and counterfeit goods to report IP violations. Snapchat does not appear to have separate reporting channels specifically for reporting AI-generated content.
TikTok
TikTok has multiple reporting channels tailored to different risks. Copyright holders can submit complaints through the Copyright Report Form, while impersonation and fake accounts can be reported using TikTok’s impersonation reporting process. In addition, users can report AI-generated media that violates TikTok’s AI-generated content policy, including synthetic content that is misleading or harmful. These overlapping routes reflect TikTok’s recognition that AI-generated content can infringe rights or erode trust in different ways.
YouTube
YouTube offers multiple pathways for reporting violative content, including flagging videos that violate YouTube’s Community Guidelines, reporting videos that infringe trademark rights or promote the sale of counterfeit goods, and identifying videos that may be subject to copyright restrictions. The platform encourages individuals whose voice or likeness has been generated in an altered or synthetic way to use YouTube’s privacy reporting form to request removal of this content.
Up to Date
The policies, platform practices, and URLs this article refers to reflect the information available as of the date of publication. AI technologies and related platform guidelines continue to evolve rapidly, so readers should confirm the most up-to-date details with the platforms concerned.
Although every effort has been made to verify the accuracy of this article, readers are urged to check independently on matters of specific concern or interest. The opinions expressed in this feature are those of the authors and do not purport to reflect the views of INTA or its members.
© 2026 International Trademark Association