When you come to Twitter to see what’s happening in the world, we want you to have context about the content you’re seeing and engaging with. Deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation.
That’s why we recently announced our plan to seek public input on a new rule to address synthetic and manipulated media. We’ve called for public feedback previously because we want to ensure that — as an open service — our rules reflect the voice of the people who use Twitter. We think it’s critical to consider global perspectives, as well as make our content moderation decisions easier to understand.
What is synthetic and manipulated media?
The Twitter Rules, the service, and its features are always evolving, based on new behavior we see online. We routinely consult with experts and researchers to help us understand new issues like synthetic and manipulated media. Based on these conversations, we propose defining synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning. These are sometimes referred to as deepfakes or shallowfakes.
Here’s a draft of what we’ll do when we see synthetic and manipulated media that purposely tries to mislead or confuse people:
Twitter may:
- place a notice next to Tweets that share synthetic or manipulated media;
- warn people before they share or like Tweets with synthetic or manipulated media; or
- add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
In addition, if a Tweet including synthetic or manipulated media is misleading and could threaten someone's physical safety or lead to other serious harm, we may remove it.
We want to hear from you
You’ll find a brief survey here, available in English, Hindi, Arabic, Spanish, Portuguese, and Japanese. For other languages, our team is working closely with local non-governmental organizations and policymakers to ensure those perspectives are included.
If you prefer to Tweet your feedback, we'll be listening there, too. Use the hashtag #TwitterPolicyFeedback.
Additionally, if you’d like to partner with us to develop solutions to detect synthetic and manipulated media, fill out this form.
The feedback period will close on Wednesday, Nov. 27 at 11:59 p.m. GMT. At that point, we’ll review the input we’ve received, make adjustments, and begin the process of incorporating the policy into the Twitter Rules and training our enforcement teams on how to handle this content. We will make another announcement at least 30 days before the policy goes into effect.
We’re committed to serving the public conversation on Twitter and doing our work in an open and transparent manner. Thank you for taking the time to be part of this process — we look forward to hearing what you think.