Our continued collaboration with trusted partners

By @TwitterSafety
Friday, 17 December 2021

As we work to improve the health of the public conversation, we’re committed to reaching beyond Twitter’s virtual walls to integrate diverse perspectives that make our service better for everyone. That’s why we regularly collaborate with trusted partners on our Trust and Safety Council to develop products and programs, and to improve the Twitter Rules. 

Today, we’re sharing a recap of some of the work we’ve accomplished hand-in-hand with these trusted partners, as well as more about our ongoing commitment to incorporating the expertise of global experts, researchers, and developers to support healthy public conversation on Twitter.

Incorporating feedback to make Twitter safer

We know the best version of Twitter is the one built by the people who use it. Over the past year, we engaged the Trust and Safety Council on thirteen projects early in the development process and distilled their feedback on ways to offer a better, safer experience for people using Twitter. That feedback directly informed our approach on several products:

  • Communities: Responding to feedback that we should manage expectations about the role moderators play, we limited the number of responsibilities moderators take on and built tools to help them manage potential harassment.
  • Tips: Responding to feedback that we should make clear, in a user-friendly way, that people on our service are responsible for their own transactions, we ask people to agree to terms of service when enabling the feature.
  • Safety Mode: To mitigate the risk of limiting counter-speech, a concern the council raised particularly around people in positions of power, we decided that interventions automatically expire after seven days.
  • Conversation settings: On a direct recommendation from the council, we started testing a notification that reminds people they can change who can reply to their Tweets, to increase awareness and adoption.

Twitter has become an extremely important communication tool in India, and it is encouraging to see Twitter take active interest and respond to the feedback given by council members. We’re happy to be part of a team of trusted partners which gets heard and is able to make Twitter a safer platform for all, especially women.

Centre for Social Research

@CSR_India

As a founding member of the Trust and Safety Council, we’ve worked alongside Twitter to help influence positive change for over a decade. We believe that everyone should benefit from technology free from abuse and harassment and bring this perspective to all council meetings. We look forward to continued collaboration with Twitter to ensure that matters raised by our organization and the people we support are integrated into Twitter’s products and policies.

SWGfL

UK Safer Internet Centre partner

@SWGfL_Official

Transparency is core to Twitter’s approach. Through initiatives such as our open developer platform, our information operations archive, and our disclosures in the Twitter Transparency Center and Lumen, we continue to support third-party research into what’s happening on Twitter. We’ll continue to build on these efforts and keep the public informed as we improve Twitter in the open. The following are highlights from the past year:

  • Twitter API for Academic Research: In early 2021, we launched a dedicated Academic Research product track on the new Twitter API, giving qualified researchers free access to the entire history of the public conversation and elevated access to real-time data (see the query sketch after this list).
  • Algorithmic bias bounty challenge: When we introduced our commitment to responsible machine learning, we also said, “the journey to responsible, responsive, and community-driven machine learning systems is a collaborative one.” That’s why we introduced the industry’s first algorithmic bias bounty competition to draw on the global ethical AI community’s knowledge of the unintended harms of saliency algorithms to expand our own understanding and to reward the people doing work in this field.
  • Twitter Moderation Research Consortium (TMRC): We announced the creation of a new global expert group of academics, members of civil society and NGOs, and journalists to study platform governance issues. We look forward to deeper analysis from this range of global experts and expect the collaboration to result in expanded disclosures beyond information operations, including shared data in areas like misinformation, coordinated harmful activity, and safety.
  • Launch of an API curriculum: “Getting started with the Twitter API for Academic Research” is now used at universities, enabling students and teachers to learn how to work with Twitter data for academic research. It is currently starred by over 200 academics on GitHub.
  • Creation of a Developer Platform Academic Research advisory board: This group of 12 scholars began work with our team this year to better understand how we can enhance the use of the Twitter API for academic research, while increasing meaningful dialogue between the Twitter Academic program and the academic community.
  • Developer research highlights: We published and continued to spotlight key research areas Twitter teams are working on today in an effort to inspire even more researchers to pursue these topics.
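
To make the Academic Research track concrete, here is a minimal sketch of a paginated query against the v2 full-archive search endpoint (/2/tweets/search/all) that this track unlocks. The query string, date range, and TWITTER_BEARER_TOKEN environment variable are illustrative assumptions; a bearer token from an approved academic developer account is required.

```python
# A minimal sketch of a paginated full-archive search on the Academic
# Research track of the Twitter API v2. Assumes the `requests` library
# and a bearer token from an approved academic developer account.
import os
import time

import requests

SEARCH_ALL_URL = "https://api.twitter.com/2/tweets/search/all"
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # never hard-code credentials


def search_full_archive(query, start_time, max_results=100):
    """Yield Tweets matching `query` from the full public archive,
    following `next_token` pagination until results are exhausted."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {
        "query": query,
        "start_time": start_time,    # ISO 8601, e.g. "2021-01-01T00:00:00Z"
        "max_results": max_results,  # 10-500 per page on this endpoint
        "tweet.fields": "created_at,public_metrics,lang",
    }
    while True:
        response = requests.get(SEARCH_ALL_URL, headers=headers, params=params)
        response.raise_for_status()
        payload = response.json()
        yield from payload.get("data", [])
        next_token = payload.get("meta", {}).get("next_token")
        if next_token is None:
            break
        params["next_token"] = next_token
        time.sleep(1.1)  # the full-archive endpoint allows roughly 1 request/second


if __name__ == "__main__":
    # Illustrative query: English-language Tweets on a topic, excluding
    # Retweets, from the start of 2021 onward.
    for tweet in search_full_archive(
        "(misinformation OR disinformation) lang:en -is:retweet",
        start_time="2021-01-01T00:00:00Z",
    ):
        print(tweet["created_at"], tweet["id"])
```

Streaming results through a generator keeps memory use flat even for queries that match millions of Tweets, which matters when the entire public archive is in scope.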

What’s next? Increasing transparency and understanding of our approach to content moderation

As we continue to invite trusted partners and the public to share feedback on ways to make Twitter safer, it’s important to be transparent about how we develop and enforce the Twitter Rules. Our newly formed Content Governance Initiative (CGI) aims to do this by developing a governance framework that provides a consistent and principled approach to the development, enforcement, and assessment of our global rules and policies. To build this framework, we’re engaging external stakeholders and have created an additional advisory group on our Trust and Safety Council. We’ll continue collaborating with this group and cross-functional teams across Twitter to establish standardized guidelines on policy development, enforcement, and appeals that help drive a common understanding of Twitter’s approach to content moderation. The framework’s principles and guidelines will aim to fulfill the following objectives:

  1. Build legitimacy and trust through transparency and accountability.
  2. Deepen our commitment to good governance and human rights.
  3. Provide additional clarity on Twitter’s content moderation processes.
  4. Affirm our commitment to serving a diverse and inclusive global community.

We recognize that achieving these objectives will not be easy. Content moderation at scale is a highly complex and challenging process. This initiative reflects our ongoing commitment to working systematically — in partnership with external stakeholders around the world — to improve the transparency and consistency of our content moderation processes.

Here’s to another year of building together 

We want to acknowledge the members of the Trust and Safety Council, research partners, civil society representatives, and you, the people using Twitter, for continuing to hold us accountable. You challenge us, offer different perspectives, and support us in our mission to safeguard the public conversation. We’re looking forward to this next chapter and can’t wait to see what we build together. 
