
Overview

In today's digital age, tackling online abuse has emerged as a critical issue that requires the implementation of effective legislation. Testimonies from all sports actors, in particular players and referees, provide valuable insights into the nature of this phenomenon and its profound impact on both their personal and professional lives, as well as those of their families. It is therefore crucial to raise awareness of the detrimental effects of online abuse on the mental well-being and performance of victims. In response to this growing concern, many companies have developed software to monitor, moderate, and combat online abuse, and to protect social channels from hate speech, harassment, and other unlawful content.

Their cutting-edge solutions play a vital role in safeguarding the online experience of sportspeople, fostering a safer and more inclusive digital environment for all users.

On this page you will find examples of the experiences of some relevant companies, alongside a compilation of best practices to combat online abuse.

Private sector practices

Companies combating online abuse

Trustlab

Detect: collect harmful content and misinformation across any online platform; Analyse: classify and assess online data against a wide selection of key threat vectors; Investigate: uncover trends, online narratives, and the discoverability of harmful content.

Trollrensics

Its slogan is "Uncovering disinformation campaigns on social media". The company develops software for tracking disinformation and hybrid warfare campaigns on social media platforms such as X (Twitter), TikTok and YouTube.

Signify Group

An ethical data science company. Its Threat Matrix service delivers AI-powered, real-time monitoring (at scale), identification and investigation of online abuse and threats targeting sporting participants.

Bodyguard.ai

Content moderation and audience analysis: moderation of toxic content and analysis of user-generated content using artificial intelligence and natural language processing (NLP) technology.

Arwen.ai

Arwen's sophisticated AI-driven platform automatically detects and removes toxicity, abuse and spam on social media channels in 30 languages, handling emojis and misspellings.

Best practices

Efficient reporting system for online hate speech in French professional football

 

Country: France

Organisation responsible: French Professional Football League (LFP)

Main topic addressed: Online hate

Type of resource/practice: Reporting system


Approach: Monitoring

Target group(s): French professional clubs (Ligue 1 & Ligue 2)

Timing: Ongoing

Language: French

Brief description of the practice: In partnership with the social networking platforms Facebook and Twitter, the French Professional Football League (LFP) set up working groups and dedicated tools to strengthen and improve the fight against online hate in French football.

Context and objectives: This initiative was launched in February 2021, during the COVID-19 pandemic. At that time, several actors of the game, as well as their relatives, were the target of online hate speech on social networks. More broadly, the LFP notes that every professional football match is an opportunity for haters to spread their hatred.

The initiative was therefore set up in response to a notable increase in online hate speech targeting football actors in France.

One of the main objectives of the initiative is to provide clubs with a solution to address any hateful content targeting them on public social media.

Resources required: Agreements with social media platforms and partnerships with online protection services.

Evaluation and results: 30,000 hate messages moderated out of 545,000 during the 2021-2022 season.

Over this two-year partnership (2021-2023), Bodyguard.ai moderated more than 1.7 million comments for the LFP and identified 71,000 of them (4.1%) as hate messages.

Follow-up ideas and future plans: The initial work on dedicated tools to counter online hate towards clubs on social networks was followed by an enhanced initiative to tackle the issue.

The LFP partnered with the French tech company Bodyguard.ai, which provides online protection services powered by artificial intelligence. Bodyguard.ai detects and moderates hateful content on the LFP's social media channels. This partnership is intended to enable a faster, more effective response to online hate on the nights of French professional football matches.

Further information: 

La LFP dénonce un climat de haine sur les réseaux sociaux

La LFP et Bodyguard.ai renouvellent leur partenariat
