Online Hate and Discrimination in the Age of AI
This project explores the impact of online hate speech and discrimination, focusing on transnational perspectives and the influence of AI-driven algorithms on social platforms.
The rapid evolution of AI-driven technologies has transformed the digital landscape, amplifying the prevalence and reach of online hate and discrimination. From antisemitism and anti-LGBTQI+ rhetoric to misogyny and racism, harmful content now crosses borders easily, affecting communities worldwide in profound and complex ways. As these technologies advance, they make harmful content easier to produce and disseminate, intensifying its effects online.
In response to these challenges, this project examines trends in online hate speech and explores the role of AI policy and technological solutions in mitigating emerging risks and improving digital safety.
Aims and objectives
This initiative is designed to deepen understanding of the relationships among online hate, digital platforms, and the rapid evolution of artificial intelligence. The project will map how harmful content emerges and spreads internationally under the influence of technological change. Its primary objective is to evaluate the risks and opportunities that emerging technologies present in digital environments and to understand how technological innovation can be used to reduce the impact of online hate speech and improve digital safety. The project will also identify the regulatory frameworks and preventive measures best suited to the contemporary digital landscape.
Through a series of expert-led discussions and other outputs, the project will assess existing regulatory and technological interventions, identify gaps in current approaches, and propose forward-looking strategies for enhancing digital safety.
The initiative aims to connect research, policy, and the private and technology sectors to inform evidence-based responses and contribute to the development of more effective regulation. The findings and discussions will support the broader goal of fostering a safer, more inclusive online environment while addressing the complex challenges posed by online hate speech and its relationship with advances in artificial intelligence and governance.
Project outputs
The project will commence with a closed-door roundtable in Berlin, organised in collaboration with ISD Germany, convening key stakeholders from the public, private, and technology sectors. This gathering is designed to facilitate robust knowledge-sharing on emerging risks associated with AI-amplified online hate speech and proactive engagement on mitigation strategies.
This will be followed by region-specific online workshops focusing on the unique challenges and dynamics of AI-amplified online hate speech within the African and Latin American contexts. These workshops will engage experts and practitioners across the respective regions to address the specific manifestations of online hate and the role of AI in exacerbating these issues. Special attention will be given to strategies that resonate with the socio-political and cultural realities of the regions in question. The workshops will also explore effective approaches to monitoring, regulating, and countering hate speech in a way that upholds democratic values and social harmony.
The event series will be accompanied by a conference report capturing the key discussions, insights, and recommendations. This report will serve as a valuable resource for stakeholders unable to attend the sessions and will contribute to the broader knowledge base on combating online hate speech.