Towards the Development of an Online Violence Alert and Response System
Gender-based online violence against women journalists is one of the biggest contemporary threats to press freedom globally. This talk describes a dashboard we are developing for monitoring and exploring relevant social media data, together with findings from recently published big data case studies investigating online violence targeted at emblematic women journalists from around the world. To conduct this large-scale analysis of online abuse, we have developed NLP tools that identify and characterise abuse on Twitter directed at specific individuals, with the ultimate aim of building an "early warning system" that helps predict the escalation of online abuse into offline harm and violence, based on indicators from the analysis. The dashboard provides a rich understanding of abuse towards one or more journalists, enables comparisons between journalists over time, and surfaces indicators of factors such as coordinated abusive behaviour, gaslighting, and potential escalation to offline harm. Finally, we present a set of indicators we have developed that signal potential escalation of abuse, along with guidelines for monitoring violence against women journalists.
Dr Diana Maynard is a Senior Research Fellow in the Computer Science department at the University of Sheffield, UK. She has a PhD in Natural Language Processing (NLP) and more than 30 years of experience in the field. Since 2000 she has been one of the key developers of the GATE NLP toolkit, leading work on Sheffield’s open-source multilingual text analysis tools. Her main research interests are in practical, multidisciplinary approaches to text and social media analysis, across a wide range of fields including cultural heritage, human rights, law, journalism, sustainability and the environment, geography, politics, and natural disasters. She is currently working on various projects on the detection and analysis of online hate speech, including methods for removing bias in Machine Learning and for early warning detection of abuse escalation.