Full Professor (Profesor Catedrático)
Technical University of Valencia, Spain
Paolo Rosso is a full professor at the Universitat Politècnica de València, Spain, where he is also a member of the PRHLT research center. His research interests focus mainly on author profiling, irony detection, fake review detection, plagiarism detection, and, more recently, hate speech and fake news detection. Since 2009 he has been involved in the organisation of PAN benchmark activities at the CLEF and FIRE evaluation forums, mainly on plagiarism / text reuse detection and author profiling. At SemEval he has co-organised shared tasks on sentiment analysis of figurative language in Twitter (2015) and on multilingual detection of hate speech against immigrants and women in Twitter (2019). He coordinates the activities of the FIRE and IberEval evaluation forums. He has been PI of national and international research projects funded by the EC and the U.S. Army Research Office. At the moment, in collaboration with Carnegie Mellon University, he is involved in a project funded by the Qatar National Research Fund on author profiling for cyber-security. He serves as deputy steering committee chair for the CLEF conference and as associate editor for the Information Processing & Management journal. He has been chair of *SEM-2015, and organisation chair of CERI-2012, CLEF-2013 and EACL-2017. He is the author of 400+ papers, published in journals, book chapters, and conference and workshop proceedings.
Talk Title: I HATE you, BELIEVE me.
Social media have become the default channel for people to access information and express their opinions. Unfortunately, this democratization of knowledge also has undesired effects. One harmful effect is that the relative anonymity of social media facilitates the propagation of toxic, hateful and exclusionary messages. Paradoxically, social media contribute to the polarization of society, as we have recently witnessed in events such as the last presidential elections in the US, and the Brexit and Catalan referendums. Moreover, social media foster information bubbles and echo chambers, and every user may end up receiving only information that matches her personal biases and beliefs. A perverse effect is that social media are a breeding ground for the propagation of fake news: when a piece of news matches our beliefs or outrages us, we tend to share it without checking its veracity. In this talk I will address the two problems above, describing recently organised shared tasks and some approaches to detecting hate speech and fake news.
Natural Language Processing and Information Retrieval in Social Media