
Detection of abusive messages in an on-line community

Abstract: Moderating user content in online communities is still mainly performed manually, so reducing the workload through automatic methods is of great interest. Industry mostly relies on basic approaches such as bad-word filtering. In this article, we consider the task of automatically determining whether a message is abusive. This task is difficult because messages are written in non-standardized natural language. We propose an original automatic moderation method, applied to French, that combines traditional tools with a newly proposed context-based feature relying on the modeling of user behavior when reacting to a message. The results of this preliminary study show the potential of the proposed method, both for fully automatic processing and for decision support.
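As an illustration of the bad-word filtering baseline the abstract contrasts with, here is a minimal sketch; the lexicon and tokenization are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a naive bad-word filtering baseline.
# The lexicon below is a hypothetical example, not the paper's resource.
BAD_WORDS = {"idiot", "stupid"}

def is_abusive(message: str) -> bool:
    """Flag a message if any token matches the bad-word lexicon."""
    tokens = (tok.strip(".,!?") for tok in message.lower().split())
    return any(tok in BAD_WORDS for tok in tokens)

print(is_abusive("You are an idiot!"))  # True
print(is_abusive("Nice play, well done"))  # False
```

Such a filter misses obfuscated spellings and context-dependent abuse, which motivates the context-based feature proposed in the paper.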
Document type: Conference papers
Contributor: Etienne Papegnies
Submitted on: Monday, April 10, 2017 - 6:01:40 PM
Last modification on: Monday, March 21, 2022 - 5:34:44 PM
Long-term archiving on: Tuesday, July 11, 2017 - 2:18:02 PM


Files produced by the author(s)


Distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.




Etienne Papegnies, Vincent Labatut, Richard Dufour, Georges Linares. Detection of abusive messages in an on-line community. 14ème Conférence en Recherche d'Information et Applications (CORIA), Mar 2017, Marseille, France. pp.153-168, ⟨10.24348/coria.2017.16⟩. ⟨hal-01505017⟩


