Detection of abusive messages in an on-line community

Abstract: Moderating user content in online communities is mainly performed manually, and reducing this workload through automatic methods is of great interest. Industry mostly relies on basic approaches such as bad-word filtering. In this article, we consider the task of automatically determining whether a message is abusive or not. This task is complex because messages are written in non-standardized natural language. We propose an original automatic moderation method for French, based both on traditional tools and on a newly proposed context-based feature that models user behavior when reacting to a message. The results obtained in this preliminary study show the potential of the proposed method, whether for fully automatic processing or as decision support.
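As a point of reference for the "bad-word filtering" baseline the abstract contrasts with, here is a minimal sketch of such a filter. The word list and function name are illustrative assumptions, not resources from the paper; the paper's actual method adds traditional NLP features and a context-based user-behavior feature on top of this kind of baseline.

```python
import re

# Hypothetical, illustrative blacklist entries (not the paper's resource).
BLACKLIST = {"idiot", "imbecile"}

def is_flagged(message: str) -> bool:
    """Return True if any blacklisted word appears as a whole token."""
    tokens = re.findall(r"\w+", message.lower())
    return any(tok in BLACKLIST for tok in tokens)

print(is_flagged("Quel idiot !"))    # flagged: contains a blacklisted token
print(is_flagged("Bonjour à tous"))  # clean: no blacklisted token
```

Such keyword filters are brittle (obfuscated spellings, sarcasm, and context-dependent abuse all slip through), which is precisely the limitation motivating the learned, context-aware approach described in the abstract.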
Document type: Conference papers

Cited literature: 13 references

https://hal-univ-avignon.archives-ouvertes.fr/hal-01505017
Contributor: Etienne Papegnies
Submitted on: Monday, April 10, 2017 - 6:01:40 PM
Last modification on: Sunday, July 7, 2019 - 5:52:02 PM
Long-term archiving on: Tuesday, July 11, 2017 - 2:18:02 PM

Files: main.pdf (produced by the author(s))

Licence: Distributed under a Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International License


Citation

Etienne Papegnies, Vincent Labatut, Richard Dufour, Georges Linarès. Detection of abusive messages in an on-line community. 14ème Conférence en Recherche d'Information et Applications (CORIA), Mar 2017, Marseille, France. pp.153-168, ⟨10.24348/coria.2017.16⟩. ⟨hal-01505017⟩

Metrics: 250 record views, 345 file downloads