Clever domain adaptation strategies for BERT in the task of hostile-language detection
Cyberbullying has surged in recent years, driven largely by the widespread adoption of social media platforms. This trend manifests in multiple ways, with hostile language being one of the most common, underscoring the urgent need for robust detection methods. To address this problem, we propose a novel pipeline to enhance hostile-language detection in social media. Our approach combines two ideas. First, we conduct a Domain Adaptation procedure to specialize the knowledge of a pre-trained BERT for the social media domain; for this adaptation, we modify the traditional random Masked Language Modeling technique and propose three novel strategies for cleverly selecting the subset of tokens to mask. Second, we tailor an Adversarial Regularizer when fine-tuning the adapted BERT on specific hostile-language datasets. We evaluate our method on the detection of hate speech, aggressiveness, offensiveness, and sexism. Our results show that the Domain Adaptation procedure significantly outperforms vanilla BERT and that the Adversarial Regularizer leads to more robust fine-tuning, thereby enhancing performance. Moreover, we demonstrate that the two methods can be combined to achieve an even greater performance boost.
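The following is a minimal illustrative sketch, not the paper's actual method: the three masking-selection strategies proposed in the article are not reproduced here. It only shows how a non-random selection rule (here, a hypothetical frequency-based heuristic that prefers rare, domain-specific tokens) could replace uniform-random masking when continuing Masked Language Modeling on an in-domain corpus. The function name `selective_mask` and the toy corpus are assumptions for illustration.

```python
# Sketch of selective MLM masking for domain adaptation (assumed heuristic, not the paper's strategies).
from collections import Counter

import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def selective_mask(input_ids, token_freq, mask_prob=0.15):
    """Mask the rarest (most domain-specific) tokens instead of a uniformly random subset."""
    labels = input_ids.clone()
    special = set(tokenizer.all_special_ids)
    # Rank maskable positions by in-domain frequency: rarer tokens are masked first.
    candidates = [i for i, t in enumerate(input_ids.tolist()) if t not in special]
    candidates.sort(key=lambda i: token_freq.get(input_ids[i].item(), 0))
    n_mask = max(1, int(mask_prob * len(candidates)))
    masked_positions = candidates[:n_mask]
    keep = torch.ones_like(input_ids, dtype=torch.bool)
    keep[masked_positions] = False
    labels[keep] = -100                       # only masked positions contribute to the MLM loss
    masked_ids = input_ids.clone()
    masked_ids[masked_positions] = tokenizer.mask_token_id
    return masked_ids, labels

# Toy in-domain "corpus" used only to derive token frequencies.
corpus = ["example tweet with slang", "another short social media post"]
token_freq = Counter(t for s in corpus for t in tokenizer(s)["input_ids"])

enc = tokenizer("hostile example post goes here", return_tensors="pt")
masked_ids, labels = selective_mask(enc["input_ids"][0], token_freq)
out = model(input_ids=masked_ids.unsqueeze(0),
            attention_mask=enc["attention_mask"],
            labels=labels.unsqueeze(0))
out.loss.backward()  # an optimizer step on this loss would perform one adaptation update
```

Likewise, the Adversarial Regularizer used during fine-tuning is not specified here; a common instance of this idea is an FGM-style step that perturbs the word-embedding matrix along its gradient and adds the loss on the perturbed inputs. The sketch below assumes a classification model (e.g., `BertForSequenceClassification`) and a `batch` containing `input_ids`, `attention_mask`, and `labels`.

```python
def fgm_training_step(model, batch, optimizer, epsilon=1e-3):
    """One fine-tuning step with an FGM-style adversarial term added to the task loss (illustrative)."""
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()                                # gradients also reach the embedding matrix
    emb = model.get_input_embeddings().weight
    grad_norm = emb.grad.norm()
    if grad_norm > 0 and not torch.isnan(grad_norm):
        delta = epsilon * emb.grad / grad_norm     # worst-case perturbation direction
        emb.data.add_(delta)                       # apply perturbation
        model(**batch).loss.backward()             # accumulate adversarial gradients
        emb.data.sub_(delta)                       # restore original embeddings
    optimizer.step()
```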
Keywords: Hostile language, Domain adaptation, Social media analysis, Text classification
Publication type: Article
April 1, 2026
/research/publications/clever-domain-adaptation-strategies-for-bert-in-the-task-of-hostile-language-detection
Emilio Villa-Cueva, Mario Ezra Aragón, Adrián Pastor López-Monroy, Fernando Sánchez-Vega. DOI: 10.1007/s11042-026-21521-1