Building Job Seekers’ Profiles: Can LLMs Level the Playing Field?

Research output: Contribution to journal › Conference article › peer-review


Abstract

This study investigates the impact of language complexity on the performance of an NLP-based recommender system that assists job seekers in adding relevant occupation labels and skills to their profiles. The system, deployed by Job Market Finland (JMF), was evaluated to determine whether it biases its recommendations towards more complex language inputs, potentially disadvantaging users who employ simpler language. Additionally, the study explores the effectiveness of using large language models (LLMs) to enhance simpler descriptions and mitigate potential biases. Using a stratified sample of occupations and crafting varied descriptions (original, simple, complex, and LLM-improved), we analyzed the system’s recommendations against a ground truth. Results indicate that the system favors more complex language, which improves occupation label suggestions but not skill recommendations. This bias is not mitigated by the use of an LLM, suggesting potential unintended consequences for users who employ simpler language and highlighting the opacity involved in optimizing such systems.

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3908
Publication status: Published - 2024
Event: 3rd European Workshop on Algorithmic Fairness, EWAF 2024 - Mainz, Germany
Duration: 1 Jul 2024 – 3 Jul 2024

Keywords

  • Algorithmic bias
  • Human-machine interaction
  • Job matching
  • Large language models
  • Natural language processing
