Abstract:
Large language models (LLMs) can generate personas based on prompts that describe the target user group. To understand what kind of personas LLMs generate, we investigate the diversity and bias in 450 LLM-generated personas with the help of internal evaluators (n=4) and subject-matter experts (SMEs) (n=5). The research findings reveal biases in LLM-generated personas, particularly in age, occupation, and pain points, as well as a strong bias towards personas from the United States. Human evaluations demonstrate that LLM persona descriptions were informative, believable, positive, relatable, and not stereotyped. The SMEs rated the personas slightly more stereotypical, less positive, and less relatable than the internal evaluators. The findings suggest that LLMs can generate consistent personas perceived as believable, relatable, and informative while containing relatively low amounts of stereotyping.
Source: PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024
Year: 2024