Evaluation of Code Generation for Simulating Participant Behavior in Experience Sampling Method by Iterative In-Context Learning of a Large Language Model

Alireza Khanshan (Corresponding author), Pieter Van Gorp (Corresponding author), Panos Markopoulos (Corresponding author)

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)
42 Downloads (Pure)

Abstract

The Experience Sampling Method (ESM) is commonly used to understand behaviors, thoughts, and feelings in the wild by collecting self-reports. Sustaining sufficient response rates, especially in long-running studies, remains challenging. To avoid low response rates and dropouts, experimenters rely on their experience, methodologies proposed in earlier studies, trial and error, or the scarce participant behavior data available from previous ESM protocols. This approach often fails to find acceptable study parameters, forcing experimenters to redesign the protocol and repeat the experiment. Research has shown the potential of machine learning to personalize ESM protocols so that ESM prompts are delivered at opportune moments, leading to higher response rates. The corresponding training process is hindered by the scarcity of open data in the ESM domain, causing a cold start, which could be mitigated by simulating participant behavior. Such simulations provide training data as well as insights that help experimenters revise their study design choices. Creating such a simulation requires expertise in behavioral science, psychology, and programming. Large language models (LLMs) have emerged as facilitators for information inquiry and programming, albeit stochastic and occasionally unreliable ones. We aspired to assess the readiness of LLMs for an ESM use case. We conducted research using GPT-3.5-turbo-16k to tackle an ESM simulation problem. We explored several prompt design alternatives to generate ESM simulation programs, evaluated the output code in terms of semantics and syntax, and interviewed ESM practitioners. We found that engineering LLM-enabled ESM simulations has the potential to facilitate data generation, but it perpetuates trust and reliability challenges.
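The paper's actual pipeline is not reproduced here, but as a minimal sketch of the kind of iterative in-context learning the abstract describes, assuming the OpenAI Python SDK, an illustrative task prompt, and a hypothetical retry budget, a code-generation loop could look like the following. The function name, prompt wording, and syntax-only check are assumptions for illustration, not the authors' method.

    import ast
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    TASK = (
        "Write a complete Python program that simulates participants in an "
        "experience sampling study: each simulated participant receives daily "
        "prompts and answers with a probability that decays over the study "
        "period. Return only code, no explanations."
    )

    def generate_simulation(max_attempts=3):
        # Iterative in-context learning: feed the model's own errors back
        # into the conversation and ask for a corrected program.
        messages = [{"role": "user", "content": TASK}]
        for _ in range(max_attempts):
            reply = client.chat.completions.create(
                model="gpt-3.5-turbo-16k",  # model evaluated in the paper
                messages=messages,
            )
            code = reply.choices[0].message.content
            try:
                ast.parse(code)  # syntax check only; semantics still need review
                return code
            except SyntaxError as err:
                messages.append({"role": "assistant", "content": code})
                messages.append({
                    "role": "user",
                    "content": f"That program does not parse ({err}); "
                               "please return a fixed version.",
                })
        raise RuntimeError("no syntactically valid program within the attempt budget")

In the study itself, evaluation went beyond syntax to semantic assessment and practitioner interviews; the parse check above stands in for that richer evaluation only to keep the sketch short.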

Original language: English
Article number: 255
Number of pages: 19
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 8
Issue number: EICS
DOIs
Publication status: Published - 17 Jun 2024

Bibliographical note

Publisher Copyright:
© 2024 Owner/Author.

Keywords

  • Behavior Simulation
  • Experience Sampling Method
  • Large Language Model
  • Prompt Engineering
