LLM twins for human choice behaviors

Keywords: LLM twins, human choice behavior, marketing research, prompt engineering, retrieval-augmented generation, training data inspection, tracing
Project partners: Miguel Brendl, Ulf Bockenholt
CeDA collaborators: Rodrigo C. G. Pena, Konstantinos Ntemos
Repository: behavior-twin

Context

This project examines whether Large Language Models (LLMs) can be developed into useful digital twins of human choice behavior. In collaboration with Prof. Miguel Brendl (Marketing, Faculty of Business and Economics) and Prof. Ulf Bockenholt (Northwestern Kellogg School of Management), we investigate how closely model-generated choices can approximate observed human decisions in controlled experimental settings.

If robust LLM twins can be established, they could support marketing researchers in rapidly prototyping and refining study designs before conducting resource-intensive human trials.

Project objectives

  • Evaluate whether LLM-based agents can simulate human choice behavior with sufficient fidelity for pre-study experimentation.
  • Identify which modeling and prompting strategies improve alignment between simulated and observed human choices.
  • Establish practical workflows for using behavioral LLM twins as an experiment design aid in marketing research.
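One concrete way to quantify the alignment mentioned above is to compare per-option choice shares between human participants and their LLM twins. The sketch below is an illustrative assumption, not a method from this project: it computes the total variation distance between the two empirical choice distributions, where a value of 0 means identical choice shares and 1 means complete disagreement. The function name and the sample data are hypothetical.

```python
from collections import Counter

def choice_share_gap(human_choices, twin_choices):
    """Total variation distance between the empirical per-option
    choice-share distributions of humans and LLM twins.
    0.0 = identical shares, 1.0 = completely disjoint choices."""
    options = set(human_choices) | set(twin_choices)
    h, t = Counter(human_choices), Counter(twin_choices)
    nh, nt = len(human_choices), len(twin_choices)
    return 0.5 * sum(abs(h[o] / nh - t[o] / nt) for o in options)

# Hypothetical binary product-choice task: 8 human and 8 twin responses
humans = ["A", "A", "B", "A", "A", "B", "A", "A"]  # A chosen 6/8 times
twins  = ["A", "B", "A", "A", "B", "A", "B", "B"]  # A chosen 4/8 times
print(choice_share_gap(humans, twins))  # → 0.25
```

In practice one would aggregate such gaps across many experimental conditions and prompting strategies to see which combinations keep the twin's choices closest to observed human behavior.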

Expected outcomes

A recipe identifying the settings in which LLMs can serve as reliable pre-screening twins for human choice experiments, together with a characterization of the settings in which they cannot.