Keywords:
LLM twins, human choice behavior, marketing research, prompt engineering, retrieval-augmented generation, training data inspection, tracing
Project partners:
Miguel Brendl, Ulf Bockenholt
CeDA collaborators:
Rodrigo C. G. Pena, Konstantinos Ntemos
Repository:
behavior-twin
This project examines whether Large Language Models (LLMs) can be developed into useful digital twins of human choice behavior. In collaboration with Prof. Miguel Brendl (Marketing, Faculty of Business and Economics) and Prof. Ulf Bockenholt (Northwestern Kellogg School of Management), we investigate how closely model-generated choices can approximate observed human decisions in controlled experimental settings.
If robust LLM twins can be established, they could support marketing researchers in rapidly prototyping and refining study designs before conducting resource-intensive human trials.
The intended outcome is a recipe identifying settings in which LLMs can serve as reliable pre-screening twins for human choice experiments, and settings in which they cannot.
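As a minimal illustration of what "approximating observed human decisions" can mean quantitatively, the sketch below compares human and model choice shares for a single choice set using total variation distance. All numbers and names are hypothetical; this is not the project's actual evaluation pipeline, just one common way such agreement might be scored.

```python
from math import fsum

def total_variation(p, q):
    """Total variation distance between two discrete choice-share
    distributions over the same set of options (0 = identical,
    1 = maximally different)."""
    return 0.5 * fsum(abs(pi - qi) for pi, qi in zip(p, q))

# Hypothetical choice shares for a three-option choice set:
human_shares = [0.50, 0.30, 0.20]  # observed in a human trial (illustrative)
llm_shares   = [0.45, 0.35, 0.20]  # aggregated over repeated LLM samples (illustrative)

print(total_variation(human_shares, llm_shares))  # 0.05 -> close agreement
```

A small distance on a battery of choice sets would suggest the model is a usable pre-screening twin for that setting; systematically large distances would flag a setting where it is not.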