Monroe Hall 120
Open-response questions are designed to elicit a broad range of replies, unencumbered by the structure imposed by targeted questions with predefined responses. The question “Please rate on a scale from 0 to 5 the candidate’s ability to reach voters through social media” might receive a score of 5 from an individual, whereas that same individual’s answer to “What do you think of the candidate?” might provide a nuanced list of the candidate’s perceived characteristics. The individual’s choice of characteristics and topics to discuss carries information about what is important in their understanding of the object of interest, and open-response questions provide a means of eliciting it. Motivated by the need to assess shifts in how participants prioritize their attention in complex scenarios, we propose a framework for hypothesis testing with open-response questions. We discuss our method, which leverages recent advances in natural language processing, in the context of a simple randomized trial, and describe a proof-of-concept experiment we are running on mTurk. Time permitting, we will discuss extensions of the basic framework, including to observational settings (for instance, treating tweets as open responses to some exposure).
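As a toy illustration of the kind of test the framework envisions (not the speakers' actual method, which is not specified in the abstract), one can ask whether a randomized exposure shifts which topics respondents mention in their open responses. The sketch below, using only hypothetical names, compares the frequency with which a single keyword appears in treatment versus control responses and assesses the difference with a permutation test; a full NLP pipeline would replace the keyword count with richer text representations.

```python
import random
from collections import Counter


def term_freq(responses, term):
    # Fraction of responses whose whitespace tokens include the term.
    return sum(term in r.lower().split() for r in responses) / len(responses)


def permutation_test(treat, control, term, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in term frequency
    between treatment and control open responses (illustrative only)."""
    rng = random.Random(seed)
    observed = term_freq(treat, term) - term_freq(control, term)
    pooled = list(treat) + list(control)
    n_t = len(treat)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break any association with treatment
        diff = term_freq(pooled[:n_t], term) - term_freq(pooled[n_t:], term)
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm


# Hypothetical open responses to "What do you think of the candidate?"
treat = ["strong social media presence",
         "good at social media outreach",
         "media savvy and energetic"]
control = ["nice person",
           "experienced legislator",
           "cares about local issues"]

obs, p = permutation_test(treat, control, "media")
```

With only three responses per arm the test has little power (the smallest attainable p-value here is 0.1), but the same recipe scales directly to larger samples and to vector-valued summaries of each response.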