MEASUREMENT IN SOCIAL PSYCHOLOGY

Although best known for experimental methods, social psychology also has a strong tradition of measurement. This volume seeks to highlight this tradition by introducing readers to measurement strategies that help drive social psychological research and theory development. The book opens with an analysis of the measurement technique that dominates most of the social sciences, self-report. Chapter 1 presents a conceptual framework for interpreting the data generated from self-report, which it uses to provide practical advice on writing strong and structured self-report items. From there, attention is drawn to the many other innovative measurement and data-collection techniques that have helped expand the range of theories social psychologists test. Chapters 2 through 6 introduce techniques designed to measure the internal psychological states of individual respondents, with strategies that can stand alone or complement anything obtained via self-report. Included are chapters on implicit, elicitation, and diary approaches to collecting response data from participants, as well as neurological and psychobiological approaches to inferring underlying mechanisms. The remaining chapters introduce creative data-collection techniques, with particular attention given to the rich forms of data humans often leave behind. Also included are chapters on textual analysis, archival analysis, geocoding, and social media harvesting. The many methods covered in this book complement one another, such that the full volume provides researchers with a powerful toolset to help them better explore what is "social" about human behavior. This is fascinating reading for students and researchers in social psychology.

Hart Blanton is Professor of Communication at Texas A&M University. He conducts research in the areas of social influence, health communication, and research methodology. Jessica M. LaCroix is Research Assistant Professor at the Uniformed Services University of the Health Sciences and specializes in health psychology, research methodology, and military suicide prevention. Gregory D. Webster is Associate Professor of Social Psychology at the University of Florida, with graduate degrees from the College of William & Mary and the University of Colorado.

Frontiers of Social Psychology
Series Editors: Arie W. Kruglanski, University of Maryland at College Park; Joseph P. Forgas, University of New South Wales

Frontiers of Social Psychology is a series of domain-specific handbooks. Each volume provides readers with an overview of the most recent theoretical, methodological, and practical developments in a substantive area of social psychology, in greater depth than is possible in general social psychology handbooks. The editors and contributors are all internationally renowned scholars whose work is at the cutting edge of research. Scholarly, yet accessible, the volumes in the Frontiers series are an essential resource for senior undergraduates, postgraduates, researchers, and practitioners, and are suitable as texts in advanced courses in specific subareas of social psychology.

Published Titles
Intergroup Conflicts and their Resolution, Bar-Tal
Social Motivation, Dunning
Social Cognition, Strack & Förster
Social Psychology of Consumer Behavior, Wänke

For continually updated information about published and forthcoming titles in the Frontiers of Social Psychology series, please visit: https://www.routledge.com/psychology/series/FSP

MEASUREMENT IN SOCIAL PSYCHOLOGY
Edited by Hart Blanton, Jessica M. LaCroix, and Gregory D. Webster

First published 2019 by Routledge, 711 Third Avenue, New York, NY 10017, and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2019 Taylor & Francis

The right of Hart Blanton, Jessica M. LaCroix, and Gregory D. Webster to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested

ISBN: 978-1-138-91323-3 (hbk)
ISBN: 978-1-138-91324-0 (pbk)
ISBN: 978-0-429-45292-5 (ebk)

Typeset in Bembo by Apex CoVantage, LLC

CONTENTS

1 From Principles to Measurement: Theory-Based Tips on Writing Better Questions (Hart Blanton and James Jaccard) 1
2 Implicit Measures: Procedures, Use, and Interpretation (Bertram Gawronski and Adam Hahn) 29
3 Elicitation Research (William A. Fisher, Jeffrey D. Fisher, and Katrina Aberizk) 56
4 Psychobiological Measurement (Peggy M. Zoccola) 75
5 It's About Time: Event-Related Brain Potentials and the Temporal Parameters of Mental Events (Meredith P. Levsen, Hannah I. Volpert-Esmond, and Bruce D. Bartholow) 102
6 Using Daily Diary Methods to Inform and Enrich Social Psychological Research (Marcella H. Boynton and Ross E. O'Hara) 127
7 Textual Analysis (Cindy K. Chung and James W. Pennebaker) 153
8 Data to Die For: Archival Research (Brett W. Pelham) 174
9 Geocoding: Using Space to Enhance Social Psychological Research (Natasza Marrouch and Blair T. Johnson) 201
10 Social Media Harvesting (Man-pui Sally Chan, Alex Morales, Mohsen Farhadloo, Ryan Joseph Palmer, and Dolores Albarracín) 228
Index 265

1
FROM PRINCIPLES TO MEASUREMENT
Theory-Based Tips on Writing Better Questions

Hart Blanton and James Jaccard

Self-reports are the dominant assessment method in the social sciences, and a large part of their appeal is the ease with which questions can be generated and administered. In our view, however, this apparent ease obscures the care that is needed to produce questions that generate meaningful data. In this chapter, we introduce and review basic principles of measurement, which we then use as a foundation to offer specific advice ("tips") on how to write more effective questions.

Principles of Measurement

A Measurement Model

Suppose a researcher wanted to measure consumers' judgments of the quality of a product. Perceptions of product quality cannot be observed directly—perceived quality is a latent, theoretical psychological construct, assumed to be continuous in character, such that it can only be inferred indirectly through observable actions. One such action can be the ratings a consumer makes on a rating scale. Suppose consumers are asked to rate the perceived quality of a product on a scale that ranges from 0 ("very low quality") to 6 ("very high quality"). By seeking to quantify product perceptions in this manner—and whether the researcher has realized it or not—a formal measurement model has been embraced. This model is depicted in Figure 1.1.

The rectangle labeled "Q" in Figure 1.1 represents the rating on the 0-to-6 scale. This rating does not, by fiat, reveal the "true" quality perceptions of the respondent, which are conceptualized as an unobservable latent construct and represented in Figure 1.1 by the circle with the word "quality" in it.
FIGURE 1.1 Measurement Model

The researcher assumes that the observed "Q" is influenced by true, latent quality perceptions, but that the correspondence between latent and observed constructs is less than perfect. Ratings on Q are thus a function of both the consumers' true evaluations and measurement error (represented as "ε" in Figure 1.1). This can be expressed algebraically in the form of a linear model:

Q = α + λ·Quality + ε    [1]

where α is an intercept, λ is a regression coefficient (also frequently called a loading), and ε is measurement error. When the relationship is linear, as assumed in Equation 1, then Q is an interval-level measure of the latent construct of perceived quality. If the relationship is non-linear but monotonic, Q is an ordinal measure of the latent construct. Articulation of this formal model focuses attention on one of the primary challenges facing researchers who wish to create self-report questions—the need to reduce the influence of error on observed ratings. We next consider two sources of error, random and systematic, as well as their implications for characterizing the reliability and validity of self-report items.

Random Error and Reliability

Random error represents random influences, known or unknown, that arbitrarily bias numeric self-reports upward or downward. Often referred to as "noise," random error can be generated by such factors as momentary distractions, fluke misunderstandings, transient moods, and so on. This form of error is commonplace, but its relative magnitude can vary considerably from one question to the next. As such, it is meaningful to think about the degree to which a given question or set of questions is susceptible to random error. This represents the concept of reliability. The reliability of observed scores conveys the extent to which they are free of random error.
Statistically, a reliability estimate communicates the percentage of variance in the observed scores that is due to systematic influences, as opposed to unknown, random influences. Thus, if the reliability of a set of scores is 0.80, then 80% of their variation is systematic and 20% is random. The presence of random error in measures can bias statistical parameter estimates, potentially attenuating correlations and causing researchers to think they have sufficiently controlled for constructs in an analysis when they have not.

Systematic Error and Validity

Another form of measurement error is called systematic error. This source of error often introduces variance into observed self-report items that is non-random; i.e., variance that is a function of one or more psychological constructs that are something other than the construct of interest. Consider the model in Figure 1.2. Here a researcher hopes to measure both drug use and grade-point average (GPA) via self-report. Each of these measures is influenced by the true latent construct that is of interest (as in Figure 1.1), but another latent construct, social desirability, is also exerting influence on the two measures.

FIGURE 1.2 Example of Systematic Error
