Evaluating Research in Academic Journals

Evaluating Research in Academic Journals is a guide for students who are learning how to evaluate reports of empirical research published in academic journals. It breaks down the process of evaluating a journal article into easy-to-understand steps and emphasizes the practical aspects of evaluating research – not just how to apply a list of technical terms from textbooks. The book avoids oversimplification in the evaluation process by describing the nuances that may make an article publishable even when it has serious methodological flaws. Students learn when and why certain types of flaws may be tolerated, and why evaluation should not be performed mechanically.

Each chapter is organized around evaluation questions. For each question, there is a concise explanation of how to apply it in the evaluation of research reports. Numerous examples from journals in the social and behavioral sciences illustrate the application of the evaluation questions and demonstrate actual examples of strong and weak features of published reports. Common-sense models for evaluation, combined with a lack of jargon, make it possible for students to start evaluating research articles in the first week of class.

New to this edition:
- New chapters on:
  - Evaluating mixed methods research
  - Evaluating systematic reviews and meta-analyses
  - Program evaluation research
- Updated chapters and appendices that provide more comprehensive information and recent examples
- All-new online resources: test bank questions and PowerPoint slides for instructors, and self-test chapter quizzes, further readings, and additional journal examples for students

Maria Tcherni-Buzzeo is an Associate Professor of Criminal Justice at the University of New Haven. She received her PhD in Criminal Justice from the University at Albany (SUNY), and her research has been published in the Journal of Quantitative Criminology, Justice Quarterly, and Deviant Behavior.
Evaluating Research in Academic Journals
A Practical Guide to Realistic Evaluation
Seventh Edition

Fred Pyrczak and Maria Tcherni-Buzzeo

Seventh edition published 2019
by Routledge
711 Third Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 Taylor & Francis

The right of Fred Pyrczak and Maria Tcherni-Buzzeo to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by Pyrczak Publishing 1999
Sixth edition published by Routledge 2014

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-0-8153-6568-6 (hbk)
ISBN: 978-0-8153-6566-2 (pbk)
ISBN: 978-1-351-26096-1 (ebk)

Typeset in Times New Roman and Trade Gothic by Florence Production Ltd, Stoodleigh, Devon, UK

Visit the companion website: www.routledge.com/cw/tcherni-buzzeo

Contents

Introduction to the Seventh Edition
1. Background for Evaluating Research Reports
2. Evaluating Titles
3. Evaluating Abstracts
4. Evaluating Introductions and Literature Reviews
5. A Closer Look at Evaluating Literature Reviews
6. Evaluating Samples when Researchers Generalize
7. Evaluating Samples when Researchers Do Not Generalize
8. Evaluating Measures
9. Evaluating Experimental Procedures
10. Evaluating Analysis and Results Sections: Quantitative Research
11. Evaluating Analysis and Results Sections: Qualitative Research
12. Evaluating Analysis and Results Sections: Mixed Methods Research (Anne Li Kringen)
13. Evaluating Discussion Sections
14. Evaluating Systematic Reviews and Meta-Analyses: Towards Evidence-Based Practice
15. Putting It All Together
Concluding Comment
Appendix A: Quantitative, Qualitative, and Mixed Methods Research: An Overview
Appendix B: A Special Case of Program or Policy Evaluation
Appendix C: The Limitations of Significance Testing
Appendix D: Checklist of Evaluation Questions
Index

Introduction to the Seventh Edition

When students in the social and behavioral sciences take advanced courses in their major field of study, they are often required to read and evaluate original research reports published as articles in academic journals. This book is designed as a guide for students who are first learning how to engage in this process.

Major Assumptions

First, it is assumed that the students using this book have limited knowledge of research methods, even though they may have taken a course in introductory research methods (or may be using this book while taking such a course). Because of this assumption, technical terms and jargon such as true experiment are defined when they are first used in this book.

Second, it is assumed that students have only a limited grasp of elementary statistics. Thus, the chapters on evaluating statistical reporting in research reports are confined to criteria that such students can easily comprehend.

Finally, and perhaps most important, it is assumed that students with limited backgrounds in research methods and statistics can produce adequate evaluations of research reports – evaluations that get to the heart of important issues and allow students to draw sound conclusions from published research.

This Book Is Not Written for . . .
This book is not written for journal editors or members of their editorial review boards. Such professionals usually have had first-hand experience in conducting research and have taken advanced courses in research methods and statistics. Published evaluation criteria for use by these professionals are often terse, full of jargon, and composed of many elements that cannot be fully comprehended without advanced training and experience. This book is aimed at a completely different audience: students who are just beginning to learn how to evaluate original reports of research published in journals.

Applying the Evaluation Questions in This Book

Chapters 2 through 15 are organized around evaluation questions that may be answered with a simple “yes” or “no,” where a “yes” indicates that students judge a characteristic to be satisfactory. However, for evaluation questions that deal with complex issues, students may also want to rate each one using a scale from 1 to 5, where 5 is the highest rating. In addition, N/A (not applicable) may be used when students believe a characteristic does not apply, and I/I (insufficient information) may be used if the research report does not contain sufficient information for an informed judgment to be made.

Evaluating Quantitative and Qualitative Research

Quantitative and qualitative research differ in purpose as well as methodology. Students who are not familiar with the distinctions between the two approaches are advised to read Appendix A, which presents a very brief overview of the differences and also explains what mixed methods research is. Students are also encouraged to check the online resources for Chapter 11, which include an overview of important issues in the evaluation of qualitative research.

Note from the Authors

I have taken over the updating of this text for its current, 7th edition, due to Fred Pyrczak’s untimely departure from this earth in 2014.
His writing in this book is amazing: structured, clear, and concise. It is no surprise that the text has been highly regarded by multiple generations of students who used it in their studies. In fact, many students in my Methods classes have commented on how much they like this text and how well written and helpful it is. I have truly enjoyed updating this edition for the new generation of students, and tried my best to retain all the strengths of Fred’s original writing.

I am also grateful to my colleague Anne Li Kringen, who is an expert on mixed methods research, for contributing a new Chapter 12 (on evaluating mixed methods research) to the current edition. Also new in the current edition are Chapter 14 (on evaluating meta-analyses and systematic reviews) and Appendix B (on evaluating programs and policies). The remainder of the chapters and appendices have been updated throughout with new information and examples.

I hope this text will serve you well in your adventures of reading research articles!

Maria Tcherni-Buzzeo
New Haven, 2018

My best wishes are with you as you master the art and science of evaluating research. With the aid of this book, you should find the process both undaunting and fascinating as you seek defensible conclusions regarding research on topics that interest you.

Fred Pyrczak
Los Angeles, 2014

CHAPTER 1
Background for Evaluating Research Reports

The vast majority of research reports are initially published in academic journals. In these reports, or empirical journal articles,¹ researchers describe how they have identified a research problem, made relevant observations or measurements to gather data, and analyzed the data they collected. The articles usually conclude with a discussion of the results in view of the study limitations, as well as the implications of these results. This chapter provides an overview of some general characteristics of such research.
Subsequent chapters present specific questions that should be applied in the evaluation of empirical research articles.

Guideline 1: Researchers Often Examine Narrowly Defined Problems

Comment: While researchers usually are interested in broad problem areas, they very often examine only narrow aspects of the problems because of limited resources and the desire to keep the research manageable by limiting its focus. Furthermore, they often examine problems in such a way that the results can be easily reduced to statistics, further limiting the breadth of their research.²

Example 1.1.1 briefly describes a study on two correlates of prosocial behavior (i.e., helping behavior). To make the study of this issue manageable, the researchers greatly limited its scope. Specifically, they examined only one very narrow type of prosocial behavior (making donations to homeless men who were begging in public).

¹ Note that empirical research articles are different from other types of articles published in peer-reviewed journals in that they specifically include an original analysis of empirical data (data could be qualitative or quantitative, which is explained in more detail in Appendix A). Other types of articles include book reviews or overview articles that summarize the state of knowledge and empirical research on a specific topic or propose an agenda for future research. Such articles do not include original data analyses and thus are not suitable for being evaluated using the criteria in this text.

² Qualitative researchers (see Appendix A) generally take a broader view when defining a problem to be explored in research and are not constrained by the need to reduce the results to numbers and statistics. More information about examining the validity of qualitative research can be found in the online resources for Chapter 11 of this text.