DEMYSTIFYING SCHOLARLY METRICS
A Practical Guide

Marc W. Vinyard and Jaimie Beth Colvin

Copyright © 2022 by Marc W. Vinyard and Jaimie Beth Colvin

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except for the inclusion of brief quotations in a review, without prior permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Names: Vinyard, Marc W., author. | Colvin, Jaimie Beth, author.
Title: Demystifying scholarly metrics : a practical guide / Marc W. Vinyard and Jaimie Beth Colvin.
Description: Santa Barbara, California : Libraries Unlimited, [2022] | Includes bibliographical references and index.
Identifiers: LCCN 2021028546 (print) | LCCN 2021028547 (ebook) | ISBN 9781440875939 (paperback) | ISBN 9781440875946 (ebook)
Subjects: LCSH: Bibliometrics—Handbooks, manuals, etc. | Scholarly publishing—Evaluation. | Research—Evaluation—Statistical methods.
Classification: LCC Z669.8 .V56 2022 (print) | LCC Z669.8 (ebook) | DDC 020.72/7—dc23/eng/20211028
LC record available at https://lccn.loc.gov/2021028546
LC ebook record available at https://lccn.loc.gov/2021028547

ISBN: 978-1-4408-7593-9 (paperback) | 978-1-4408-7594-6 (ebook)

Libraries Unlimited, an imprint of ABC-CLIO, LLC
147 Castilian Drive, Santa Barbara, California 93117
www.abc-clio.com

Contents

Chapter 1. Why Are You Reading This Book?
Chapter 2. You Know More Than You Think
Chapter 3. H-Indexes, Altmetrics, and Impact Factors—Oh My!
Chapter 4. Metrics for All Kinds of Scholars
Chapter 5. Open Access: The Good, the Bad, and the Ugly
Chapter 6. Finding a Journal That’s the Right Fit
Chapter 7. Publish, Don’t Perish! Applying What You’ve Learned
Chapter 8. Developing or Finding Metrics Services at Your Library
Chapter 9. Avoid Drowning in a Sea of Scholarly Metrics: How to Stay Current
Index

1
Why Are You Reading This Book?

Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.”
—Marilyn Strathern

This chapter asks all the stakeholders with an interest in scholarly metrics (librarians, administrators and department chairs, and researchers) to stop and consider why they are interested in this topic and what they hope to gain from understanding journal rankings and other scholarly metrics. These are complex but imperfect measurements. When used appropriately, metrics can help users make informed decisions or describe work accomplishments. When metrics are used to answer questions they were never designed to answer, they become meaningless or, worse, an onerous burden.

LIBRARIANS

Don’t push your agenda. Provide a service; don’t create problems.

When we initially investigated how libraries might provide research evaluation services, we noticed two groups emerging from the literature. One group believed libraries should assist with bibliometric services because it was a natural expansion of library services, while the other group focused on the strategic benefits of providing such services.
They argued that if libraries provide scholarly metrics services, they would demonstrate their worth and reestablish their value to administrators who increasingly rely on metrics. However, alongside the voices warning that libraries risked becoming irrelevant if they didn’t evolve to incorporate bibliometric analysis services was a second group warning that this strategy for staying relevant might backfire if faculty came to associate librarians with evaluation reports and failed promotions. Those expressing wariness didn’t oppose libraries offering research evaluation services; instead, the message was to proceed with caution and make sure that faculty would welcome these services.

Since our initial investigation, the literature has shifted from “Should libraries provide scholarly metrics services, and if so why?” to “Which services do we provide and how?” The Association of College and Research Libraries (ACRL) updated its information literacy competency standards to a framework model that includes “Scholarship as Conversation” as one of the frames, thus recognizing the cycle of research that goes beyond investigation (research) and creation (writing) to sharing (publishing) and communicating (citing, responding, etc.) (Brantley et al., 2017, p. 140). The scholarly communication framework illustrates the ongoing cycle of research, and librarians have recognized needs at stages of this cycle beyond those they previously focused on, specifically the stages of publishing and disseminating research, which have grown increasingly complex (Si et al., 2019; Ye, 2019). The need to navigate the expanding world of research metrics has created an opportunity for librarians to expand their services to include tracking impact and interpreting metrics (Brantley et al., 2017; Howie & Kara, 2020; Powell & Elder, 2019). The need is great enough that many libraries are hiring librarians specifically to help faculty navigate the realm of measuring scholarly output (Powell & Elder, 2019). As libraries increase their involvement in research evaluation and impact analysis, they can either chase the hottest trends or create faculty-centered services (Brantley et al., 2017).

As more libraries move to incorporate scholarly metrics services, past cautions may seem irrelevant. Yet we believe vigilance is still called for as libraries expand research support to include research evaluation and metric analysis services, because these services are closely related to professional development and promotions. Librarians must assess whether the programs meet needs or create problems. For example, if librarians attempt to push mandates for faculty to publish in gold open access (OA) journals that either limit their publication options or require high article processing charges (APCs) that faculty can’t afford, they’ve created another hurdle to publishing instead of making the process easier. But if faculty are interested in learning more about OA or have already decided to prioritize OA journals, then a service that helps them publish in OA journals removes hurdles (see chapter 8 for tips on identifying faculty needs). Meet a need; don’t create a problem.

ADMINISTRATORS AND DEPARTMENTAL CHAIRS

Don’t misapply metrics and create a hostile research environment.

Are you hoping a move toward emphasizing bibliometrics will make evaluating individuals, groups, and the whole institution easier?
Are you looking for the metrics that will help you evaluate your institution’s scholarly strength in comparison to peer institutions? Are you looking to standardize scholarly metrics to help you compare departments? Are you looking to use metrics to identify research with the most impact potential so your institution allocates funding efficiently?

Consider the research environment your decisions will create. Pay close attention to the initial purpose of the metric and what it’s intended to measure before you start using it to measure something else entirely. Understand metrics within their context to avoid unfair comparisons. By ignoring the limitations of metrics, you might pick academic winners and losers by valuing subjects that are easier to measure. Remember that college applications are a compilation of SAT scores, essays, references, and supplemental information in an attempt to create a complete picture. Additionally, there is a reason job applicants submit a cover letter, resume, and application and run the gauntlet of interviews: employers don’t see the full picture from any one metric. Likewise, author-level metrics and research output reports can’t tell the whole story. We learned that some larger research universities place a high emphasis on improving their institution’s ranking through scholarly output. These universities make appointment decisions by evaluating the talent of candidates through reports that compare candidates’ papers with those of scholars in the same discipline; these analysis reports are forwarded to the human resources office (Ye, 2019). Metrics are helpful, but they are a piece of the puzzle, not the whole picture.

Avoid policies and requirements that shift incentives in research behavior. If metrics are used to promote or elevate areas of study, faculty may lose their freedom to organically study topics that interest them and are important to their field in favor of producing work for the assessment environment (MacColl, 2010). A move to emphasize metrics could create an environment that unfairly punishes fields that produce monographs. Policies implemented to measure scholarly impact may end up stifling scholarly communication if publishing in a journal with a high impact factor is valued over publishing in a journal that will reach appropriate audiences (MacColl, 2010). Lastly, how you prioritize metrics could result in unreasonable output expectations that pressure researchers to publish more and focus efforts on quantity, not quality, in order to meet unattainable goals. Without carefully considering how your decision may affect the research environment, the requirements (initially established to identify and reward scholarly communication) can become tools in a system that monitors, assesses, and judges researchers.