MEDIA PSYCHOLOGY, 2, 219–244. Copyright © 2000, Lawrence Erlbaum Associates, Inc.

Examining Information Processing on the World Wide Web Using Think Aloud Protocols

William P. Eveland, Jr.
School of Journalism and Communication, The Ohio State University

Sharon Dunwoody
School of Journalism and Mass Communication, University of Wisconsin–Madison

Some theorists argue that the node-link design of the Web mimics human information storage and that Web use encourages individuals to process information efficiently and effectively, potentially increasing meaningful learning. However, critics claim that Web navigation increases cognitive load and often produces disorientation. This reduces the processing devoted to meaningful learning, and thus the Web may potentially inhibit learning. In an examination of information processing on the Web using a quantitative analysis of think aloud protocols, we found that users spend a substantial proportion of their cognitive effort orienting to the content and structure of the Web, and this effort comes at the expense of elaborative and evaluative processing. Additional findings suggest that, at least during a single relatively short session, time spent in a given site does not reduce the processing devoted to orientation. Finally, this paper offers a theoretically informed strategy for analyzing information processing activities that may be of use to other scholars.

Requests for reprints should be sent to William P. Eveland, Jr., School of Journalism and Communication, The Ohio State University, 154 North Oval Mall, Room 3016, Columbus, OH 43210. E-mail: [email protected]

The purpose of this article is to delineate how individuals process information presented to them via the World Wide Web (or “Web”). Some theorists have suggested that the design of hypermedia systems, such as the Web, can facilitate useful information processing activities that lead to learning.
Others have argued that some factors, such as disorientation, can hinder effective information processing of Web content. An important area of inquiry, then, is to better understand the information processing that takes place when individuals browse the Web. Unfortunately, few studies have attempted to empirically determine the prevalence of different types of information processing using observational methods. Instead, most studies either experimentally manipulate information processing to determine its effects (e.g., Craik & Tulving, 1975; Hamilton, 1997; Johnsey, Morrison, & Ross, 1992; Mayer, 1980; Pressley, McDaniel, Turnure, Wood, & Ahmad, 1987) or attempt to measure information processing via self-reports (e.g., Eveland, 1997a, 1997b; Kardash & Amlund, 1991; Perse, 1990; Salomon, 1981, 1983; Schmeck, Ribich, & Ramanaiah, 1977; Weinstein, Zimmermann, & Palmer, 1988). Although these are certainly useful approaches, they do leave out direct assessment of naturally occurring variation in information processing over time or across content.

This article presents an exploratory and descriptive study of information processing of Web content and structure. The study uses a think aloud methodology to provide relatively direct observation of patterns of information processing compared to experimental manipulations or self-reports. We conclude from this study that a majority of processing while using the Web is focused on maintaining orientation to the structure and content of the site, thus reducing other forms of information processing that have been demonstrated to produce meaningful learning.

INFORMATION PROCESSING IN THE CONTEXT OF HYPERMEDIA AND THE WORLD WIDE WEB

More than 50 years ago, Vannevar Bush (1945) proposed the creation of a machine called a memex that would allow instantaneous access to multiple sources of information through associational links.
He believed that this machine would increase learning because it would function in the same way that individuals’ brains worked—as an associative network. In Bush’s vision, the information would be stored on microfilm and presented on multiple viewers mounted into a large desk. The technology underlying this idea was later updated, and the resulting product was labeled hypertext (now hypermedia) by Ted Nelson (Bevilacqua, 1989; Heller, 1990; Nelson, 1993; Tsai, 1988–1989).

The defining feature of hypermedia is the use of nodes (packets of information, typically in the form of a “page”) connected by links that may be easily traversed at the whim of the user (Horney, 1991; Shirk, 1992). As such, hypermedia is distinguished from other media, such as television and radio, by a high level of user control over the pace, order, and content. This control allows use of this medium to be nonlinear or nonsequential (Duchastel, 1990; Horney, 1993; Shin, Schallert, & Savenye, 1994), although individuals may still choose to use the medium in a linear or sequential manner (Eveland & Dunwoody, 1998, in press).

Nearly five decades after Bush’s (1945) classic article, the idea of the memex—in the form of hypermedia—took the United States by storm in the guise of the Web (The Internet, 1997). The Web is, technologically, a massive hypermedia system (Astleitner & Leutner, 1995) created by thousands of different authors across the globe. Recent statistics on the popularity of the Internet—of which the Web is a major component—reveal the massive growth in this medium over the past few years.
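The node-link design described above can be made concrete with a small sketch. This is a hypothetical illustration of the general idea only; the `Node` class, its methods, and the page titles are our inventions, not part of any system the article discusses.

```python
class Node:
    """A packet of information (a hypermedia 'page') with outgoing links."""

    def __init__(self, title, content):
        self.title = title
        self.content = content
        self.links = []  # outgoing edges to related nodes

    def link_to(self, other):
        """Add a one-way link; hypermedia links need not be symmetric."""
        self.links.append(other)


# A toy three-node fragment. Because the reader may traverse links in
# any order, use of the medium can be nonlinear or nonsequential.
home = Node("Home", "Overview of the site")
story = Node("Story", "The science behind a news event")
about = Node("About", "Who produces the site")

home.link_to(story)
home.link_to(about)
story.link_to(home)

print([n.title for n in home.links])
```

The list of outgoing links is what gives the user, rather than the author, control over pace and order of presentation.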
A poll conducted in the fall of 1998 (Pew Research Center, undated) found that more than 40% of American adults used the Internet, with nearly half of those beginning during the prior year. A series of studies conducted by Bimber (1999) found that Internet access among American adults increased from 26% in October 1996 to 46% in February 1998 and 55% in March 1999. Although exact figures depend in part on how Internet use is defined, it is clear that a substantial proportion of the U.S. population is making use of the Internet today, and that use has been increasing rapidly over the past several years.

PROMISES AND PROBLEMS OF USING HYPERMEDIA FOR LEARNING

Theorists interested in the uses and effects of hypermedia frequently argue that the structure of hypermedia and the process of its use mimics the associative structure of human memory and the function of human information processing (e.g., Bieber, Vitali, Ashman, Balasubramanian, & Oinas-Kukkonen, 1997; Churcher, 1989; Jonassen & Wang, 1993; Kozma, 1987; Lucarella & Zanzi, 1993; Marchionini, 1988; Shin et al., 1994; Shirk, 1992). For instance, Jonassen (1988) noted that “because hypertext is a node-link system based upon semantic structures, it can map fairly directly the structure of knowledge it is representing” (p. 14). Tergan summarizes this perspective by noting the following: Some researchers have argued that structural and functional features of hypertext/hypermedia technology match very well with cognitive network theories of the human mind, constructivist principles of learning, and multiple mental modes for representation of knowledge. The suggested match has nourished expectations that hypertext-based technologies may overcome deficiencies inherent in the traditional reading comprehension and information processing approach of teaching and learning and may even revolutionize learning. (Tergan, 1997, pp.
257–258)

Thus, advocates hypothesize that hypermedia systems can serve as superior learning tools compared to other, more constrained and linear media that do not represent a knowledge domain so precisely. Churcher (1989) argued that “where hypertext is highly structured and indeed is the structure of the domain of knowledge and that structure/system is to eventually become the users’ conceptual model it strongly suggests hypertext as a more effective learning environment” (p. 245). Thus, the argument made by many hypermedia advocates is that, because hypermedia can be designed to emulate the appropriate (based on domain experts) links among concepts in a particular knowledge domain, learners will more easily be able to build their own mental models from the model used in the hypermedia system (e.g., Churcher, 1989; Jonassen, 1988; Jonassen & Wang, 1993). In effect, in most theoretical approaches the user is assumed to employ the hypermedia system to shape his or her own mental representations of the domain of knowledge—both in terms of content and structure—thereby emulating the knowledge structure of the domain expert whose input influenced the design of the hypermedia system itself.

However, some argue that there are important differences between the structure and use of hypermedia systems and those of human memory. Nelson and Palumbo (1992) argued that at present, most hypermedia systems support linkages indicating only that one unit of information is somehow related to another unit of information, without specifying the nature of this relationship and a rationale for its existence. . . . In contrast, human memory supports a much stronger linking mechanism that both establishes a relationship and conveys information about the associational nature of the link. (p.
290)

In addition, Tergan (1997) criticized the assumption that hypermedia use is analogous to human information processing and thus raises questions about the superiority of hypermedia as a learning tool. Despite these and other criticisms of the conceptual ties between human memory and hypermedia, many hypermedia researchers who take a stance on the issue seem to agree that the similarities between the two are many and theoretically important. If accurate, this would suggest that hypermedia may facilitate information processing, particularly if the design of the hypermedia system is structured in a meaningful way.

Although many hypermedia theorists focus on the benefits of using hypermedia for information processing and learning, there are those who see another, darker side of hypermedia use. One of the most common concerns about hypermedia use expressed by these individuals is its propensity to cause disorientation (McDonald & Stevenson, 1996). Disorientation is likely to reduce learning and, potentially, even lead users to abandon use of the system altogether. From this perspective, the relevant metaphor for hypermedia use is not human processing of information but navigation through unfamiliar physical space (Kim & Hirtle, 1995). Based on this metaphor and formal observations (e.g., Dias & Sousa, 1997), as well as informal reports of users, this perspective points out that people often get confused and even lost in virtual spaces with which they are unfamiliar. This is particularly true when these spaces are poorly designed. To avoid getting lost, people must engage in orienting techniques, such as identifying landmarks and exploring the relationship of one location to another.
McDonald and Stevenson (1998) argued that nonlinear hypermedia systems produce “a high cognitive burden on users such that they must simultaneously focus on their information retrieval goals and on orienting themselves within the hypertextual space” (p. 24). Under the assumption of a limited cognitive capacity (e.g., Kahneman, 1973), the effort spent orienting oneself to the information space—sometimes called cognitive overhead (Conklin, 1987; Thüring, Hannemann, & Haake, 1995)—consumes some or all of the cognitive effort that might otherwise be invested in more meaningful processing of the content. Thus, the focus of information processing on efforts to orient oneself suggests that, even if the user never actually becomes disoriented, the cognitive overhead produced by hypermedia may potentially inhibit, instead of encourage, other information processing activities that lead to fruitful learning.

One means of addressing this debate would be to compare the relative amount of learning from hypermedia systems with more traditional media, such as print. A number of researchers have followed this route, with findings that are potentially important but still somewhat ambiguous (Chen & Rada, 1996; Dillon & Gabbard, 1998; Eveland & Dunwoody, 2000). This study takes an alternate approach by examining not the product of information processing—learning—but the processing of information itself. This allows us to elaborate on the findings of learning experiments by describing the processing of information that may have produced the results of those learning experiments. Thus, this study will examine the relative proportions of information processing devoted to orienting to the information space compared to other forms that may be more conducive to meaningful learning.
INFORMATION PROCESSING ON THE WEB

We focus on what we believe to be four basic, distinct, and meaningful categories of information processing that would occur after attention to content had already been established: maintenance, orientation, elaboration, and evaluation. These are all forms of processing information that one would expect to find in most forms of media use and in everyday activity. Orientation is likely to be particularly prevalent for those using hypermedia systems such as the Web compared to traditional media, although this study was not designed to test this expectation.

Maintenance

Simply put, maintenance is the repetition of information in short-term memory. The quintessential example of maintenance is the mental rehearsal of a phone number or name over and over in an attempt to remember it. An important characteristic of maintenance is that it does not include any attempts to connect the information to existing knowledge or to interpret it in light of other information. Estes (1988) suggested that “maintenance of items in active working memory simply by what is termed primary or maintenance rehearsal . . . serves to increase the probability of later recognition but within wide limits has no detectable effect on later recall” (p. 356). Others have drawn similar conclusions about the relatively weak effect of simple maintenance on recall and learning (e.g., Craik & Tulving, 1975; Haberlandt, 1994). Thus, for purposes of this study maintenance is not considered an effective form of information processing for learning, although it should be noted that it may have some limited positive effects.

Orientation

Orientation is of particular concern for those interested in the use of hypermedia systems. Kim and Hirtle (1995, p.
241) argued that while browsing a hypertext database, the user must carry out multiple tasks concurrently. These tasks can be classified into three categories: (1) navigational tasks: planning and executing routes through the network; (2) informational tasks: reading and understanding contents presented in the nodes and their relationships, for summary and analysis; and (3) task management: coordinating information and navigational tasks (e.g., keeping track of digressions to incidental topics). Performance of these tasks exacts a high cognitive load upon the user.

It is the first and third of these cognitive activities that we consider orientation in this study. Orientation, although potentially useful for learning the overall structure of information (and thus valuable only if the information is structured in a meaningful manner), also robs precious cognitive resources from other information-processing activities that may be more valuable for learning. Hill and Hannafin (1997) claimed that “significant disorientation may hinder the user’s ability to reference relevant prior subject knowledge as well as metacognitive knowledge” (p. 58). If true, this would limit the amount of meaningful learning that takes place. Indeed, they noted that “it may be critical to reduce perceived (or real) system discomfort and disorientation prior to advancing open learning applications” (Hill & Hannafin, 1997, p. 61).

Elaboration

Perse (1990) stated that elaboration of media content “relates the incoming information to existing knowledge and images and attaches connotative and associative meanings” (p. 19). In effect, elaboration is the process through which connections are made between new and existing bits of information in memory or between two or more existing bits of information (Hamilton, 1997). Elaboration serves to connect new information into existing schema as well as to create greater interconnectedness within schema.
Both of these processes are integral to learning, and are consistent with the purported benefits of hypermedia for learning. Experimental research in cognitive and educational psychology has consistently upheld the connection between elaboration and greater learning from stimulus materials (e.g., Hamilton, 1989; Mayer, 1980; Pressley et al., 1987; Woloshyn, Paivio, & Pressley, 1994; Woloshyn, Willoughby, Wood, & Pressley, 1990). In their reviews of the literature, Estes (1988), Greene (1992), and Haberlandt (1994) concluded that recall is substantially greater when participants engage in elaborative rehearsal than when they engage in simple maintenance rehearsal. In addition, there is ample evidence for a strong relationship between survey measures of elaboration/deep processing and knowledge of specific topics or academic achievement (e.g., Eveland, 1997a, 1997b; Kardash & Amlund, 1991; Perse, 1990; Schmeck, 1980; Schmeck & Grove, 1979; Schmeck & Phillips, 1982; Schmeck et al., 1977; Watkins & Hattie, 1981a, 1981b).

Evaluation

The final type of information processing we will consider is evaluation—assessing the value or worth of a given object or piece of information. On the Web, even more so than in traditional informational media, assessments of the credibility of the source and the accuracy of individual bits of information are an important skill. At any moment the specific source of information, such as the sponsor of the Web site, may change, and each of these changes requires a new assessment of credibility. Some have suggested that evaluation is merely an extension or a subset of elaboration (e.g., Gould, Trevithick, & Dixon, 1991), in part because nearly all evaluations require making connections to existing information like standards or exemplars.
However, we argue that evaluation adds an affective judgment to any elaboration—that is, good or bad, true or false—that is not an essential feature of elaboration more generally. In effect, then, evaluations are elaborations that include an affective tag and should therefore contribute to learning.

METHODS

Think Aloud Interviewing

The think aloud method has been most prominently advocated by Ericsson and Simon (1993). This method requires participants to engage in some task and express the thoughts going through their minds as they do so. It is a nondirective technique, such that the only probe used after initial instructions is when participants stop verbalizing for some time, at which point they are simply reminded to think aloud. Given the large quantity of data obtained from each individual, think aloud interviews are normally conducted with small samples of between 10 and 30 participants (see Calvi, 1997; Carmel, Crawford, & Chen, 1992; Crampton, 1992; Darken & Sibert, 1996; Hill & Hannafin, 1997). The participant pools for think aloud interviews are typically students and are rarely drawn from the general population. The products of think aloud interviews are often coded quantitatively (Carmel et al., 1992), as we do in this study, although some researchers analyze them qualitatively instead (e.g., Hill & Hannafin, 1997).

The purpose of the think aloud method is to make observable at least some proportion of the information processing that takes place during a given task. Researchers assume that the source of the think aloud output is information currently in short-term memory. By quantitatively coding the think aloud protocols, researchers should be able to develop a better understanding of
cognitive processes. Like most other nontraditional methods, the use of think aloud protocols has gone through a stage of attack by critics and defense by proponents (e.g., Ericsson & Simon, 1993; Kellogg, 1982; Nisbett & Wilson, 1977; Russo, Johnson, & Stephens, 1989; Smith & Miller, 1978; Turner, 1988; Wright & Rip, 1981). Responses to the critics generally have been persuasive, and the use of think aloud protocols is accepted practice in fields such as educational psychology, geography, computer science, and engineering (e.g., Calvi, 1997; Carmel et al., 1992; Crampton, 1992; Darken & Sibert, 1996; Hill & Hannafin, 1997).

The Why Files

The Why Files Web site (http://whyfiles.news.wisc.edu), created by the National Institute for Science Education and initially funded by the National Science Foundation, was designed to convey the “science behind the news.” This site has also served as a test-bed for research on the communication of scientific information to the general public. Our think aloud interviews, although concerned with the processing of information on the Web generally, were also designed to help us evaluate the processing of scientific information in The Why Files in particular. Therefore, we began all think aloud participants on the home page of this site. The implications of this decision are described in later sections.
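Because the protocols are coded quantitatively, the analysis ultimately reduces to tallying coded thought units and comparing the proportion of effort in each processing category. The sketch below is a hypothetical illustration of that tally; the coded units are invented and do not come from the article's data.

```python
from collections import Counter

# The four processing categories defined in this study.
CATEGORIES = ("maintenance", "orientation", "elaboration", "evaluation")

# Invented example: category codes assigned to eight thought units
# from one hypothetical participant's transcript.
coded_units = [
    "orientation", "orientation", "maintenance", "elaboration",
    "orientation", "evaluation", "orientation", "elaboration",
]

# Tally the codes and convert counts to proportions of all units.
counts = Counter(coded_units)
proportions = {c: counts[c] / len(coded_units) for c in CATEGORIES}
print(proportions)
```

In this invented sample, orientation accounts for half of the coded units, the kind of pattern the article reports finding in its actual protocols.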
Participants

In the spring and early summer of 1997 a sample of Dane County, WI, residents were contacted via telephone for a screening interview.¹ The first question in the interview asked respondents if they had used the World Wide Web in the past month; those who did not were thanked for their time, and the interview was discontinued.² For those who had used the Web in the past month, several other questions were asked regarding the following: personal interest in four different types of scientific information, each measured on a 1–10 scale, and whether they had used the Web more than five times versus five times or less in the past 30 days.³ The gender of each respondent was also identified. If the sum of the four science interest questions was 20 or greater, the respondent was asked to participate in the think aloud interview. Then, to ensure representation across potentially important correlates of information processing in Web sites, and thus more generalizability of our findings, we selected equal numbers of high and low Web users distributed evenly between males and females.⁴ This left us with four high-Web-use males, four high-Web-use females, four low-Web-use males, and four low-Web-use females as participants in our think aloud interviews. At the conclusion of the session, each participant was paid $50.

Procedures

Each participant was run individually in a session that lasted approximately 90 minutes. First, participants engaged in several practice tasks to familiarize themselves with the process of thinking aloud (Ericsson & Simon, 1993). Specifically, they were asked to think aloud while engaging in more and more complex tasks: mental addition of two 3-digit numbers, solving anagrams, and reading a brief article from a print magazine.
The final practice task—lasting from five to fifteen minutes—was to surf a science-related World Wide Web site (“The Exploratorium”—http://www.exploratorium.com) in order to make the participant comfortable with our computer setup and with the process of expressing thoughts while engaging in a task very similar to the primary think aloud task.

The primary task for the think aloud interview was to surf the Web using a Macintosh computer, either Internet Explorer or Netscape Web browser software (depending on the participant’s preference), a 14” color monitor, and either a 14.4 modem or a direct Ethernet connection (depending on the participant’s typical connection speed when using the Web). The task initially placed participants on the home page of The Why Files Web site, but participants were informed that they were free to navigate from there to anywhere on the Web. The task lasted about 30 minutes for most participants.

An audiotape recording was made of the complete interview, beginning with the first practice task. We also produced a synchronized, picture-in-picture video recording of the following: (a) the facial expression of the individual during the practice and formal Web think aloud tasks; and (b) the images on the computer screen during this time using a direct feed from the computer. A transcript of the audio portion of the interview was used for unitization and categorization tasks. Due to a technical problem, there was no video information available for one of the participants (a low-Web-use male); thus, the final number of interviews analyzed was reduced to 15.

Operationalizations and Intercoder Reliability

Intercoder reliability was assessed by having two trained coders independently code the practice Web site think aloud protocols.
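The excerpt does not specify which reliability statistic the coders' agreement was summarized with, but for categorical codes like these a standard choice is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below shows that computation on invented coder data; it is an illustration of the general technique, not the article's actual analysis.

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' categorical labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of units both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: product of each coder's marginal proportions,
    # summed over all categories either coder used.
    categories = set(coder_a) | set(coder_b)
    expected = sum(
        (coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)


# Invented codes for five thought units from each of two coders.
a = ["orientation", "elaboration", "orientation", "maintenance", "evaluation"]
b = ["orientation", "elaboration", "orientation", "elaboration", "evaluation"]
print(round(cohens_kappa(a, b), 3))
```

Here the coders agree on 4 of 5 units (80%), but kappa is lower because some of that agreement would occur by chance.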