Improving the Usefulness and Acceptability of Automated Detection and Tracking Systems

Sharon McFadden, Melissa Lauz and Kirsten Stanger
Defence and Civil Institute of Environmental Medicine
1133 Sheppard Avenue West, Toronto, Ontario, Canada, M3M 3B9

Abstract

Automation is being introduced increasingly in an effort to improve the accuracy and timeliness of information processing within complex computer-based systems. One task that has received considerable attention in this respect is the detection and tracking of targets on sensor and tactical systems. However, initial experience with automated detection and tracking systems has been less than satisfactory. They are often seen as unreliable and workload intensive, and when they are perceived as reliable, operators often fail to monitor them adequately. Most of the effort to improve the performance of these systems has focused on finding better algorithms. In contrast, the research presented in this paper has focused on understanding when and how operators make use of automated detection and tracking systems and the conditions under which such systems are likely to improve performance and reduce workload. This paper summarizes the most pertinent results of our research examining human use of a simulated automated detection and tracking system as a function of reliability, workload, and experience. Our research shows that each of these factors affects system performance and the perceived and actual usefulness of the automated system. The best performance tends to be associated with a moderately reliable tracking system with a low false detection rate, while perceived reliability tends to be associated with low workload. Under low workload conditions, the operators' ability to detect targets improves, but their ability to detect automation-induced errors does not. Suggestions for improving the usefulness of automated detection and tracking systems are offered.
Introduction

Improved sensors and processing capability have increased the amount of information that must be handled, while efforts to reduce manning have resulted in fewer operators to handle the available information. The only feasible solution to this increasing data overload is automation. Thus, we can expect automated systems to be an integral part of the operations room of the future, and it is important to understand what they will and will not do and how best to employ them.

Experience with automated systems has often been less than satisfactory. In many cases, operators have underused them or rejected them entirely. This is especially true of automated detection and tracking systems. The response of system developers has been to try to improve the algorithms used by the automated systems. We have taken a somewhat different approach: we have tried to understand why operators might not be using these systems effectively and under what conditions they would.

An automated detection and tracking system (ADT) mimics an operator by scanning incoming signals to determine which ones are most likely to represent new targets (other ships, submarines, etc.) or the latest position of existing targets. The ADT detects targets by comparing the characteristics of the signal with a predefined template; it tracks the position of targets by comparing the location and strength of new signals with the location of existing targets. If the position of a signal is within a certain area surrounding an existing target, the automated tracker associates that signal with the target, updating its position. If the criterion used by the ADT is too stringent, real targets will be missed and tracks will be lost. On the other hand, a lax criterion can result in false signals, and signals from other targets, being incorrectly associated with a target.
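As a concrete illustration of this gating step, the short Python sketch below associates each existing track with the nearest new signal that falls inside a gate of a given radius. It is a minimal, hypothetical example written for this discussion, not the algorithm used in any fielded ADT; the function name, the greedy nearest-neighbour rule, and the gate_radius parameter are all assumptions introduced for illustration.

```python
import math

def associate_signals(tracks, signals, gate_radius):
    """Assign each track the nearest unused signal inside its gate (illustrative only).

    tracks      -- dict mapping track id to the (x, y) position last shown on the display
    signals     -- list of (x, y) signal positions from the current update
    gate_radius -- the association criterion: too small and real updates are
                   rejected (lost tracks), too large and clutter or a nearby
                   target's signal can be swallowed (misassociations)
    Returns a dict mapping track id to the chosen signal index, or None.
    """
    assignments = {}
    used = set()
    for track_id, (tx, ty) in tracks.items():
        best_index, best_dist = None, gate_radius
        for i, (sx, sy) in enumerate(signals):
            if i in used:
                continue
            dist = math.hypot(sx - tx, sy - ty)
            if dist <= best_dist:
                best_index, best_dist = i, dist
        assignments[track_id] = best_index
        if best_index is not None:
            used.add(best_index)
    return assignments

# Target A was last seen at the origin; its new echo is at (0.8, 0.0) and a
# clutter return sits at (3.0, 2.0).
tracks = {"A": (0.0, 0.0)}
signals = [(0.8, 0.0), (3.0, 2.0)]
print(associate_signals(tracks, signals, gate_radius=0.5))       # {'A': None} - gate too stringent, update missed
print(associate_signals(tracks, signals, gate_radius=1.0))       # {'A': 0}    - correct update
print(associate_signals(tracks, [(3.0, 2.0)], gate_radius=4.0))  # {'A': 0}    - lax gate grabs the clutter return
```

The two failure modes described above fall out directly: with a stringent gate the genuine echo is rejected and the track goes stale, while with a lax gate a clutter return is accepted whenever the true signal is weak or absent.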
Given the variability in the strength, speed and paths of targets, it is probably inherently impossible to design a perfect automated detector and tracker. A more realistic goal is to design a useful automated system. A useful system may be defined as one that improves overall system performance, reduces operator workload, and is used efficiently and effectively by the operator. Informal comments by operators of naval systems incorporating automated detection and tracking algorithms indicate that these goals are not currently being met, despite extensive effort to develop improved algorithms for filtering the incoming data. As one operator recently put it, "the best thing about the automated detection and tracking system is the off button".

Unfortunately, there has been little research on human use of automated detection and tracking systems. However, research on other automated systems suggests reasons why operators may reject these systems. For example, Moray and his colleagues (Lee and Moray, 1992; Muir and Moray, 1996) found that the use of an automated controller varied as a function of the operator's trust in it. Unless the automated controller was perceived as reliable, an operator did not use it to its full potential. Even when an aid is perceived as reliable, users may still use it suboptimally in order to maintain some feeling of control over the system (Morris, Rouse and Ward, 1988; Weisgerber and Savage, 1990). When users do rely on an automatic controller, they often fail to adequately monitor the automated system and miss errors (Parasuraman, Molloy and Singh, 1993; Parasuraman, Molloy, Mouloua and Hilburn, 1996; Wiener, 1985). These findings suggest that it may not be a simple task to develop an automated detection and tracking system that operators will use effectively and efficiently.

Previous research

Our initial studies looked at the use and usefulness of an Automated Tracker (AT) in a target detection and tracking task as a function of the perceived and actual reliability of the AT, task difficulty, and user experience (McFadden, Giesbrecht and Gula, 1998; McFadden, Vimalachandran and Blackmore, 1999). Providing only an automated tracker allowed the user the option of conducting the task manually or handing some or all of the targets over to the automated tracker. The results of these studies showed that people would make use of an AT if they perceived it as reducing their workload, even if it was not completely reliable. However, the extent to which the AT was used frequently depended on previous experience. On average, participants who had extensive experience with a reliable AT tended to make less use of a moderately reliable AT (McFadden et al., 1999). As well, participants who were able to carry out the task manually or with a moderately reliable AT tended to make less use of a reliable AT, at least under low workload conditions (McFadden et al., 1998). Without any kind of AT, most participants could track about 4 to 6 targets in the time available.
With an AT that could track about 75% of the targets assigned to it, most participants were able to consistently monitor 90 to 100% of the strong targets. With the availability of a higher reliability AT, participants were more likely to detect and track weak targets as well. Moreover, workload, as measured by the time spent interacting with the computer and by perceived effort, declined substantially. However, the percentage of targets tracked correctly did not improve beyond that found with the less reliable AT. Ideally, one would have expected performance to be at least equivalent to AT reliability, if not better. Thus, if the AT could track 95% of the detected targets, the percentage of targets tracked correctly should have been similar.

At low levels of AT reliability, the increase in the percentage of targets tracked came from a substantial reduction in lost targets. The AT relieved the participants of having to manually update each target. Freed from this time-consuming task, the participants had the time to search for weak targets. Thus, the percentage of missed targets also tended to decline as reliability increased, but primarily at higher levels of reliability. Unfortunately, as reliability increased, a new type of error became more frequent: misassociations. Effectively, a decrease in missed targets was offset by an increase in misassociated targets.

Misassociations occurred in clearly defined situations where two targets passed close to one another. At that point, either the human or the AT could become confused as to which signal went with which target. If the targets were misassociated, the effect could often be seen in an unexpected deviation in the tracks of the misassociated targets on the display. A moderately reliable AT would sometimes fail to update the misassociated target. At that point, participants tended to re-add the target to the display instead of updating its position manually. This would get rid of the misassociation. A reliable AT was more consistent in updating targets. Thus, unless the participants noticed that the AT was associating the signals from one target with the marker originally assigned to a different target, the error continued until the target's path took it beyond the limit of the target display. Some participants were relatively good at handling misassociated targets; most were very poor.

Overall, our results suggest that providing an AT can both improve performance and reduce workload. Moreover, most participants will make use of the AT, especially under high workload conditions. However, this use may be modified by previous experience. Thus, it would seem to be important to give users extensive experience with the automated system under the conditions in which it will be used. Finally, the benefits of an AT are not likely to be fully realized unless the percentage of automation-induced errors such as misassociations is reduced.

Usefulness of an automated detection and tracking system

Having determined that even a moderately reliable AT would be used and could be effective in improving performance and reducing workload, the next step was to assess the effect of adding an automated detection function. In particular, would adding an automated detection function affect the percentage of misses and misassociations? In other tasks, automation-induced errors have been reduced by keeping the operator more directly involved in the task.
For example, Parasuraman et al. (1993) found that automation-induced errors were reduced when the reliability of the automated system was varied over time. Reducing AT reliability has somewhat the same effect: some targets will continue to be tracked consistently while others will be lost from time to time. However, results with the AT showed that while this produced fewer misassociations, the number of missed targets and workload increased substantially. Thus, the overall benefit was nil.

Reduced AT reliability may be partially compensated for by introducing an automated detection function. Strong targets that the AT fails to update will be re-added to the display. In addition, the detection function reduces the requirement to scan large quantities of data for new targets. On the other hand, adding a detection function changes the nature of the task. The operator no longer has the option of handling some targets manually. He or she becomes a passive monitor of the automated system whose primary role is to fix the errors introduced by it. Under such conditions, users may be even less effective monitors of the automated system.

To assess the potential of the detection function, performance and workload were assessed under two different detection criteria. With the first criterion, the automated detection function added only targets to the display. Participants received no assistance with detecting weak targets, but they did not have to deal with a large number of false alarms. With the second criterion, the automated function would potentially detect weak targets; however, it would also add non-targets. Detection threshold was manipulated to determine whether the number of missed targets could be reduced by increasing the false alarm rate, and what impact the lower threshold had on workload. As well, both a moderate and a high reliability automated tracker were tested with the two levels of detection threshold to see if the addition of an automated detection function compensated for the increased workload and higher miss rate associated with a lower reliability AT. Targets that were lost by the automated tracker would, in many cases, be re-added to the display automatically, thereby reducing the workload associated with coping with a moderately reliable tracker.

Automated Detection and Tracking Simulation System (ADTS)

The current experiment was carried out using the Automated Detection and Tracking Simulation (ADTS) experimental control software. The ADTS is a simulation of a target detection and tracking task for studying human use of an automated system. A schematic of the ADTS screen is shown in Figure 1. User tasks are performed through a series of tracking display (shown on the left of the figure), signal table (shown on the right), and function button (far right) selections. All selections are carried out via a mouse.

Figure 1: Schematic of the display for the Automated Detection and Tracking Simulation (ADTS) system. The "X" for the unassociated target marker appears as a white X in a black circle in the actual display. The dashed line in sector 1 of the tracking display shows the path traced out by that target. The solid line at the top of the same target shows the projected direction for that target. The numbers 1-4 are sector labels and the numbers 5 and 10 are distance markers.

The ADTS is a modification of the ATS simulation used in our previous research. A detailed description of the Automated Tracking System (ATS) is available in McFadden et al. (1998). The task presented by the ADTS is to detect and then track the location of various targets (e.g. vessels) over time on the tracking display, using information about the current location of a set of signals that is presented at regular intervals in the signal table. Some of the signals are due to the targets being tracked, some to targets that have not yet been detected, and the remainder are non-target signals. To add a new target, the participant selects the desired signal by positioning the cursor over it and clicking the mouse button, and then selects the add function button in a similar way. To update the position of a target, the participant selects the desired signal and the existing target marker and then selects the associate button. Each time new information is presented, the associations between the target markers and the signals in the signal table are broken. Target markers that are not associated with a current signal have an X in them.

The ADT acts as a preprocessor for the task. Each time new information is presented in the signal table, the ADT tries to update the position of all targets previously added to the tracking display, as well as adding any new signals with a strength above a predefined threshold. The participant's task is to monitor the performance of the ADT, adding missed targets, removing or updating targets that were not handled by the automated system (circles with an X), and correcting errors.
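The division of labour just described can be summarised in a short sketch. The following Python fragment, which reuses the hypothetical associate_signals helper from the earlier example, is one plausible rendering of a single ADT pass over a new signal-table frame; it is an illustrative reconstruction based on the description above, not the ADTS code, and the identifiers are invented.

```python
def adt_update_cycle(tracks, signals, strengths, gate_radius, detect_threshold):
    """One illustrative ADT pass over a new frame of the signal table.

    tracks           -- dict mapping track id to the (x, y) marker position on the display
    signals          -- list of (x, y) signal positions in the new frame
    strengths        -- signal strengths, parallel to `signals`
    detect_threshold -- unassociated signals at or above this strength are
                        auto-added as new targets
    Returns the updated tracks and the ids the operator still has to handle.
    """
    assignments = associate_signals(tracks, signals, gate_radius)

    unresolved = []                      # markers that would show an "X"
    for track_id, sig_index in assignments.items():
        if sig_index is None:
            unresolved.append(track_id)  # the ADT failed to update this target
        else:
            tracks[track_id] = signals[sig_index]

    # Automated detection: any leftover signal that is strong enough becomes a
    # new track (the ids used here are invented for the sketch).
    used = {i for i in assignments.values() if i is not None}
    for i, strength in enumerate(strengths):
        if i not in used and strength >= detect_threshold:
            tracks[f"new-{i}"] = signals[i]

    return tracks, unresolved
```

Anything left in unresolved corresponds to a marker displayed with an X, which the operator must update, remove, or re-add by hand; the auto-added tracks are where a low detection threshold lets clutter onto the display.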
The set of target signals that the participant must track is called a scenario. Each scenario contains a selection of targets that follow either a straight line, zigzag, circular, or experimenter-defined path. In addition, several non-target signals are presented each time the participant receives new information about the position of the target signals. Non-target signals always have a strength of less than 20 and never last for more than one to two update periods. The strength of the target signals ranges from 0.1 to 100. At the beginning of a run, signals for new targets are added at a fixed rate until the maximum number of target signals for the condition under test has been reached. If a target's path takes it off the screen, the signal from a new target is added to the signal table.

Method

Four different combinations of detection threshold (10 and 20) and tracking reliability (approximately 87% and 98%) were run under two different levels of task difficulty (10 or 20 targets). With a detection threshold of 20, the ADT added only targets, since all non-target signals had a strength of less than 20. With a threshold of 10, the ADT would also add weak targets and some non-target signals. Participants completed four scenarios under each combination of test conditions over a period of four days. Prior to carrying out the test session, participants completed three training sessions. These introduced them to the task and gave them practice on simpler scenarios during which they had to detect and track either 6 or 8 targets. In the second and third training sessions, they had access to an ADT that tracked 98% of the detected targets and added all signals with a strength of 20 or higher. New information was added to the signal table every 40 seconds.
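To make the design concrete, the sketch below enumerates the eight test conditions and generates one toy signal-table frame with the signal properties described above. The path generators, the uniform sampling of strengths and clutter positions, and all names are assumptions made purely for illustration; the actual ADTS scenarios were scripted by the experimenters.

```python
import itertools
import math
import random

# The 2 x 2 x 2 design: detection threshold x tracker reliability x target load.
DETECTION_THRESHOLDS = (10, 20)      # 20 excludes all clutter, whose strength is below 20
TRACKER_RELIABILITY = (0.87, 0.98)   # approximate proportion of targets the tracker updates
TARGET_LOADS = (10, 20)              # number of targets the participant must track
UPDATE_INTERVAL_S = 40               # a new signal-table frame every 40 seconds

conditions = list(itertools.product(DETECTION_THRESHOLDS, TRACKER_RELIABILITY, TARGET_LOADS))
print(len(conditions), "conditions:", conditions)

def target_position(path, t):
    """Toy generators standing in for the straight-line, zigzag and circular paths."""
    if path == "straight":
        return (0.5 * t, 0.5 * t)
    if path == "zigzag":
        return (0.5 * t, 5.0 if (int(t) // 40) % 2 else -5.0)
    if path == "circular":
        return (10.0 * math.cos(t / 60.0), 10.0 * math.sin(t / 60.0))
    raise ValueError(f"unknown path type: {path}")

def scenario_frame(paths, t, n_clutter=5):
    """One signal-table frame: target signals plus short-lived clutter."""
    frame = []
    for path in paths:
        # Target strengths span roughly 0.1 to 100 in the ADTS scenarios.
        frame.append((target_position(path, t), random.uniform(0.1, 100.0)))
    for _ in range(n_clutter):
        clutter_pos = (random.uniform(-20, 20), random.uniform(-20, 20))
        frame.append((clutter_pos, random.uniform(0.1, 19.9)))  # clutter strength < 20
    return frame

frame = scenario_frame(["straight", "zigzag", "circular"], t=UPDATE_INTERVAL_S)
```

Running the snippet prints the 2 x 2 x 2 grid of conditions; each condition was tested with four scenarios of this general form.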
Results and discussion

The percentages of targets detected and tracked under the various conditions are shown in Figure 2, along with the results under similar conditions from an earlier experiment (McFadden et al., 1999) in which an automated detection function was not available (threshold = 100). As can be seen, the percentage of targets tracked was very similar across all conditions. Adding an automated detection capability did not lead to improvements in performance over what had been achieved in previous experiments using just the automated tracker. Moreover, the same pattern and percentage of errors was found: misses declined and misassociations increased as tracker reliability increased.

Figure 2: Percentage of targets tracked, misses, misassociations, and time taken as a function of the threshold of the automated detection function, the reliability of the tracker, and the number of targets that the participant had to track. The results for threshold = 100 were collected in a previous experiment in which an automated detection function was not available.

The threshold for the automated detection function had little effect on targets tracked, but it did affect time taken, or workload. It took approximately 20-25% longer to handle misses and errors when the threshold was set to 10 compared to when it was set to 20. Most of that extra time was taken up handling the larger number of false alarms in the lower threshold condition. With the threshold set to 20, time on task was about 10% lower than when an automated detection function was not available. However, in this study at least, that did not translate into
