An Experimental Design on the Use of Rancor Microworld Simulator: A Comparison of Human Performances between Actual Operators and Students

Jooyoung Park, Sungheon Lee, Jonghyun Kim, Ronald Boring, Thomas Ulrich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Human reliability analysis (HRA) is a method for evaluating human errors and estimating human error probabilities (HEPs) for application in probabilistic safety assessment (PSA) [1]. To estimate error probabilities with high quality and reliability, most HRA methods have been developed on the basis of human reliability data sets collected from actual historical measurements, expert judgement, simulator studies, or experimental research. Notably, the oldest HRA method, the Technique for Human Error Rate Prediction (THERP) [1], suggested how to estimate HEPs from expert judgement together with sparse empirical and experience-based data. Nevertheless, a lack of adequate data is still highlighted as a major challenge in the field of HRA [2]. In fact, many HRA methods widely used by utilities and regulatory agencies for nuclear power plants (NPPs) were developed based on THERP data, which were generated from the early 1970s until the late 1980s, mostly from non-nuclear experience. Although new technologies such as digital main control rooms (MCRs) are already implemented in new or upgraded NPPs, HRA methods are still applied as-is, without modification to accommodate the differences introduced by digital technologies. For this reason, several institutes and researchers have attempted to collect HRA data from event reports, simulator studies, or multiple sources [3]. Most current studies concentrate on collecting data with full-scope simulators. The largest current efforts are led by the U.S. Nuclear Regulatory Commission (U.S. NRC) and the Korea Atomic Energy Research Institute (KAERI), which collect data from full-scope digital MCR simulators using the Scenario Authoring, Characterization, and Debriefing Application (SACADA) database [4] and the Human Reliability data Extraction (HuREX) framework [5], respectively.
In keeping with the need for human reliability data sources, Idaho National Laboratory (INL) has begun to collect HRA data using a simplified simulator, the Rancor Microworld simulator, in contrast to the full-scope simulators used by the U.S. NRC and KAERI. Specifically, INL's study concentrates on providing data that complement what KAERI and the U.S. NRC are collecting through full-scope simulators, as well as on collecting data specific to digital MCRs and dynamic HRA. The present study represents an early effort to collect HRA data using the simplified Rancor Microworld simulator. It aims to experimentally investigate whether students could serve as subjects for collecting HRA data in place of actual operators. To this end, the study compares human performance between actual operators and students as measured across benchmark experiments. Six human performance measures are taken from the experiments: 1) average completion time per instruction, 2) number of secondary tasks, 3) error rate, 4) workload, 5) situation awareness, and 6) patterns of attention. These are then compared and analyzed using appropriate statistical methods.
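As an illustration of the kind of operator-versus-student group comparison described above, the following is a minimal sketch of a Welch's t-test on one performance measure. The data values are hypothetical placeholders, not figures from the study, and the abstract does not specify which statistical tests were actually used.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Unbiased sample variances of each group
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    # Standard error of the difference in means
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

# Hypothetical average completion times per instruction (seconds);
# purely illustrative, not data from the experiments.
operators = [12.1, 10.8, 11.5, 13.0, 12.4]
students = [15.2, 14.1, 16.3, 13.9, 15.7]

t = welch_t(operators, students)
print(f"Welch's t = {t:.2f}")  # negative t means operators were faster on average
```

In practice a library routine such as `scipy.stats.ttest_ind(..., equal_var=False)` would also report the p-value; the hand-rolled statistic above just makes the arithmetic of the comparison explicit.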
Original language: American English
Title of host publication: Transactions of the Korean Nuclear Society Autumn Meeting 2019
State: Published - Oct 2019
