Research Article | Volume 5, Issue 2, Supplement, S3-S40, July 2014

The NCSBN National Simulation Study: A Longitudinal, Randomized, Controlled Study Replacing Clinical Hours with Simulation in Prelicensure Nursing Education

      Providing high-quality clinical experiences for students has been a perennial challenge for nursing programs. Short patient lengths of stay, high patient acuity, disparities in learning experiences, and the amount of time instructors spend supervising skills have long been issues. More recently, other challenges have emerged: more programs competing for limited clinical sites, faculty shortages, facilities not granting students access to electronic medical records, and patient safety initiatives that decrease the number of students allowed on a patient unit or restrict their activity to observing care.
      With high-fidelity simulation, educators can replicate many patient situations, and students can develop and practice their nursing skills (cognitive, motor, and critical thinking) in an environment that does not endanger patients. As the sophistication of simulation has grown over the last 10 years, the number of schools using it has increased as well, and boards of nursing (BONs) have received requests from programs for permission to use simulation to replace some traditional clinical experience hours. However, the existing literature does not provide the level of evidence that BONs need to make a decision on simulation as a replacement strategy. Though studies indicate that simulation is an effective pedagogy, they lack the rigor and generalizability to provide the evidence needed to make policy decisions. The NCSBN National Simulation Study, a large-scale, randomized, controlled study encompassing the entire nursing curriculum, was conducted to provide the needed evidence.
      Incoming nursing students from 10 prelicensure programs across the United States were randomized into one of three study groups:
      • Control: Students who had traditional clinical experiences (no more than 10% of clinical hours could be spent in simulation)
      • 25% group: Students who had 25% of their traditional clinical hours replaced by simulation
      • 50% group: Students who had 50% of their traditional clinical hours replaced by simulation
      The study began in the Fall 2011 semester with the first clinical nursing course and continued throughout the core clinical courses through graduation in May 2013. Students were assessed on clinical competency and nursing knowledge, and they rated how well their learning needs were met in both the clinical and simulation environments.
      A total of 666 students completed the study requirements at the time of graduation. At the end of the nursing program, there were no statistically significant differences in clinical competency as assessed by clinical preceptors and instructors (p=0.688); there were no statistically significant differences in comprehensive nursing knowledge assessments (p=0.478); and there were no statistically significant differences in NCLEX® pass rates (p=0.737) among the three study groups.
      The study cohort was also followed for the first 6 months of clinical practice. There were no differences in manager ratings of overall clinical competency and readiness for practice at any of the follow-up survey time points: 6 weeks (p=0.706), 3 months (p=0.511), and 6 months (p=0.527) of practice as a new registered nurse.
      The results of this study provide substantial evidence that substituting high-quality simulation experiences for up to half of traditional clinical hours produces comparable end-of-program educational outcomes and new graduates who are ready for clinical practice.

      Introduction

      Nursing education in the United States is at the crossroads of tradition and innovation. High-fidelity simulation is emerging to address 21st-century clinical education needs and move nursing forward into a new era of learning and critical thinking. However, this technology raises key questions:
      • Is high-fidelity simulation sufficient to help students adequately learn and meet the competencies demanded in a challenging, high-acuity, 21st-century practice environment?
      • How do student outcomes after simulation compare with those of traditional clinical education?
      Traditionally, nursing students in the United States receive didactic instruction in the classroom setting and develop technical skills, enhance critical thinking, and learn the art and practice of nursing in a clinical environment. (Hereafter, these experiences in the clinical environment are referred to as traditional clinical experiences.) In the clinical environment, students are assigned patients and provide care under the supervision of a clinical instructor. Ideally, traditional clinical experiences offer a wide breadth of learning opportunities, allowing students to practice skills; increase clinical judgment and critical thinking; interact with patients, families, and members of the health care team; apply didactic knowledge to actual experience; and prepare for entry into practice.
      However, the number of undergraduate programs has increased, creating more competition for clinical placement sites. Patient safety initiatives at some acute-care facilities have reduced the number of nursing students permitted on a patient unit at one time, creating even fewer educational opportunities. In addition, faculty members report that increasing restrictions on what students may do in clinical facilities and the time students spend in clinical orientation are barriers to optimizing students' clinical learning (Ironside & McNelis).
      These recent issues, along with the existing challenges of clinical education (such as variability in patient acuity and census and decreased lengths of stay), have educators looking for new ways to prepare students for the complex health care environment.
      No real alternatives to the traditional clinical model existed before the advent of increasingly sophisticated patient simulators (Gaba). Medium- and high-fidelity human simulators appeared in medical education in the 1960s, but they did not appear in undergraduate nursing education programs until the late 1990s. The use of this technology accelerated in nursing programs in the mid-2000s as faculty realized that simulation allowed students to practice skills, critical thinking, and clinical decision making in a safe environment.
      With the challenges of providing high-quality clinical experiences and the availability of high-fidelity manikins, the use of simulation in nursing education has grown rapidly. In 2002, Nehring and Lashley surveyed nursing schools and simulation centers on the use of patient simulators. To be included in the survey, a program had to have purchased a patient simulator from Medical Education Technologies, Inc. before 2002; only 66 nursing programs received surveys. Just 8 years later, a National Council of State Boards of Nursing (NCSBN) survey found that 917 nursing programs were using medium- or high-fidelity patient manikins in their curriculum (Hayden).
      As simulation use increased, boards of nursing (BONs) received requests from programs for permission to use simulation to replace some of the traditional clinical experience hours. However, the existing literature did not provide the level of evidence BONs needed to make a decision on simulation as a replacement strategy. In 2009, during discussions of nursing education, BONs raised concerns about the availability of clinical sites, the quality of the clinical experiences, the amount of time students were spending in observational experiences rather than providing direct care, and the amount of time clinical instructors were spending supervising skill performance. Many believed simulation could address these issues, though concerns existed:
      • How much simulation should be used?
      • Are students receiving a quality experience with simulation when nine students are observing and three are performing?
      • Can simulation be used for all undergraduate courses?
      The existing literature did not provide the answers.

      Review of the Literature

      Simulation in the education of health care practitioners is not a new concept. Nehring (2010a) notes that as early as 1847, the Handbook for Hospital Sisters called for “every nursing school to have 'a mechanical dummy, models of legs and arms to learn bandaging, a jointed skeleton, a black drawing board, and drawings, books, and models' (p. 34)” (p. 10).
      Nehring describes Mrs. Chase, the first life-size manikin, produced in 1911 for the purpose of nursing education. Over the years, Mrs. Chase underwent modifications and improvements and was joined by a male version and a baby version (Nehring, 2010a). In the 1960s, a manikin called Resusci Anne appeared for cardiopulmonary resuscitation (CPR) training (Hovancsek). Next came Sim One in 1969 to train anesthesia students (Lapkin, Levett-Jones, Bellchambers, & Fernandez) and then Harvey in the 1980s to train medical students to perform cardiac assessments (Hovancsek). Since then, tremendous advances in computer technology have provided nurse educators with the ability to design, develop, and implement complex learning activities in the academic setting. Nursing simulation with sophisticated computerized manikins began in the late 1990s and early 2000s (Hovancsek; Nehring, 2010a).
      With the advent of medium- and high-fidelity manikins, more nursing programs began incorporating them into their curriculum. The first study to describe the prevalence of simulation use was conducted by Nehring and Lashley. Thirty-four nursing programs and six simulation centers participated in the survey. The investigators found that simulation was used most frequently for teaching basic and advanced medical-surgical courses, physical assessment, and basic nursing skills. Of the 35 respondents, 57.1% (n=20) stated that simulation was used as part of clinical time; the other respondents stated that simulation rarely or never replaced clinical time.
      In the spring of 2007, Katz, Peifer, and Armstrong conducted an electronic survey of baccalaureate programs accredited by the National League for Nursing (NLN). Of the 78 responding programs, 79% reported using human patient simulators; about half were using the simulators with case scenarios. Eighteen of the responding schools reported using simulation as a replacement for clinical hours, most frequently in nursing fundamentals, medical-surgical nursing, and obstetric nursing courses.
      A 2010 national survey of prelicensure nursing programs found that 87% of respondents (n=917) were using high- or medium-fidelity simulation in their programs (Hayden). High- and medium-fidelity simulation use was reported most frequently in foundations, medical-surgical, obstetric, and pediatric courses. Sixty-nine percent of respondents reported that they substitute, or have on occasion substituted, simulation for traditional clinical experiences. Substitution occurred most frequently in basic and advanced medical-surgical, obstetric, and pediatric courses, followed by nursing foundations courses. Like the Katz, Peifer, and Armstrong survey, this national study documented the increasing trend toward incorporating simulation experiences into the prelicensure curriculum.

      Simulation Outcome Studies

      As the use of simulation in health care education programs increased, the literature on simulation grew as well; however, research on simulation outcomes understandably lagged behind. When High-Fidelity Patient Simulation in Nursing Education was published in 2010, Nehring found only 13 research articles on nursing student outcomes, namely, satisfaction with the simulation experience (6 studies), self-confidence (7 studies), self-ratings (4 studies), knowledge (4 studies), and skill performance or competence (3 studies). In these reports, the results were mixed. In most of the studies, students reported satisfaction with the simulation experience (Childs & Sepples; Jeffries & Rizzolo; Schoening, Sittner, & Todd) and usually reported higher self-confidence after simulation experiences (Bearnson & Wiker; Bremner, Aduddell, Bennett, & VanGeest; Childs & Sepples; Jeffries & Rizzolo; Schoening et al.). However, in two studies, Alinier and colleagues found no differences in self-confidence ratings (Alinier, Hunt, & Gordon; Alinier, Hunt, Gordon, & Harwood), and Scherer, Bruce, and Runkawatt found significantly higher reports of self-confidence in the control group. Frequently, there were no significant differences between groups overall, but a subscale may have shown a significant difference (LeFlore, Anderson, Michael, Engle, & Anderson; Jeffries & Rizzolo; Kuiper, Heinrich, Matthias, Graham, & Bell-Kotwall; Radhakrishnan, Roche, & Cunningham; Scherer et al.). In general, these and other early studies had small sample sizes, lacked a control group, or lacked randomization, but they laid the groundwork for future research.
      Other nurse scholars have conducted systematic reviews of the nursing literature with similar findings. The original intent of a review conducted by Lapkin, Levett-Jones, Bellchambers, and Fernandez was to perform a meta-analysis of simulation outcomes in nursing. The initial search identified 1,600 articles published between 1999 and 2009. A reasonably large number were research studies; however, even after a relaxation of inclusion criteria, only eight studies could be included. The Lapkin et al. review found that simulation improved critical thinking, skills performance, and knowledge of subject matter. Evidence for an increase in clinical reasoning was inconclusive; however, three components of clinical reasoning—knowledge, critical thinking, and the ability to recognize deteriorating patients—improved with simulation.
      Difficulty in reviewing the simulation research literature is not limited to nursing. Systematic reviews and meta-analyses of the health care literature identify a general lack of appropriately powered, rigorous studies (Cook et al.; Issenberg, McGaghie, Petrusa, Gordon, & Scalese; Laschinger et al.). Issenberg and colleagues' (2010) review of 34 years of the medical simulation literature concluded, “While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.”
      Laschinger and colleagues attempted a meta-analysis of all health care literature to provide a synthesis of the evidence on the effectiveness of simulation in prelicensure education, including medicine, nursing, and rehabilitation therapy, from 1995 to 2006. Though the initial literature review identified 1,118 papers, the meta-analysis could not be performed because of the types of study designs and the quality of the studies. Instead, the authors synthesized the evidence into a systematic review. They found that the use of simulators (from partial-task trainers through high-fidelity manikins) resulted in high learner satisfaction in learning clinical skills, but the overall results of the review were inconclusive on the effectiveness of simulation to train health care professionals. The authors concluded that simulation should be an adjunct for clinical practice, not a replacement: “It remains unclear whether the skills learned through a simulation experience transfer into real-world settings” (Laschinger et al.).
      All the literature reviews reach a common conclusion: Study results are inconclusive regarding the effectiveness of simulation, but they seem generally favorable. All agree that variability in study design, sample sizes too small to detect meaningful effects, and an overall lack of controlled, longitudinal studies make it difficult to draw strong conclusions about the effectiveness of simulation. The literature to date also indicates the need for rigorous, appropriately powered research with a controlled comparison group.

      Study Aims and Significance

      The NCSBN National Simulation Study, a longitudinal, randomized, controlled trial involving nursing programs across the United States, is the largest, most comprehensive study to date exploring whether simulated clinical experiences can be substituted effectively for traditional clinical experiences in the undergraduate nursing program. Students participating in the study were enrolled throughout the entire 2 years of their undergraduate nursing program. The new graduates were then followed for the first 6 months in their first clinical positions to determine the long-term effects of simulation and whether replacing clinical hours with simulation impacts entry into professional practice.
      The aims of this study were to provide BONs with evidence on nursing knowledge, clinical competency, and the transferability of learning from the simulation laboratory to the clinical setting. Specifically, the aims were as follows:
      • To determine whether simulation can be substituted for traditional clinical hours in the prelicensure nursing curriculum, using a large sample of students from different degree programs (associate degree [ADN] and bachelor's degree [BSN]) and various geographical regions of the country
      • To determine the educational outcomes of undergraduate nursing students in the core clinical courses when simulation is integrated throughout the core nursing curriculum
      • To determine whether varying levels of simulation in the undergraduate curriculum impact the practice of new graduate nurses in their first clinical positions.
      This study is reported in two parts: Part I is a randomized, controlled study of nursing students during their educational programs, and Part II is a follow-up survey study of the new graduate nurses and their managers during the first 6 months of clinical practice. Appendix A provides definitions of terms used in this study.

      National Simulation Study: Part I

      Research Questions

      • 1.
        Does substituting clinical hours with 25% and 50% simulation impact educational outcomes (knowledge, clinical competency, critical thinking, and readiness for practice) assessed at the end of the undergraduate nursing program?
      • 2.
        Are there course-by-course differences in nursing knowledge, clinical competency, and perception of learning needs being met among undergraduate students when traditional clinical hours are substituted with 25% and 50% simulation?
      • 3.
        Are there differences in first-time NCLEX pass rates among students randomized into the control group, the 25% group, and the 50% group?

      Method

      Trial Design

      This was a comparison study using a randomized, controlled, longitudinal, multisite design to examine whether time and activities in a simulation laboratory could effectively substitute for traditional clinical hours in the prelicensure nursing curriculum.
      In 2010, prelicensure nursing programs (ADN and BSN) throughout the United States were notified of the study and its requirements via postcard and were invited to apply for participation. After a review of the applications received (n=23) and telephone interviews, 10 nursing programs (five ADN and five BSN) were selected from geographically diverse areas. The programs represented rural and metropolitan communities and ranged from community colleges to large universities.
      New students accepted into these programs and matriculating in the Fall 2011 semester (with an expected graduation after the Spring 2013 semester) were asked to participate in the study. All students who consented were randomized into one of three study groups:
      • Control: Students had traditional clinical experiences (no more than 10% of clinical hours could be spent in simulation)
      • 25% Group: Students had 25% of their traditional clinical hours replaced by simulation
      • 50% Group: Students had 50% of their traditional clinical hours replaced by simulation
      Students remained in their assigned groups throughout the 2 years they were enrolled in the nursing program. Data from course outcomes (clinical competency and course-level ATI scores) and end-of-program outcomes (comprehensive ATI scores, clinical competency, critical thinking, and readiness for practice [End-of-Program Survey]) were collected from all programs and aggregated. These data were compared across the three study groups.

      Study Sites

      Inclusion Criteria

      Schools interested in applying for study participation had to meet the following criteria:
      • BON-approved prelicensure nursing education program
      • ADN or BSN program
      • National accreditation
      • NCLEX pass rates at or above the national rate
      • Maximum of 10% simulation use in any one of its current clinical courses
      • Willingness to randomize students to each of the three study groups
      • Access to a simulation laboratory that could accommodate the number of students and simulation scenarios required by the study
      • Willingness to designate and commit faculty and staff members (hereafter referred to as the study team) to conducting the study from August 2011 through May 2013
      • Availability of the study team to attend three training meetings.
      For final selection, these factors were also considered:
      • Location of the program: Selected study sites were geographically distributed across the United States.
      • Prelicensure nursing curriculum: Selected study sites had comparable clinical course curricula.

      Subjects

      Inclusion Criteria

      Students enrolled in the prelicensure-RN (ADN or BSN) program beginning Fall 2011 at a participating study site, with graduation anticipated in May 2013.

      Exclusion Criteria

      • Accelerated BSN students
      • Degree completion students (RN to BSN students)
      • Any student who already held a nursing license (LPN/VN or RN)

      Procedure

      Institutional Review Board (IRB) approval was obtained by NCSBN through the Western IRB and from the IRB of each study site.
      Each school appointed a study team consisting of faculty and staff members. Having consistent study team members ensured that the scenarios and debriefings were conducted according to the study model and in accordance with best practices for simulation, providing consistency across all study sites. Study team members were required to attend three mandatory training sessions to receive education on the NLN/Jeffries Simulation Framework. Study teams were also taught the Debriefing for Meaningful Learning© method (Dreifuerst).
      In the training sessions, study team members practiced actual simulation scenarios and conducted debriefings with volunteer students. During the final training session, experienced simulation faculty members from multiple simulation centers evaluated debriefings to ensure that the study team members were proficient in the techniques. Throughout the study, team leaders performed ongoing evaluations on their study team members to ensure debriefing methods met the study standards.
      A standardized simulation curriculum was developed and provided to the participating programs to ensure that quality simulation scenarios were used at all study sites. A modified Delphi technique involving the study teams was used to determine the subject matter for the curriculum. The development of the simulation curriculum is described in Appendix B.
      Subsequent to the development of the standardized simulation curriculum, simulation scenarios depicting the patient conditions and key concepts in the curriculum were obtained from publishers and distributed to the programs. When published scenarios were not available for some courses, such as mental health and community/public health, a call for scenarios went to members of the International Association for Clinical Simulation & Learning (INACSL). An expert in nursing simulation reviewed all donated scenarios to ensure they were consistent with the NLN/Jeffries Simulation Framework.
      Faculty members from each program selected simulations from the provided curriculum that would meet their learning objectives. Other processes used to ensure uniformity across study sites included the provision of manikin programming files and consumable supplies necessary for running the scenarios, including labeled simulated medications.

      Traditional Clinical Experiences

      All participants had traditional clinical experiences during each of the seven core nursing courses. The only difference among groups was the number of hours spent in the traditional clinical environment.
      Traditional clinical experiences took place in inpatient, ambulatory, or community settings selected by the schools. Students were assigned patients by clinical instructors and were expected to meet clinical objectives and competencies outlined for the courses. Students were evaluated by a clinical instructor at the end of each week using the Creighton Competency Evaluation Instrument (CCEI). Clinical instructors were required to complete training on the use of the CCEI data collection form before the first day of clinical education for the course.

      Simulated Clinical Experiences

      For students in the two simulation study groups, 25% or 50% of required clinical hours were spent in the simulation laboratory. Control-group students were allowed up to 10% of their clinical hours in simulation. Study sites were allowed flexibility in how they scheduled simulation hours, whether they held pre- or post-conferences, and whether students had to prepare for their “patient care” assignment. The programs were instructed to use requirements for simulation similar to those for the clinical setting. Simulation scenarios involved medium- or high-fidelity manikins, standardized patients, role playing, skills stations, and computer-based critical thinking simulations.
      Simulation scenarios followed the NLN/Jeffries Simulation Framework: learning objectives were clear, problem-solving components were built into the scenarios, fidelity was appropriate for the learning objectives, and structured debriefing followed each scenario. Students were assigned roles during the simulation scenarios, including Nurse 1, Nurse 2, family member, and observer. Students assigned a nursing role were oriented to the scenario and environment and were provided with background patient information, the patient chart, and the change-of-shift report. A student assigned to role-play a family member was instructed by the study team on how to respond during the scenario. Clinical instructors stayed with the remaining students in the clinical group to observe the scenario. All students in the simulation group participated in the debriefing, which was led by a study team member using the Debriefing for Meaningful Learning© method (Dreifuerst).
      Clinical instructors observed the two students in nursing roles throughout the scenario and debriefing and completed a CCEI form for each of them. The same procedure was followed for all seven core courses. Because of the number of students participating in simulation, more than one clinical group was frequently in the simulation laboratory at the same time. Entire clinical groups rotated through stations throughout the simulation day. Appendix C depicts a sample simulation day schedule used by one of the study sites.

      Outcome Measurements

      The study measured students' knowledge, competency, and critical thinking as well as their perceptions of how well their learning needs were met.

      Knowledge

      At the end of the nursing program, knowledge was measured by the ATI RN Comprehensive Predictor® 2010 (Assessment Technologies Institute, LLC), a multiple-choice, Web-based, proctored examination. The examination reports a score as a percentage of correctly answered items as well as scores for the major content areas and eight nursing dimensions categories. The total score is based on 150 items.
      Knowledge of the specialty content in each clinical course was measured using the ATI Content Mastery Series® (CMS) examinations for Fundamentals of Nursing, Adult Medical-Surgical Nursing, Maternal-Newborn, Nursing Care of Children, Mental Health, and Community Health. These examinations use a Web-based format and report scores as a percentage of correctly answered items as well as scores for major content areas and nursing dimension categories. The CMS program includes other features and services that were available to all study participants, but were not required for the study.

      Clinical Competency

      During the study, clinical competency was measured using three instruments: the Creighton Competency Evaluation Instrument (CCEI), the New Graduate Nurse Performance Survey (NGNPS), and the Global Assessment of Clinical Competency and Readiness for Practice.

      Creighton Competency Evaluation Instrument

      The CCEI is a 23-item tool used by clinical instructors to rate students on behaviors that collectively demonstrate clinical competency (assessment, communication, clinical judgment, and patient safety). The tool was used to assess students in the clinical setting and the simulation setting.
      These data were used to monitor how well students were progressing clinically. Detailed validity and reliability statistics are reported by Hayden, Keegan, Kardong-Edgren, and Smiley ("Reliability and validity testing of the Creighton Competency Evaluation Instrument for use in the NCSBN National Simulation Study"). Overall, Cronbach's alpha ranged from 0.974 to 0.979, which is considered highly acceptable. Percent agreement between the faculty raters of the reliability and validity studies and an expert rater was reported at 70% or better for 20 of the 23 items.

      New Graduate Nurse Performance Survey

      The NGNPS, developed by the Nursing Executive Center of the Advisory Board Company, consists of 36 items that assess clinical knowledge, technical skills, critical thinking, communication, professionalism, and management of responsibilities on a six-point Likert scale (Berkow, Virkstis, Stewart, & Conway, "Assessing new graduate nurse performance"). Berkow et al. found a Cronbach's alpha coefficient of 0.972 and a split-half reliability of 0.916 (K. Virkstis, personal communication, March 12, 2013).

      Global Assessment of Clinical Competency and Readiness for Practice

      The Global Assessment of Clinical Competency and Readiness for Practice scale consists of one question that asks the evaluator to rate the graduating student overall on a scale of 1 to 10 (1=among the weakest and 10=among the best). The reliability of this question has not been established; however, a similar question was used in a pilot study of continued competence of RNs. The question in that study was: “Given the above behaviors/tasks, and others that you feel are directly relevant, how would you rate this RN's performance on the competency Management of Care?” An intra-rater reliability of r=0.80 using a test-retest method 1 month apart and an 81% agreement were obtained (Budden, "The intra-rater reliability of nurse supervisor competency ratings").

      National Council Licensure Examination (NCLEX®)

      The NCLEX is “an examination that measures the competencies needed to perform safely and effectively as a newly licensed, entry-level registered nurse” (National Council of State Boards of Nursing [NCSBN], NCLEX-RN test plan). This examination “assesses the knowledge, skills, and abilities that are essential for the entry-level nurse to use in order to meet the needs of clients requiring the promotion, maintenance, or restoration of health” (NCSBN, NCLEX-RN test plan). Content for the examination is based on a practice analysis survey of entry-level nurses conducted every 3 years. The NCLEX is administered using a computerized adaptive testing format in secured, proctored testing facilities.

      Critical Thinking

      Developed by the Nursing Executive Center, the Critical Thinking Diagnostic© assesses critical-thinking ability using five items in each of the following areas: problem recognition, clinical decision making, prioritization, clinical implementation, and reflection (Berkow, Virkstis, Stewart, Aronson, & Donohue, "Assessing individual frontline nurse critical thinking"). The Cronbach's alpha coefficient across all survey items of the Critical Thinking Diagnostic© is 0.976.

      Learning Needs Comparison

      The Clinical Learning Environment Comparison Survey (CLECS) assesses students' perceptions of how well they feel their learning needs were met in the traditional clinical and simulation environments by rating each environment side-by-side on 29 items related to clinical learning. The instrument provides a total score and six subscale scores (communication, nursing process, holism, critical thinking, self-efficacy, and teaching-learning dyad); each subscale has a rating for the traditional clinical environment and the simulation environment. The reported Cronbach's alphas of the subscales in the traditional clinical environment ranged from 0.741 to 0.877 and Cronbach's alphas for the subscales in the simulation environment ranged from 0.826 to 0.913 (K. Leighton, personal communication, June 6, 2013).

      Data Collection

      Students' demographic information was collected at the beginning of the study, when informed consent was obtained. At the beginning of each clinical course, demographic information was also collected from the clinical faculty members completing CCEI ratings on the participants.
      The CCEI was used to assess clinical competency on an ongoing basis throughout the study. In the clinical setting, students were assessed individually once a week. In the simulation setting, two students were rated on their performance in the simulation and the debriefing. During simulation days, students were assessed at least once using the CCEI. CCEI scores were graphed, and the data trends were evaluated weekly as a safety indicator to determine if students were meeting the course objectives.
      All CCEI scores obtained from the clinical and simulation settings were collected weekly. Scores from simulations were used by the participating sites and a data safety monitoring board (DSMB) to monitor academic progress but were not used as study outcome measurements. For the purposes of statistical comparison, the final CCEI rating from the clinical setting was used as a proxy for the final clinical competency rating for the course.
      At the end of each core clinical course, students completed several assessments:
      • CLECS to assess how well learning needs were met in the clinical and simulation learning environments
      • ATI Content Mastery Series computerized assessments of nursing knowledge
      • Student information sheet to determine whether students worked as nursing assistants during the semester, whether they used additional ATI resources that could influence examination scores, and to collect qualitative comments about the study experience.
      During the final weeks of the last semester, clinical preceptors and instructors were asked to complete the End-of-Program Preceptor Survey, which consisted of three instruments to assess clinical competency and critical thinking: NGNPS, Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. Completed surveys were mailed directly to the project director in prepaid reply envelopes.
      At the end of the last semester, students completed the ATI Comprehensive Predictor 2010 for an assessment of overall nursing knowledge. Students also completed an end-of-program CLECS and the End-of-Program Survey. The end-of-program CLECS assessed overall perception of the traditional clinical and simulation settings. Students were instructed to consider all of their clinical courses and make selections based on their experiences overall in both learning environments. The End-of-Program Survey used the same scales as the preceptor version to obtain self-assessment ratings of clinical competency: the NGNPS, Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. To ensure confidentiality of the responses, students mailed completed surveys to the project director using prepaid reply envelopes.
      New graduates were eligible to take the NCLEX-RN after graduating from their nursing program. NCLEX results were collected through December 31, 2013. Table 1 outlines the instruments used and the data collection schedule.
      Table 1. Description of Data Collection Instruments

      Instrument | Completed by | Information Collected | Timing
      Demographic form | Students | Gender, age, race, previous degrees, previous health care or military experience | Beginning of study
      Demographic form | Clinical instructors | Gender, age, race, length of RN and teaching experience, previous experience with simulation | Beginning of semester
      Creighton Competency Evaluation Instrument (CCEI) | Clinical instructors | 23-item competency evaluation (total score and 4 subscales) | Each week of clinical and after every simulation scenario
      ATI Content Mastery Series® examinations | Students | Computerized knowledge assessments | After each clinical course
      Clinical Learning Environment Comparison Survey (CLECS) | Students | Ratings of traditional clinical setting and simulation setting to determine how well learning needs were met | After each clinical course
      End-of-Program Survey | Students | New Graduate Nurse Performance Survey (6 subscales); Critical Thinking Diagnostic (5 subscales); Global Assessment of Clinical Competency and Readiness for Practice (1 item) | End of final semester
      End-of-Program Survey | Clinical preceptors/clinical instructors | New Graduate Nurse Performance Survey (6 subscales); Critical Thinking Diagnostic (5 subscales); Global Assessment of Clinical Competency and Readiness for Practice (1 item) | End of final semester
      ATI RN Comprehensive Predictor® 2010 | Students | Computerized knowledge assessments | End of final semester
      NCLEX® | New graduates | Competency evaluation | Within approximately 7 months of graduation
      Follow-up survey | New graduate nurses | New Graduate Nurse Performance Survey (6 subscales); Critical Thinking Diagnostic (5 subscales); Global Assessment of Clinical Competency and Readiness for Practice (1 item); preparation for practice, length of orientation, charge nurse responsibilities, and workplace stress | 6 weeks, 3 months, and 6 months after practice
      Manager survey | Managers/clinical preceptors | New Graduate Nurse Performance Survey (6 subscales); Critical Thinking Diagnostic (5 subscales); Global Assessment of Clinical Competency and Readiness for Practice (1 item); errors (2 items) | 6 weeks, 3 months, and 6 months after practice

      Power Analysis and Sample Size Determination

      Pauly-O'Neill ("Beyond the five rights: Improving patient safety in pediatric medication administration through simulation") reported large effect sizes associated with the use of simulation, but that study did not examine different amounts of simulation. One might expect a large effect; however, the comparisons among the three amounts of simulation may have smaller effects. Based on these considerations, an effect size of d=0.35 was selected for analysis. This effect size falls between what Cohen (Statistical Power Analysis for the Behavioral Sciences) calls a small effect (d=0.20) and a medium effect (d=0.50). Assuming this effect size, a two-tailed alpha of 0.05, and a power of 0.92, a sample of 200 students per group was needed. With three groups, a total sample of 600 was required.
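      As a rough check on these figures, the per-group sample size can be reproduced with the standard normal-approximation formula for a two-tailed, two-sample comparison of means. This is a sketch only: the authors do not report which software or formula they used. The approximation gives roughly 185 students per group, so the 200 per group specified above is consistent with a more conservative method or an allowance for attrition.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(d, alpha=0.05, power=0.92):
    """Per-group sample size for a two-tailed, two-sample comparison
    of means, using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-tailed test
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)


# d=0.35, alpha=0.05, power=0.92 gives 185 per group (555 total for 3 groups)
```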

      Safety Monitoring

      In addition to IRB approval, other mechanisms were used to ensure the study intervention was not compromising a study group or placing students at risk for poor performance.
      A committee was established at each school to provide internal oversight for the study. Committees consisted of program administrators, the study team leader, other study team members, course faculty members, and other stakeholders. The committee reviewed study progress, resolved site-specific issues, tracked student progress, and provided a structured mechanism for communication regarding the study and any program effects.
      A DSMB was established at NCSBN to review all study data on a continual basis and determine if the study should continue. Members of the DSMB included the project director, two statisticians, and two prelicensure nursing program directors whose schools were not involved in the study. The DSMB met regularly each semester to review data as it was collected. Aggregated national data and school level data were reviewed to ensure that the simulation groups were progressing in the nursing program and meeting program objectives. In addition to the weekly CCEI data, the DSMB reviewed end-of-semester ATI scores, CLECS ratings, grade point averages, attrition data, and adverse events. DSMB summary reports were submitted to each site for IRB review.

      Recruitment and Randomization of Students

      Student recruitment efforts began in the summer of 2011. All students received a detailed description of the study and were invited to participate. Informed consent was obtained from student volunteers, and demographic data were collected. The study team leader at each site assigned each study subject a study specific identification (ID) number. These ID numbers were forwarded to the lead statistician, who randomized students into one of three study groups (Control, 25% or 50%) using a standard random number generator from Statistical Analysis Systems (SAS).
      The number of students in each cohort varied according to school and state requirements. Attempts were made to maintain a 1:1:1 ratio at each school. Students remained in the same study group assignment for the duration of the nursing program.

      Data Analysis

      Data on students were collected throughout the 2 years of the study. Paper-based data collection forms were used for the majority of data collection instruments. Data were manually entered into a data spreadsheet using a double-key entry process. ATI scores were received directly from ATI on a per student basis as de-identified data, using the study ID numbers. SAS version 9.2 was used for all analyses.
      Basic descriptive statistics were run on all data. Parametric and nonparametric tests were used as indicated for each type of data. Multivariate analysis of variance (MANOVA) procedures were employed to check possible covariates (school, gender, age, ethnicity, race, nursing assistant experience, previous degree, and use of additional ATI products) for interaction effects. When the covariates were included in the MANOVA model, the Wilks' Lambda value did not change substantially, and interaction effects were determined to be nonsignificant. For all tables, the effect sizes displayed represent the maximum effect size calculated when comparing the means of the control, 25%, and 50% groups to each other.
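      The "maximum effect size" across pairwise group comparisons can be sketched as follows. This assumes Cohen's d with a pooled standard deviation, which matches the d convention used in the power analysis; the authors do not state their exact formula, so treat this as an illustration.

```python
import math


def cohens_d(sample_a, sample_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd


def max_pairwise_d(*groups):
    """Largest absolute d over all pairs of groups, as displayed in the tables."""
    ds = [abs(cohens_d(a, b))
          for i, a in enumerate(groups) for b in groups[i + 1:]]
    return max(ds)
```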

      Results

      Sample

      A total of 847 students consented to participate in the study, and they were randomized into the three study groups. The number of participants randomized at each site ranged from 60 to 103. The study sample was 86% female, 84% white, and just under 18% Hispanic. At the start of the study, the mean age of the study sample was 26.3 years (SD 8.0, range 18–60). Also at the start of the study, almost 16% of the students were certified nurse assistants, 34% had a previous degree, and 3% had prior or current military experience. As Table 2 shows, there was very little variance in the demographic characteristics of the three groups. Statistical analysis showed no difference in demographic characteristics among the three groups with the exception of ethnicity. The statistically significant difference (p=0.043) was between the control group and the 25% group for the number of Hispanic participants.
      Table 2. Demographics of Study Participants

      | Total Study Sample | Control Group | 25% Group | 50% Group | p value
      Gender, n (%):
       Female | 708 (86.1%) | 221 (85.7%) | 246 (85.4%) | 241 (87.3%) | 0.740
       Male | 114 (13.9%) | 37 (14.3%) | 42 (14.6%) | 35 (12.7%) |
      Age, mean (SD) | 26.3 (8.0) | 26.2 (8.1) | 26.0 (7.5) | 26.8 (8.4) | 0.415
      Age group, n (%):
       18-24 years | 458 (55.5%) | 145 (55.8%) | 167 (57.6%) | 146 (52.9%) | 0.865
       25-34 years | 233 (28.2%) | 73 (28.1%) | 78 (26.9%) | 82 (29.7%) |
       35 years or more | 136 (16.3%) | 42 (16.2%) | 45 (15.5%) | 48 (17.4%) |
      Race, n (%) (a):
       White | 690 (84.0%) | 224 (86.2%) | 246 (86.0%) | 220 (80.0%) | 0.080
       Black/African American | 72 (8.8%) | 17 (6.5%) | 23 (8.0%) | 32 (11.6%) | 0.099
       Asian | 59 (7.2%) | 19 (7.3%) | 15 (5.2%) | 25 (9.1%) | 0.210
       Native American/Alaska Native | 9 (1.1%) | 2 (0.8%) | 4 (1.4%) | 3 (1.1%) | 0.780
       Hawaiian or other Pacific Islander | 5 (0.6%) | 3 (1.2%) | 0 (0%) | 2 (0.7%) | 0.213
      Ethnicity, n (%):
       Hispanic | 147 (17.9%) | 34 (13.1%) | 60 (21.0%) | 53 (19.3%) | 0.043
      Experience as a certified nurse assistant, n (%):
       Yes | 129 (15.6%) | 39 (15.0%) | 46 (15.9%) | 44 (15.9%) | 0.950
      Note. Not all subjects provided demographic information; therefore, the column n's do not total to the entire sample size. The only statistically significant p value is 0.043 (ethnicity).
      a Percentages total more than 100% as more than one category may have been selected.
      A total of 666 students completed the study. The demographic characteristics of those students were similar to the demographic characteristics of those who began the study. Among students who completed the study, 87% were female, 87% were white, and just over 18% were Hispanic. Their mean age at the beginning of the study was 26.1 years (SD 7.5, range 18–57). At the end of the study, 34% of the graduating students indicated they worked as a nursing aide or an assistant at some point during the study.
      Differences existed between the demographic characteristics of those who completed the study and those who did not. Nonwhite students, males, and older students tended to drop out of the study. The demographic characteristics of the students who did not complete the study were 82% female, 74% white, and almost 16% Hispanic. Their mean age at the beginning of the study was 27.4 years (SD 9.5, range 18–60). Table 3 lists the demographic characteristics of study participants who completed the study and those who did not.
      Table 3. Demographics of Subjects by Completion Status

      | Completed Study | Did Not Complete Study | p value
      Gender, n (%):
       Female | 570 (87.3%) | 138 (81.7%) | 0.059
       Male | 83 (12.7%) | 31 (18.3%) |
      Age, mean (SD) | 26.1 (7.5) | 27.4 (9.5) | 0.100
      Age group, n (%):
       18-24 years | 370 (56.4%) | 88 (51.8%) | 0.011
       25-34 years | 187 (28.5%) | 46 (27.0%) |
       35 years or more | 99 (15.1%) | 36 (21.2%) |
      Race, n (%) (a):
       White | 566 (86.5%) | 124 (74.3%) | 0.000
       Black/African American | 48 (7.3%) | 24 (14.4%) | 0.004
       Asian | 39 (6.0%) | 20 (12.0%) | 0.007
       American Indian/Alaska Native | 7 (1.1%) | 2 (1.2%) | 0.888
       Hawaiian or other Pacific Islander | 4 (0.6%) | 1 (0.6%) | 0.985
      Ethnicity, n (%):
       Hispanic | 121 (18.5%) | 26 (15.6%) | 0.378
      Experience as a certified nurse assistant, n (%):
       Yes | 98 (14.9%) | 31 (18.2%) | 0.288
      Previous degree, n (%):
       None (b) | 426 (64.8%) | 117 (68.8%) | 0.476
       Associate | 83 (12.6%) | 22 (12.9%) |
       Baccalaureate or higher | 148 (22.5%) | 31 (18.2%) |
      Military experience, n (%):
       Yes | 21 (3.2%) | 7 (4.2%) | 0.554
       Medical Corps | 4 (0.6%) | 4 (2.4%) | 0.038
       Reservist | 1 (0.2%) | 1 (0.6%) | 0.302
      Note. Not all subjects provided demographic information; therefore, the column n's do not total to the entire sample size. Statistically significant p values: age group (0.011), White (0.000), Black/African American (0.004), Asian (0.007), and Medical Corps (0.038).
      a Percentages total more than 100% as more than one category may have been selected.
      b Includes Emergency Medical Technicians and paramedics if no additional degree was listed.

      Attrition

      The rate of completion for the study sample was 79%. Completion rates were the same for the control group and 25% group at 81%; the completion rate for the 50% group was 74%. Students could withdraw from the study at any time, or they could be removed if they no longer met the eligibility criteria. The main reason for withdrawal from the study was not graduating on time. Aside from changing majors or leaving the nursing program, reasons for not graduating on time were dropping a required nursing course, failing a course, taking a leave of absence, and changing to part-time status.
      The rate of course failures (theory or clinical) was 7.7% overall. The highest failure rate was in the control group (9.3%), and the lowest was in the 50% group (6.6%); these differences were not statistically significant (p=0.487). The rate of study withdrawal was 13.6% overall; however, the 50% group had a much higher rate of withdrawal (19.2%) than the control and 25% groups (9.3% and 12.0%, respectively, p=0.002). Table 4 outlines the reasons for not completing the study by study group.
      Table 4. Reasons for Study Attrition

      | Overall | Control | 25% | 50% | p value
      Number of students randomized | 847 | 268 | 293 | 286 |
      Number of students completing the study | 666 | 218 | 236 | 212 |
       Rate of completion | 78.6% | 81.3% | 80.5% | 74.1% | 0.072
      Number of students who failed a course during the study | 66 | 25 | 22 | 19 |
       Rate of failure | 7.8% | 9.3% | 7.5% | 6.6% | 0.487
      Number of students who withdrew or were withdrawn from the study for any reason | 115 | 25 | 35 | 55 |
       Rate of withdrawal | 13.6% | 9.3% | 11.9% | 19.2% | 0.002
      Reasons students did not complete the study:
       Withdrew from the nursing program | 35 | 11 | 9 | 15 |
       Dropped a required nursing course | 15 | 5 | 3 | 7 |
       No longer wished to participate | 59 | 7 | 21 | 31 |
       Ineligible for other reasons | 6 | 2 | 2 | 2 |
      Note. The only statistically significant p value is 0.002 (rate of withdrawal).
      Statistically significant differences existed between those who completed the study and those who did not. Although the mean ages of these two groups appear similar, those who were age 35 or older were more likely to not complete the study. Also, males, Black/African-American, and Asian students had significantly higher rates of not completing the study. Study completers and noncompleters also had a statistically significant difference regarding experience as a nurse assistant. In all three groups, a higher proportion of completers worked as a nursing assistant at some point during the study (p<0.001). This was true for all three study groups, but the largest difference was seen in the 50% group. Of those in the 50% group, 87% of study completers worked as nursing assistants compared with 69% of those who did not complete the study.

      Research Question 1

      Does substituting clinical hours with 25% and 50% simulation impact educational outcomes (knowledge, clinical competency, critical thinking and readiness for practice) assessed at the end of the undergraduate nursing program?

      Nursing Knowledge

      The RN Comprehensive Predictor® 2010 was used to assess overall nursing knowledge at the end of the nursing program. There were no statistically significant differences among the three study groups in the total score (p=0.478). (See Figure 1.)
      Figure 1. Mean ATI RN Comprehensive Predictor Scores (N=641)
      Table 5 below lists the detailed ATI subscale scores for each study group. Although the 50% group tends to have slightly higher percentages, the differences are minimal, and the majority of ATI scores differ by less than one percentage point among the three groups. These minimal differences are reflected in the small effect sizes and the absence of statistically significant differences among the groups.
      Table 5. ATI Comprehensive Predictor Scores, Mean (SD)

      | Total (n=641) | Control Group (n=209) | 25% Group (n=221) | 50% Group (n=211) | F value | Effect Size | p value
      Total score | 69.6 (8.2) | 69.1 (8.7) | 69.5 (8.6) | 70.1 (7.1) | 0.74 | 0.12 | 0.478
      Categories:
       Management of care | 69.4 (10.8) | 69.2 (11.0) | 69.0 (11.0) | 70.0 (10.5) | 0.50 | 0.09 | 0.608
       Safety & infection control | 64.8 (14.1) | 63.6 (15.1) | 65.2 (14.7) | 65.6 (12.4) | 1.23 | 0.15 | 0.292
       Health promotion & maintenance | 66.8 (15.3) | 66.1 (15.2) | 67.8 (16.2) | 66.4 (14.5) | 0.76 | 0.11 | 0.466
       Psychosocial integrity | 63.7 (15.8) | 64.2 (16.2) | 62.8 (16.5) | 64.1 (14.7) | 0.50 | 0.08 | 0.607
       Basic care & comfort | 59.3 (15.9) | 58.9 (16.5) | 59.2 (15.6) | 59.8 (15.6) | 0.20 | 0.06 | 0.819
       Pharmacological & parenteral therapies | 65.3 (12.8) | 65.7 (13.1) | 65.3 (12.9) | 64.9 (12.6) | 0.19 | 0.06 | 0.831
       Reduction of risk potential | 64.2 (14.1) | 63.3 (15.6) | 64.5 (13.5) | 64.8 (13.1) | 0.70 | 0.11 | 0.495
       Physiological adaptation | 70.2 (11.9) | 69.0 (12.1) | 70.1 (12.3) | 71.5 (11.2) | 2.35 | 0.21 | 0.097
      Dimensions:
       Clinical judgment/clinical thinking in nursing | 65.9 (8.7) | 65.2 (9.2) | 66.0 (8.9) | 66.3 (7.7) | 0.90 | 0.13 | 0.406
       Foundational thinking in nursing | 66.5 (11.0) | 66.4 (11.3) | 66.3 (11.8) | 66.8 (9.9) | 0.14 | 0.05 | 0.869
       Analysis/Diagnosis | 63.5 (12.4) | 63.1 (13.3) | 63.8 (12.2) | 63.6 (11.8) | 0.21 | 0.06 | 0.813
       Assessment | 64.1 (12.8) | 63.0 (12.5) | 64.2 (13.0) | 65.0 (12.9) | 1.31 | 0.16 | 0.269
       Evaluation | 69.5 (15.8) | 69.0 (15.7) | 69.6 (16.6) | 70.0 (15.1) | 0.19 | 0.06 | 0.831
       Implementation/Therapeutic nursing intervention | 65.7 (9.6) | 65.5 (9.8) | 65.6 (10.4) | 66.1 (8.5) | 0.26 | 0.07 | 0.770
       Planning | 69.5 (10.5) | 69.2 (11.2) | 69.4 (11.1) | 70.0 (9.1) | 0.36 | 0.08 | 0.697
       Priority setting | 73.2 (9.3) | 73.0 (9.6) | 73.0 (9.2) | 73.7 (9.0) | 0.37 | 0.08 | 0.692
      Note. No between-group differences were statistically significant.

      Clinical Competency

      In the final weeks of the nursing program, clinical preceptors and instructors rated the clinical competency of the study participants using the NGNPS. There was less than a one-point difference among the mean scores across the three groups. On a scale of 1 to 6 (1=lowest rating, 6=highest), students in all three groups had mean scores above 5.0, indicating they all were rated as clinically competent by their preceptors or instructors. The effect sizes were small, and chi-square analysis indicated no statistical significance on any of the subscales. (See Table 6.)
      Table 6. End-of-Program Survey Preceptor Ratings
      Each cell shows n: mean (SD).

      | Total Sample | Control Group | 25% Group | 50% Group | F value | Effect Size | p value
      New Graduate Nurse Performance Survey® (1-6 scale) (a):
       Clinical knowledge | 462: 5.13 (0.68) | 155: 5.12 (0.73) | 171: 5.18 (0.60) | 136: 5.09 (0.72) | 0.73 | 0.14 | 0.481
       Technical skills | 462: 5.06 (0.75) | 155: 5.06 (0.76) | 171: 5.09 (0.64) | 136: 5.01 (0.86) | 0.42 | 0.11 | 0.659
       Critical thinking | 462: 5.07 (0.76) | 155: 5.11 (0.72) | 171: 5.06 (0.71) | 136: 5.03 (0.88) | 0.40 | 0.10 | 0.668
       Communication | 461: 5.30 (0.72) | 155: 5.30 (0.65) | 170: 5.34 (0.65) | 136: 5.24 (0.87) | 0.74 | 0.13 | 0.478
       Professionalism | 462: 5.42 (0.71) | 155: 5.38 (0.69) | 171: 5.47 (0.61) | 136: 5.39 (0.85) | 0.84 | 0.14 | 0.432
       Management of responsibilities | 460: 5.20 (0.75) | 155: 5.22 (0.71) | 169: 5.20 (0.70) | 136: 5.17 (0.85) | 0.16 | 0.06 | 0.849
      Critical Thinking Diagnostic (1-6 scale) (a):
       Problem recognition | 462: 5.02 (0.70) | 155: 4.97 (0.70) | 171: 5.07 (0.65) | 136: 5.02 (0.75) | 0.71 | 0.15 | 0.494
       Clinical decision making | 462: 5.13 (0.67) | 155: 5.09 (0.60) | 171: 5.18 (0.61) | 136: 5.12 (0.81) | 0.76 | 0.15 | 0.469
       Prioritization | 445: 5.09 (0.69) | 138 (b): 5.14 (0.66) | 171: 5.08 (0.63) | 136: 5.03 (0.77) | 0.87 | 0.15 | 0.418
       Clinical implementation | 463: 5.13 (0.65) | 156: 5.10 (0.61) | 171: 5.19 (0.60) | 136: 5.10 (0.76) | 1.02 | 0.15 | 0.361
       Reflection | 463: 5.17 (0.67) | 156: 5.13 (0.64) | 171: 5.23 (0.59) | 136: 5.15 (0.78) | 1.15 | 0.16 | 0.318
      Global assessment of clinical competency and readiness for practice (1-10 scale) (c) | 459: 8.27 (1.42) | 156: 8.20 (1.34) | 168: 8.29 (1.48) | 135: 8.34 (1.44) | 0.37 | 0.10 | 0.688
      a 1=lowest rating, 6=highest rating.
      b Not enough questions within the subscale were answered to calculate a score.
      c 1=among the weakest, 10=among the best.

      Critical Thinking

      Preceptors and instructors rated students on their ability to think critically on a scale of 1 to 6 (1=lowest rating, 6=highest rating). Again, students in all three groups had overall mean scores above 5.0. Though the overall mean scores tended to be slightly higher for the 25% group, the differences are minimal; the analysis found no statistically significant differences, and effect sizes were small. (See Table 6.)

      Global Assessment of Clinical Competence and Readiness for Practice

      Preceptors and instructors gave students an overall rating of readiness for practice on a scale of 1 to 10 (1=among the weakest students, 10=among the best). As the bottom of Table 6 indicates, the mean scores of students in all three groups were above 8.0, and there were no statistically significant differences among the control, 25%, and 50% groups on ratings of global clinical competency and readiness for practice (p=0.688, d=0.10).

      End-of-Program Survey Student Ratings

      The study participants completed the End-of-Program Survey as a self-assessment. Table 7 shows the detailed results of the three instruments comprising this survey: the NGNPS, the Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. The 50% group rated themselves higher than their peers on the New Graduate Nurse Performance Survey, but the differences were statistically significant only for the critical-thinking subscale (control group mean 5.13 [SD 0.69], 25% group mean 5.10 [SD 0.67], 50% group mean 5.30 [SD 0.86]; p=0.038).
      Table 7. End-of-Program Survey Student Ratings, Mean (SD)

      | Total (n=472) | Control Group (n=153) | 25% Group (n=165) | 50% Group (n=154) | F value | Effect Size | p value | Significant Differences
      New Graduate Nurse Performance Survey® (1-6 scale) (a):
       Clinical knowledge | 5.07 (0.72) | 5.09 (0.67) | 5.01 (0.67) | 5.12 (0.82) | 1.01 | 0.15 | 0.366 | -
       Technical skills | 4.95 (0.82) | 4.99 (0.76) | 4.92 (0.84) | 4.94 (0.86) | 0.34 | 0.09 | 0.711 | -
       Critical thinking | 5.17 (0.75) | 5.13 (0.69) | 5.10 (0.67) | 5.30 (0.86) | 3.30 | 0.26 | 0.038 | 50% > 25%
       Communication | 5.38 (0.71) | 5.36 (0.66) | 5.37 (0.67) | 5.40 (0.79) | 0.11 | 0.05 | 0.896 | -
       Professionalism | 5.61 (0.59) | 5.59 (0.54) | 5.62 (0.53) | 5.62 (0.68) | 0.11 | 0.06 | 0.899 | -
       Management of responsibilities | 5.29 (0.68) | 5.23 (0.66) | 5.25 (0.65) | 5.38 (0.73) | 2.07 | 0.22 | 0.127 | -
      Critical Thinking Diagnostic (1-6 scale) (a):
       Problem recognition | 5.14 (0.62) | 5.08 (0.66) | 5.09 (0.56) | 5.24 (0.63) | 3.35 | 0.25 | 0.036 | (b)
       Clinical decision making | 5.28 (0.55) | 5.25 (0.56) | 5.21 (0.54) | 5.39 (0.54) | 4.55 | 0.33 | 0.011 | 50% > 25%
       Prioritization | 5.15 (0.64) | 5.13 (0.66) | 5.08 (0.60) | 5.26 (0.66) | 3.58 | 0.29 | 0.029 | 50% > 25%
       Clinical implementation | 5.26 (0.62) | 5.24 (0.66) | 5.19 (0.59) | 5.36 (0.61) | 3.16 | 0.28 | 0.043 | 50% > 25%
       Reflection | 5.37 (0.57) | 5.31 (0.62) | 5.33 (0.53) | 5.48 (0.54) | 4.29 | 0.29 | 0.014 | 50% > Control
      Global Assessment of Clinical Competency and Readiness for Practice (1-10 scale) (c) | 7.94 (1.21) | 7.83 (1.33) | 7.78 (1.14) | 8.23 (1.11) | 6.84 | 0.40 | 0.001 | 50% > Control & 25%
      a 1=lowest rating, 6=highest.
      b Post-hoc analysis did not identify a statistically significant difference between groups.
      c 1=among the weakest, 10=among the best.
      On the Critical Thinking Diagnostic, the 50% group rated themselves significantly higher in every subscale, although post-hoc analysis did not confirm a pairwise difference for problem recognition. The 50% group also rated themselves significantly higher on the Global Assessment of Clinical Competency and Readiness for Practice (p=0.001). (See Table 7.)

      Research Question 2

      Are there course-by-course differences in nursing knowledge, clinical competency, and perception of learning needs being met among undergraduate students when 25% and 50% of traditional clinical hours are substituted with simulation?
      At the end of each core clinical course, students completed standardized tests of nursing knowledge using the ATI Content Mastery Series. Clinical instructors rated the students' clinical competency progression during each week of clinical using the CCEI, and at the end of each course, students completed the CLECS to compare how well the traditional clinical setting and the simulation environment met their learning needs. The CCEI score was calculated as a percentage: the number of items rated competent divided by the number of items assessed.
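The CCEI percentage rule just described can be sketched as follows. This is an illustration of the stated scoring rule, not the study's actual code; the rating labels and the handling of Not Applicable items are assumptions based on how the instrument is described later in this section.

```python
# Illustrative sketch of the CCEI scoring rule: percentage of assessed items
# rated competent. Items marked "na" (Not Applicable) are assumed to be
# excluded from the denominator, since only assessed items count.

def ccei_score(ratings):
    """ratings: one label per CCEI item: 'competent', 'not_competent', or 'na'."""
    assessed = [r for r in ratings if r != "na"]
    if not assessed:
        return None  # instructor could not assess any item
    competent = sum(1 for r in assessed if r == "competent")
    return 100 * competent / len(assessed)

# Example: 21 of 22 assessed items rated competent, 1 item not applicable
ratings = ["competent"] * 21 + ["not_competent"] + ["na"]
score = ccei_score(ratings)  # 21/22 of assessed items -> about 95.5
```

A form with many Not Applicable items (as reported later for the mental health and community health courses) thus yields a score based on a smaller denominator rather than a penalized score.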

      Fundamentals of Nursing

      Figure 2 illustrates the total score for the ATI Fundamentals of Nursing Assessment. The overall ATI scores were not statistically different for the three study groups (p=0.155). All three received similar scores overall and within the subscales; in some cases, the scores were separated by only fractions of a percent (Appendix D, Table D1).
      Figure 2 Mean Total Score: ATI Fundamentals of Nursing Assessment (N=800)
      The Fundamentals of Nursing CCEI scores for each study group were graphed separately for each week of the semester. (See Figure 3.) Competency ratings were lower at the beginning of the semester for all students and improved over time, as expected. All three groups had similar competency ratings throughout the semester.
      Figure 3 Total CCEI Scores Assessed in the Traditional Clinical Setting for the Fundamentals of Nursing Course (N=714)
      Table 8 shows the total score and subscale scores of the Fundamentals of Nursing CCEI rating for the three groups. The 25% and control groups received statistically significantly higher ratings than the 50% group; however, the competency ratings for the 50% group were still above 90%.
      Table 8 Fundamentals of Nursing Final CCEI Clinical Ratings

                      Total (n=714)  Control (n=216)  25% Group (n=268)  50% Group (n=230)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     95.7   10.0    96.4   7.6       97.1   6.2         93.9   14.4        9.81     0.30         <0.001   25% & CTL>50%
       Assessment     95.9   16.7    97.9   9.5       97.8   11.6        91.7   24.6        10.60    0.33         <0.001   25% & CTL>50%
       Communication  96.7   11.2    97.4   9.3       97.8   9.4         94.6   14.1        5.72     0.23         0.003    25% & CTL>50%
       Judgment       94.4   13.8    95.2   11.1      96.0   9.9         91.9   18.7        6.13     0.28         0.002    25% & CTL>50%
       Safety         96.9   8.3     96.7   7.7       97.9   6.8         96.1   10.2        2.91     0.21         0.055    -
      Bold=statistically significant p value.

      Medical-Surgical Nursing

      The ATI Medical-Surgical Nursing Assessment was administered after students received all medical-surgical nursing content; therefore, students in most schools took this test during the last semester of the program. Because this assessment was generally taken after advanced medical-surgical courses, the nursing knowledge results are presented in the next section of this paper.
      Figure 4 shows the results of the weekly Medical-Surgical Nursing CCEI ratings for each group. As with the results for the Fundamentals of Nursing course, the assessment ratings started lower at the beginning of the semester and gradually increased over the semester. All three groups ended the semester with equally high ratings. There were no statistically significant differences among the three groups in the total score or any of the CCEI subscale scores. (See Table 9.)
      Figure 4 CCEI Scores Assessed in the Traditional Clinical Setting for the Medical-Surgical Nursing Course (N=692)
      Table 9 Medical-Surgical Nursing Final CCEI Clinical Ratings

                      Total (n=692)  Control (n=210)  25% Group (n=251)  50% Group (n=231)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     97.3   7.4     96.5   8.9       97.5   5.6         97.9   7.7         2.16     0.17         0.116    -
       Assessment     98.2   9.0     96.9   11.4      98.5   7.2         98.9   8.2         3.02     0.20         0.050    -
       Communication  97.3   9.3     97.7   8.3       96.8   9.1         97.6   10.3        0.72     0.11         0.488    -
       Judgment       96.7   10.3    95.3   13.5      97.1   8.1         97.6   8.9         2.90     0.20         0.056    -
       Safety         98.0   7.4     97.3   7.3       98.4   7.2         98.1   7.6         1.15     0.14         0.316    -

      Advanced Medical-Surgical Nursing

      The total score for the ATI Medical-Surgical Nursing Assessment was significantly higher for the 50% group than for the control group (p=0.005; see Figure 5); however, the scores were separated by less than 3 percentage points.
      Figure 5 Mean Total Score on ATI Medical-Surgical Nursing Assessment (N=683)
      In every category and nursing dimension, one of the simulation groups had the highest score; however, the effect sizes were small to moderate. Table D2 in Appendix D shows the mean percentages with standard deviations and statistical findings for the Medical-Surgical Nursing assessment results.
      Most schools offered an advanced medical-surgical course at the end of the program. All three groups started with unusually high ratings at the beginning of the semester (see Figure 6) and maintained high ratings throughout the course. These scores reflect the clinical abilities of students in their final clinical course, indicating that clinical instructors believed all the students were demonstrating competence in the final course.
      Figure 6 CCEI Scores Assessed in the Traditional Clinical Setting for the Advanced Medical-Surgical Nursing Course (N=528)
      The last CCEI rating by clinical instructors in the traditional clinical environment was statistically significantly higher for the 25% group than for the control group (p=0.025). However, Table 10 shows how similar the scores are; the overall difference between the 25% group and the control group is 0.8 percentage points. Table 10 lists the total CCEI scores and subscale scores for all three study groups in the advanced medical-surgical course.
      Table 10 Advanced Medical-Surgical Nursing Final CCEI Clinical Ratings

                      Total (n=528)  Control (n=180)  25% Group (n=181)  50% Group (n=167)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     98.3   4.9     97.6   5.9       99.0   3.2         98.4   5.1         3.74     0.29         0.025    25%>Control
       Assessment     99.3   6.3     99.4   4.3       99.6   5.0         98.9   9.0         0.61     0.10         0.542    -
       Communication  98.3   7.8     98.6   7.3       98.7   6.2         97.7   9.8         0.78     0.10         0.461    -
       Judgment       97.7   7.2     95.8   9.8       99.0   4.0         98.3   6.1         10.25    0.43         <0.001   25%>50%>Control
       Safety         98.6   6.1     98.3   6.2       98.9   6.5         98.7   5.6         0.46     0.09         0.634    -
      Bold=statistically significant p value.

      Maternal-Newborn Nursing

      The Maternal-Newborn Nursing ATI end-of-course assessment results were consistent with findings in other courses: the 50% group's total scores were higher than those of the 25% and control groups, and the differences were statistically significant (p=0.011). However, less than 3 percentage points separated the three groups. Figure 7 shows the total score for each group, and Table D3 in Appendix D details all the subscale results by study group. Though not always statistically significant, the 50% group had the highest scores in every assessment category except Management of Care, in which the 25% group scored the highest. Again, the effect sizes were small.
      Figure 7 Mean Total Score on ATI Maternal-Newborn Nursing Assessment (N=680)
      Figure 8 shows the weekly Maternal-Newborn Nursing CCEI scores. The control group had consistent ratings from week to week. The 25% group scores stayed lower for a longer period of time but came up at the end of the course. The 50% group ratings were more variable. Initially, this group showed the same gradual increase seen in the other courses; however, the ratings dipped halfway through the semester and then increased.
      Figure 8 CCEI Scores Assessed in the Traditional Clinical Setting for the Maternal-Newborn Nursing Course (N=693)
      In the last clinical rating by clinical instructors during the Maternal-Newborn Nursing course, the control group had the highest ratings overall and in each of the subscales. Two of these higher scores (total score p=0.022; assessment p=0.038) are statistically significant; however, the mean scores for all groups were over 94% overall and in each of the subscales, indicating that clinical competency was demonstrated by all groups. Table 11 lists the scores for each group.
      Table 11 Maternal-Newborn Nursing Final CCEI Clinical Ratings

                      Total (n=693)  Control (n=225)  25% Group (n=250)  50% Group (n=218)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     97.0   8.1     98.2   5.7       96.4   9.2         96.3   8.6         3.85     0.26         0.022    Control>25% & 50%
       Assessment     96.5   14.1    97.9   11.5      96.8   13.9        94.6   16.3        3.29     0.24         0.038    Control>50%
       Communication  97.1   10.9    98.3   7.7       96.6   13.0        96.4   10.9        2.25     0.21         0.106    -
       Judgment       96.4   11.0    97.8   7.7       95.8   12.6        95.6   11.7        2.87     0.23         0.057    -
       Safety         98.0   8.0     98.7   6.4       97.1   9.7         98.4   7.2         2.83     0.20         0.060    -
      Bold=statistically significant p value.

      Pediatric Nursing

      The ATI Nursing Care of Children total scores were significantly higher for the 50% group (p=0.002, d=0.37). Again, the scores for the three groups are close, with only 3.4 percentage points separating them. (See Figure 9 and Table D4 in Appendix D.)
      Figure 9 Mean Total Score on ATI Nursing Care of Children Assessment (N=620)
      The Pediatric Nursing weekly clinical CCEI ratings for the control and the 25% group started high and remained high for the duration of the course. Ratings for the 50% group started out lower than their peers and then climbed until the end of the semester, when all three groups were receiving top ratings from their clinical instructors. (See Figure 10.)
      Figure 10 CCEI Scores Assessed in the Traditional Clinical Setting for the Pediatric Nursing Course (N=686)
      The final CCEI rating in the traditional clinical setting was lowest for the 50% group. The control and 25% groups received significantly higher ratings during their last clinical day compared with the 50% group. (See Table 12.) Total score and subscale scores for the control and 25% groups were similar. The 50% group, though receiving lower scores than their peers, received ratings at or above 92%, again indicating a high level of clinical competence in this course.
      Table 12 Pediatric Nursing Final CCEI Clinical Ratings

                      Total (n=686)  Control (n=228)  25% Group (n=248)  50% Group (n=210)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     96.9   8.1     97.5   6.2       97.8   6.7         95.2   10.9        6.79     0.29         0.001    25% & Control>50%
       Assessment     96.3   13.3    98.2   8.3       97.5   11.1        92.8   18.5        10.81    0.38         <0.001   25% & Control>50%
       Communication  96.1   12.4    97.4   8.9       96.7   10.5        94.0   16.7        4.58     0.26         0.011    Control>50%
       Judgment       96.9   9.8     97.0   9.0       98.1   7.6         95.3   12.4        4.49     0.27         0.012    25%>50%
       Safety         98.2   7.8     98.3   6.9       98.5   7.7         97.6   8.9         0.91     0.12         0.403    -
      Bold=statistically significant p value.

      Mental Health Nursing

      Mental Health Nursing knowledge assessments show that the 50% group scored higher overall than the 25% and control groups. (See Figure 11.) Total ATI scores were significantly higher for the 50% group than the control group (p=0.011; d=0.30). However, there is less than a 3-point difference between the scores. Either the 25% group or the 50% group had the highest score in each category and dimension of the Mental Health Nursing assessment; however, only four of the subscale differences were statistically significant, as outlined in Table D5 in Appendix D.
      Figure 11 Mean Total Score on ATI Mental Health Nursing Assessment (N=633)
      Figure 12 illustrates the CCEI ratings for the Mental Health Nursing course over the semester. The 25% group started the semester lower than the 50% and control groups; however, by the second week, its ratings were equivalent to the control group's. The 25% group and the control group had similar ratings throughout the semester.
      Figure 12 CCEI Scores Assessed in the Traditional Clinical Setting for the Mental Health Nursing Course (N=665)
      Clinical scores for the 50% group showed more variability than scores for the 25% and control groups. Although end-of-semester ratings were lower for the 50% group, they remained above the 90% level. (See Table 13.) The control group had the highest ratings for all five scores, and in three instances these differences were statistically significant. Still, the two simulation groups were rated over 93% in all areas.
      Table 13 Mental Health Nursing Final CCEI Clinical Ratings

                      Total (n=665)  Control (n=220)  25% Group (n=220)  50% Group (n=225)  Evaluation of Significance
                      Mean   SD      Mean   SD        Mean   SD          Mean   SD          F value  Effect Size  p value  Significant Differences
      Total Score     96.3   12.4    97.9   7.6       95.1   15.8        95.8   12.5        3.02     0.23         0.050    Control>25%
       Assessment     96.7   16.3    99.0   6.8       95.2   19.9        96.0   18.6        3.38     0.26         0.035    Control>25%
       Communication  95.8   16.3    97.5   11.4      93.5   21.0        96.5   14.8        3.69     0.24         0.025    Control>25%
       Judgment       96.1   14.6    97.7   8.2       95.3   17.6        95.3   16.2        2.03     0.19         0.132    -
       Safety         96.8   14.6    98.1   10.3      96.3   18.1        95.9   14.3        1.48     0.18         0.228    -
      Bold=statistically significant p value.
      Interestingly, instructors used the Not Applicable option more often when completing the CCEI rating forms for Mental Health Nursing. This could indicate that some CCEI items do not pertain to the mental health environment, or that the form's terminology was less familiar to instructors in the mental health setting, making it difficult for them to use all the items on the form.

      Community Health Nursing

      There were no statistically significant differences among the three study groups for any of the Community Health Nursing assessment scores (p=0.387). Figure 13 shows the overall results for the three groups; Table D6 in Appendix D lists all the individual subscale results.
      Figure 13 Mean Total Score on ATI Community Health Nursing Assessment (N=344)
      Some schools did not have a separate Community Health Nursing course; instead they integrated community health concepts into the medical-surgical, maternal-newborn, pediatric, and mental health courses. When community health clinical experiences or simulations were incorporated into another course, clinical instructors labeled the data collection forms according to the course title, resulting in fewer CCEI forms identified as community health nursing. Community Health Nursing courses had fewer clinical hours than other clinical courses and therefore provided fewer opportunities for assessments. Additionally, CCEI data collection was challenging because students in Community Health courses are in various settings at the same time and clinical instructors cannot always observe and assess them. Fewer CCEI rating forms were received for this course, and there were more occurrences of instructors using the Not Applicable option when completing the CCEI forms.
      Figure 14 depicts the Community Health Nursing scores for the three groups over the semester. All three groups started out above the 90% level and remained there throughout the course, indicating that all groups received competent ratings by their clinical instructors. This is echoed in the final CCEI ratings, where all scores are over 94%. (See Table 14.)
      Figure 14 CCEI Scores Assessed in the Traditional Clinical Setting for the Community Health Nursing Course (N=252)
      Table 14 Community Health Nursing Final CCEI Clinical Ratings

                      Total (n=252)  Control (n=95)  25% Group (n=90)  50% Group (n=67)  Evaluation of Significance
                      Mean   SD      Mean   SD       Mean   SD         Mean   SD         F value  Effect Size  p value  Significant Differences
      Total Score     98.1   6.4     96.9   7.6      99.7   1.9        97.6   7.1        4.92     0.51         0.008    25%>Control
       Assessment     99.4   4.8     98.8   7.0      100.0  0.0        99.5   4.1        1.54     0.25         0.216    -
       Communication  99.5   3.4     98.7   5.4      100.0  0.0        100.0  0.0        4.28     0.32         0.015    25% & 50%>Control
       Judgment       96.8   11.5    94.2   15.4     99.5   3.3        96.9   11.5       5.10     0.47         0.007    25%>Control
       Safety         98.4   7.7     99.1   3.7      99.6   2.5        95.8   13.7       5.66     0.42         0.004    25% & Control>50%
      Bold=statistically significant p value.

      Learning Environment Comparison

      The CLECS was utilized to obtain side-by-side ratings of traditional and simulation settings. Ratings are based on a 4-point scale (1=learning needs not met, 4=learning needs well met). The instrument produces a total score and six subscale scores, each having a rating for the traditional clinical environment and the simulation environment. Comparisons were completed in two ways: by comparing the three study groups on total scores and subscale scores in each environment and by comparing the clinical environment rating to the simulation environment rating within each group.
      Students completed the CLECS at the end of each clinical course and again at the end of the program. Detailed tables from the end-of-program ratings and each clinical course are in Appendix E (Tables E1-E8).
      At the end of the nursing program, students were asked to reflect on all their traditional clinical and simulation experiences throughout their nursing education and rate how well each environment met their learning needs overall. In every instance, the control group rated the traditional clinical environment higher than the simulation environment; the 50% group rated the simulation environment higher in every category; and the 25% group was in the middle with a tendency to rate the clinical environment higher than simulation for meeting their learning needs. Table 15 shows the mean ratings with standard deviations for each group, effect sizes, and p values of the between-group and within-group analyses for the overall ratings on the end-of-program CLECS.
      Table 15 End-of-Program Clinical Learning Environment Comparison Survey (CLECS) Results

      Overall Rating (1–4 Scale)
                     Control Group                  25% Group                      50% Group                      Evaluation of Significance
                     n    Mean  SD    Effect        n    Mean  SD    Effect        n    Mean  SD    Effect        Effect    p value  Significant
                                      Size(a)                        Size(a)                        Size(a)       Size(b)            Differences
      Traditional    197  3.50  0.42  1.23          202  3.41  0.41  0.28          187  3.26  0.53  0.57          0.50      <.001    CTL & 25%>50%
      Simulation     174  2.82  0.67                202  3.28  0.51                187  3.54  0.45                1.27      <.001    50%>25%>CTL
      Bold=statistically significant effect size or p value.
      Scale: 1=learning needs not met, 4=learning needs well met.
      (a) Within-group effect size.
      (b) Between-groups effect size.
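The effect sizes reported alongside Table 15 (and throughout the results) are standardized mean differences. As a minimal sketch, assuming the common pooled-SD Cohen's d variant (the paper does not state exactly which variant it used):

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups using a pooled SD
    (one common Cohen's d variant; assumed here, not confirmed by the paper)."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Table 15, traditional-environment ratings:
# control group (n=197, mean 3.50, SD 0.42) vs. 50% group (n=187, mean 3.26, SD 0.53)
d = cohens_d(3.50, 0.42, 197, 3.26, 0.53, 187)
```

Applied to those two groups, this variant yields approximately 0.50, consistent with the between-groups effect size reported for the traditional environment (the maximum pairwise effect size among the three groups).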

      Research Question 3

      Are there differences in first-time NCLEX pass rates among students randomized into the control group and the groups with 25% and 50% of traditional clinical hours substituted with simulation?
      A total of 660 study participants took the NCLEX examination as of December 31, 2013; two students from each of the study groups had not yet taken it. The first-time pass rate of the study cohort overall was 86.8%. The pass rate was higher for the control group, but the difference was not statistically significant (p=0.737; v=0.04). (See Figure 15.) The pass rate for all three groups was higher than the national average of 80.2% during the same period (National Council of State Boards of Nursing, NCLEX pass rates [PDF file]).
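The reported p and v values are consistent with a chi-square test of independence on a group-by-outcome table, with Cramér's V as the effect size. A minimal sketch under that assumption; the pass/fail counts below are invented for illustration (the paper reports only the overall pass rate and the test statistics):

```python
# Chi-square test of independence on an r x c contingency table, plus
# Cramér's V = sqrt(chi2 / (n * (k - 1))), where k = min(rows, cols).
# Hypothetical counts only; NOT the study's actual data.

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, n

def cramers_v(table):
    chi2, n = chi_square(table)
    k = min(len(table), len(table[0]))
    return (chi2 / (n * (k - 1))) ** 0.5

# Rows: control, 25%, 50% groups; columns: [passed, failed] (hypothetical)
table = [[190, 28], [191, 29], [192, 30]]
v = cramers_v(table)  # near zero when pass rates are nearly identical
```

With nearly identical pass rates across the three rows, V is close to zero, mirroring the small reported effect size (v=0.04).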
      Figure 15 First-Time NCLEX Pass Rates (N=660)

      Summary of Part I Results

      The main findings for Part I of the study revealed no significant differences among the three study groups' end-of-program educational outcomes. Comprehensive nursing knowledge assessments, preceptor and clinical instructor ratings of clinical competency, and NCLEX pass rates showed no statistically significant differences when simulation experiences were used to replace a portion of traditional clinical hours. The two simulation groups perceived that their learning needs were being met, and they were able to synthesize theory content and perform as well as the control group on tests of nursing knowledge. The learning that occurred in simulation was translated to the clinical environment, as evidenced by the high competency ratings made by clinical instructors.

      National Simulation Study: Part II

      To determine the long-term impact of substituting simulation for traditional clinical experience, the study subjects were followed for 6 months after beginning their first clinical position as an RN, and their performance was evaluated in three areas: clinical competency, critical thinking, and readiness for practice. Part II of the study also evaluated acclimation to the role of the RN for any differences among the subjects from the three groups.

      Research Questions

      • 1.
        Are there differences in clinical competency, critical thinking and readiness for practice among the new graduate nurses from the three study groups?
      • 2.
        Are there differences among new graduates from the three study groups in acclimation to the role of RN?

      Method

      Procedure

      All study subjects who completed Part I of the National Simulation Study and graduated from the prelicensure program were asked to participate in Part II of the study. Additional requirements for participation in Part II included passing the NCLEX and being employed as an RN in a clinical position by December 31, 2013.
      Part I study subjects who agreed to participate in Part II provided contact information and agreed to notify the National Simulation Study project director of the start date for their first RN position. All Part II subjects were given written information for their managers and asked to inform managers of the study.
      To assess clinical competency, critical thinking, readiness for practice, and acclimation to the RN role, the new graduates were sent a survey consisting of the following evaluation tools at 6 weeks, 3 months, and 6 months after the start of their first RN position:
      • NGNPS (See Part I for psychometric properties)
      • Critical Thinking Diagnostic (See Part I for psychometric properties)
      • Global Assessment of Clinical Competency and Readiness for Practice.
      Additional questions used previously in NCSBN studies were added to the questionnaire to assess acclimation to the RN role. Acclimation questions focused on the length of orientation, assigned patient loads, charge nurse responsibilities, and workplace stress.
      Demographic data, including practice setting and work schedule, were also collected. The evaluation tools (hereafter referred to as surveys) were accessible to the new graduates via electronic survey links. The links were sent to each new graduate via an e-mail message 6 weeks, 3 months, and 6 months after the employment start date. If the new graduate left his or her first position for any reason, the graduate was discontinued from the study.
      The new graduates' managers or preceptors (hereafter referred to as managers) played an important role in Part II of the study. They were asked to evaluate the new graduate using the NGNPS, Critical Thinking Diagnostic, and Global Assessment of Clinical Competency and Readiness for Practice at the same time intervals as new graduates (6 weeks, 3 months, and 6 months after the start date). Managers had access to the surveys via an e-mail link sent to the graduates, who forwarded it to managers. A cover letter explaining the entire study and their role was included in the first link sent with the 6-week survey. Follow-up reminders to the managers were sent via e-mail to the new graduates, who were asked to forward the reminder.
      In the event of nonresponses, reminders were sent via e-mail and cell phone text messages to new graduates. All communication was with the new graduate; study researchers did not directly contact managers.
      New graduates and managers had a specific window of time in which they could respond and provide the study data. For the 6-week surveys, the data collection period extended ±2 weeks from the 6-week date. The 3-month data collection period was ±4 weeks from the 3-month date. The 6-month data collection period extended ±6 weeks from the 6-month date. Only data received within these collection periods were included in the analysis to ensure that the responses represented data from the intended time period.
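The collection-window rule can be expressed as a simple date check. This is an illustrative sketch, not the study's code; the conversions of 3 months to 13 weeks and 6 months to 26 weeks are assumptions for the example.

```python
# Sketch of the survey-window rule: a response counts only if received within
# +/- 2 weeks of the 6-week mark, +/- 4 weeks of the 3-month mark, or
# +/- 6 weeks of the 6-month mark (measured from the RN employment start date).
from datetime import date, timedelta

WINDOWS = {  # survey: (weeks after start date, allowed deviation in weeks)
    "6-week": (6, 2),
    "3-month": (13, 4),   # assuming 3 months is treated as ~13 weeks
    "6-month": (26, 6),   # assuming 6 months is treated as ~26 weeks
}

def within_window(start: date, received: date, survey: str) -> bool:
    due_weeks, slack_weeks = WINDOWS[survey]
    due = start + timedelta(weeks=due_weeks)
    return abs((received - due).days) <= slack_weeks * 7

start = date(2013, 1, 7)
ok = within_window(start, date(2013, 2, 25), "6-week")  # 1 week past the 6-week mark
```

Responses outside the window, like those described in the Results below, would simply be excluded from that period's analysis.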

      Data Analysis

      Survey data from the new graduates and managers were collected via an electronic survey (WorldApp KeySurvey) except for data from three participants who requested paper surveys. Survey data were downloaded into a spreadsheet and analyzed using SAS version 9.2. Data analysis procedures for the follow-up study mirrored those for Part I of the study.
      Basic descriptive statistics were utilized for all data, and parametric and nonparametric tests were used as indicated for each type of data. MANOVA procedures were employed to check possible covariates (school, gender, age, ethnicity, race, nursing assistant experience, previous degree, and use of additional ATI products) for interaction effects. When the covariates were included in the MANOVA model, the Wilks' Lambda value did not change substantially, and interaction effects were determined to be insignificant. The effect sizes displayed represent the maximum effect size calculated when comparing the means of the control, 25%, and 50% groups to each other.
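For reference, the Wilks' Lambda statistic used in the covariate check above has the standard textbook definition (a sketch of the general formula, not taken from the paper):

```latex
% Wilks' Lambda for a MANOVA effect, where E is the error (within-group)
% SSCP matrix and H is the hypothesis (between-group) SSCP matrix:
\Lambda = \frac{\det(E)}{\det(E + H)}
% Values of Lambda near 1 mean the effect explains little multivariate
% variance, which is consistent with the report that Lambda changed little
% when the candidate covariates were added to the model.
```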

      Results

      Prior to graduation, 575 participants provided contact information so they could receive follow-up surveys. A total of 375 of these participants contacted the research team with their start dates (a 65% hire rate); however, not all provided this information right away. Some new graduates provided information after the allowable survey windows. In these instances, the graduates were sent the next survey during the allowable data collection period.
      Six-week follow-up surveys were e-mailed to 354 new graduates, and 328 completed surveys were returned. After excluding surveys returned beyond the data collection window and surveys from graduates who left their first nursing position, 266 surveys were included in the analysis, a 75% response rate. The new graduates were responsible for forwarding the survey request to their managers. After excluding surveys returned outside the data collection window, 135 6-week manager surveys were included in the analysis, for a 38% response rate.
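The 6-week response rates quoted above follow directly from the counts (also shown in Table 16):

```python
# Arithmetic behind the 6-week response rates: usable surveys / surveys sent.
sent = 354             # 6-week surveys e-mailed to new graduates
usable_graduate = 266  # new-graduate surveys usable after exclusions
usable_manager = 135   # manager surveys usable after exclusions

graduate_rate = round(100 * usable_graduate / sent, 1)  # reported as 75%
manager_rate = round(100 * usable_manager / sent, 1)    # reported as 38%
```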
      Response rates improved with the 3-month and 6-month surveys. The new graduates had an 89% response rate for the 3-month survey, and managers had a 68% response rate. For the 6-month survey period, new graduates had an 86% response rate, and managers responded 66% of the time. (See Table 16.)
      Table 16 Follow-Up Survey Response Rates of New Graduates and Managers

      New Graduate 6-Week Follow-Up Survey (surveys sent: total 354; control 105; 25% group 126; 50% group 123)
                          Total        Control      25% Group    50% Group
                          freq   %     freq   %     freq   %     freq   %
      All responses       328    92.7  96     91.4  116    92.1  116    94.3
      Usable data(a)      266    75.1  68     64.8  99     78.6  99     80.5

      Six-Week Manager Survey (surveys sent: total 354; control 105; 25% group 126; 50% group 123)
      All responses       242    68.4  73     69.5  82     65.1  87     70.7
      Usable data(a)      135    38.1  35     33.3  49     38.9  51     41.5

      New Graduate 3-Month Follow-Up Survey (surveys sent: total 345; control 101; 25% group 121; 50% group 123)
      All responses       323    93.6  94     93.1  111    91.7  118    95.9
      Usable data(a)      308    89.3  92     91.1  106    87.6  110    89.4

      Three-Month Manager Survey (surveys sent: total 345; control 101; 25% group 121; 50% group 123)
      All responses       264    76.5  72     71.3  92     76.0  100    81.3
      Usable data(a)      236    68.4  64     63.4  84     69.4  88     71.5

      New Graduate 6-Month Follow-Up Survey (surveys sent: total 366; control 112; 25% group 130; 50% group 124)
      All responses       335    91.5  103    92.0  116    89.2  116    93.6
      Usable data(a)      315    86.1  98     87.5  108    83.1  109    87.9

      Six-Month Manager Survey (surveys sent: total 366; control 112; 25% group 130; 50% group 124)
      All responses       256    70.0  75     67.0  92     70.8  89     71.8
      Usable data(a)      242    66.1  72     64.3  86     66.2  85     68.6

      (a) Usable data=survey returned within the data collection window and the RN remained in first nursing position.

      Workplace Demographics

      Two-thirds of the new graduates were hired as RNs in urban areas. Higher proportions of new graduates from the 50% group were working in urban areas, while the 25% group had higher proportions of new graduates working in suburban and rural areas.
      More than 80% of the new graduates were working in a hospital or medical center, and 10% reported working in long-term care facilities. Of the new graduates, 27% reported working in a facility with Magnet® designation. One-third of new graduates reported working in critical care environments, and 26% reported working in medical-surgical units. There were no statistical differences in employment settings or patient-care environments by study group. (See Table F1 in Appendix F.)

      Manager Demographics

      Of the managers completing surveys, 61% reported being a clinical preceptor to the new graduate nurse, and 57% reported receiving formal training to be a preceptor. This group had previously been preceptors for an average of 10 new graduates.

      Research Question 1

      Are there differences in clinical performance (clinical competency, critical thinking, and readiness for practice) among the new graduate nurses from the three study groups when assessed by their preceptor or manager in their first RN position?

      Clinical Competency

      The NGNPS was used to measure clinical competence. The new graduates performed a self-assessment administered via electronic survey, and their managers completed an electronic version of this instrument. The NGNPS consists of 36 items organized into 6 subscales and rated on a scale of 1 to 6 (1=lowest rating, 6=highest).
      Table 17 lists the manager ratings overall for each time period and for each study group. Self-assessment ratings provided by the new graduates are listed in Appendix F, Table F2. Although the managers performed 18 ratings over 6 months (six assessment areas at three time periods), few differences exist in the ratings, regardless of study group. The only two exceptions occurred at 6 weeks: the ratings of clinical knowledge, in which the 25% and 50% groups scored higher than the control group (p=0.017), and the ratings of critical thinking, in which the differences were also statistically significant (p=0.037).
      Table 17. Manager Ratings of Clinical Competency (New Graduate Nurse Performance Survey)
                     6 week                 3 month                6 month
                     n     Mean   SD        n     Mean   SD        n     Mean   SD
      Clinical Knowledge
       Control       35    4.86   0.65     64    5.19   0.69     72    5.21   0.63
       25% group     49    5.29   0.61     84    5.21   0.66     86    5.07   0.92
       50% group     51    5.10   0.73     88    5.07   0.84     84    5.21   0.66
       Total         135   5.10   0.68     236   5.15   0.74     242   5.16   0.75
       Effect size   0.68                  0.18                  0.17
       p value       0.017*                0.394                 0.376
      Technical Skills
       Control       35    4.91   0.65     64    5.22   0.65     72    5.28   0.65
       25% group     49    4.94   0.63     84    5.21   0.68     86    5.09   0.95
       50% group     51    4.96   0.72     88    5.00   0.84     84    5.19   0.65
       Total         135   4.94   0.68     236   5.14   0.74     242   5.18   0.77
       Effect size   0.07                  0.29                  0.23
       p value       0.953                 0.096                 0.325
      Critical Thinking
       Control       35    4.69   0.72     64    5.08   0.76     72    5.11   0.78
       25% group     49    5.08   0.67     84    5.13   0.72     86    5.06   0.92
       50% group     51    4.84   0.73     88    4.99   0.88     84    5.15   0.72
       Total         135   4.89   0.72     236   5.06   0.79     242   5.11   0.81
       Effect size   0.57                  0.17                  0.11
       p value       0.037*                0.496                 0.741
      Communication
       Control       35    5.14   0.55     64    5.41   0.61     72    5.40   0.66
       25% group     49    5.29   0.68     84    5.27   0.70     86    5.22   0.95
       50% group     51    5.16   0.73     88    5.22   0.90     84    5.42   0.70
       Total         135   5.20   0.67     236   5.29   0.76     242   5.34   0.79
       Effect size   0.23                  0.24                  0.24
       p value       0.531                 0.309                 0.203
      Professionalism
       Control       35    5.34   0.59     64    5.58   0.53     72    5.54   0.58
       25% group     49    5.45   0.58     84    5.43   0.70     86    5.30   0.95
       50% group     51    5.33   0.71     88    5.41   0.85     84    5.50   0.65
       Total         135   5.38   0.63     236   5.46   0.72     242   5.44   0.76
       Effect size   0.19                  0.24                  0.30
       p value       0.617                 0.317                 0.096
      Management of Responsibilities
       Control       35    4.77   0.77     64    5.17   0.75     72    5.26   0.73
       25% group     49    5.14   0.71     84    5.18   0.70     86    5.13   0.94
       50% group     51    5.00   0.85     88    5.03   0.88     84    5.32   0.73
       Total         135   4.99   0.79     236   5.12   0.78     242   5.24   0.81
       Effect size   0.51                  0.19                  0.23
       p value       0.102                 0.405                 0.284
      * Statistically significant p value.
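The report does not name the effect-size measure used in Table 17, but the values are consistent with a pooled-SD Cohen's d computed between the lowest- and highest-rated groups (an assumption, not stated in the source). A sketch for the 6-week clinical knowledge ratings:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d between two groups using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# 6-week clinical knowledge ratings (Table 17): control vs. 25% group,
# the lowest- and highest-rated groups at that time point.
d = cohens_d(4.86, 0.65, 35, 5.29, 0.61, 49)
print(round(d, 2))  # → 0.69, close to the reported effect size of 0.68
```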
      The clinical knowledge ratings for all new graduates were similar at each survey period: 5.11 for the 6-week period, 5.15 for the 3-month period, and 5.17 for the 6-month period. A different pattern emerged from the new graduates' self-assessments. Though there were no differences among the study groups, the new graduates gave themselves higher ratings in clinical knowledge at the end of the nursing program than in practice. The self-assessment ratings increased over time, but even after 6 months of practice, new graduates were not rating their clinical knowledge as high as they did before graduating from their nursing programs.
      For technical skills, the manager ratings increased over time. The new graduates rated themselves lower after 6 weeks than at graduation. After 6 months, however, new graduates gave themselves ratings similar to those at graduation (ratings of 4.95).

      Critical Thinking

      Managers and new graduates also completed the Critical Thinking Diagnostic, the same instrument used before graduation in Part I of the study. The five categories that make up the Critical Thinking Diagnostic are problem recognition, clinical decision making, prioritization, clinical implementation, and reflection.
      Results of the Critical Thinking Diagnostic are similar to those of the NGNPS; new graduates from all three groups were given high ratings by their managers. Both manager and self-ratings increased over time, but the new graduates generally rated themselves lower than their managers did. Table 18 shows the manager ratings for each survey period, and Table F3 in Appendix F provides the results of the new graduate self-ratings.
      Table 18. Manager Ratings of Critical Thinking (Critical Thinking Diagnostic)
                     6 week                 3 month                6 month
                     n     Mean   SD        n     Mean   SD        n     Mean   SD
      Problem Recognition
       Control       35    4.98   0.65     63    5.22   0.58     72    5.36   0.60
       25% group     49    5.20   0.66     83    5.27   0.61     85    5.17   0.88
       50% group     51    4.89   0.68     87    5.04   0.81     84    5.28   0.57
       Total         135   5.03   0.67     233   5.17   0.69     241   5.27   0.70
       Effect size   0.46                  0.32                  0.25
       p value       0.063                 0.081                 0.251
      Clinical Decision Making
       Control       35    5.16   0.50     64    5.29   0.59     72    5.40   0.53
       25% group     49    5.31   0.60     84    5.40   0.53     86    5.28   0.88
       50% group     51    5.19   0.56     88    5.16   0.82     84    5.36   0.53
       Total         135   5.23   0.56     236   5.28   0.67     242   5.34   0.67
       Effect size   0.28                  0.35                  0.16
       p value       0.368                 0.056                 0.491
      Prioritization
       Control       35    4.89   0.59     64    5.18   0.69     72    5.36   0.68
       25% group     49    5.13   0.64     84    5.26   0.62     84    5.25   0.90
       50% group     50    5.00   0.63     88    5.05   0.82     84    5.35   0.57
       Total         134   5.02   0.63     236   5.16   0.72     240   5.32   0.73
       Effect size   0.39                  0.29                  0.14
       p value       0.211                 0.163                 0.559
      Clinical Implementation
       Control       35    5.05   0.55     63    5.27   0.60     71    5.40   0.49
       25% group     48    5.18   0.62     84    5.31   0.59     85    5.21   0.92
       50% group     49    5.09   0.58     87    5.17   0.81     83    5.38   0.55
       Total         132   5.11   0.58     234   5.25   0.68     239   5.33   0.70
       Effect size   0.21                  0.20                  0.25
       p value       0.615                 0.377                 0.116
      Reflection
       Control       35    5.08   0.55     64    5.33   0.66     72    5.42   0.53
       25% group     49    5.30   0.49     84    5.39   0.56     85    5.26   0.92
       50% group     50    5.21   0.49     87    5.20   0.81     84    5.39   0.58
       Total         134   5.21   0.51     235   5.30   0.69     241   5.35   0.71
       Effect size   0.43                  0.27                  0.21
       p value       0.138                 0.154                 0.325

      Global Assessment of Clinical Competence and Readiness for Practice

      At the end of the survey, managers were asked to consider the items they just completed and any other aspects of nursing care relevant to overall clinical competency and readiness for practice and then to rate the new graduate on a scale of 1 to 10 (1=among the weakest, 10=among the best). Ratings of overall competence were high for all three groups, and ratings increased between 6 weeks of practice and 6 months. There were no statistical differences among the groups. (See Table 19.)
      Table 19. Manager Global Assessment of Readiness for Practice
                     6 week                 3 month                6 month
                     n     Mean   SD        n     Mean   SD        n     Mean   SD
       Control       35    7.94   1.37     64    8.39   1.35     72    8.60   1.37
       25% group     49    8.18   1.36     84    8.36   1.32     86    8.37   1.46
       50% group     51    8.02   1.38     88    8.15   1.61     84    8.55   1.16
       Total         135   8.06   1.37     236   8.29   1.44     242   8.50   1.33
       Effect size   0.18                  0.16                  0.16
       p value       0.706                 0.511                 0.527
      Table 20 shows the new graduates' self-ratings of overall clinical competency and readiness for practice. As with the manager ratings, self-assessments increased as the new graduates gained experience over time. There were no statistically significant differences among the three groups.
      Table 20. New Graduate Nurse Self-Ratings of Global Assessment of Readiness for Practice
                     6 week                 3 month                6 month
                     n     Mean   SD        n     Mean   SD        n     Mean   SD
       Control       68    7.15   1.18     92    7.15   1.46     98    7.39   1.27
       25% group     99    7.03   1.33     106   7.22   1.49     108   7.36   1.34
       50% group     99    7.29   1.31     110   7.47   1.23     109   7.76   1.12
       Total         266   7.16   1.28     308   7.29   1.40     315   7.51   1.26
       Effect size   0.20                  0.24                  0.32
       p value       0.356                 0.216                 0.033*
      * Statistically significant p value.

      Research Question 2

      Are there differences among new graduates from the three study groups in acclimation to the role of the registered nurse?
      Acclimation to the role of the professional nurse is multifactorial. Several concepts were chosen to assess new graduate acclimation: leaving the first nursing position, charge nurse responsibilities, and workplace stress. The results are descriptive and should be interpreted cautiously, as it was not possible to control for differences in work environment or for participation in nurse residency or transition-to-practice programs.

      Preparation for Practice

      The new graduates were asked how well their clinical experiences (traditional and simulated) prepared them for practice as an RN. Generally, the new graduates reported feeling “quite a bit prepared” or “very well prepared.” (See Table F4 in Appendix F.) After 6 weeks of working as an RN, 66% of the graduates reported they felt “quite a bit prepared” or “very well prepared” for practice based on their clinical experiences during their nursing program. The 50% group consistently reported higher levels of feeling “prepared for practice” compared with their study peers.

      Left First Nursing Position

      By the 6-month survey, 25 new graduates reported leaving their first nursing position; 4 left at or before the 6-week survey; another 5 left by the 3-month survey; and 18 more left by the 6-month survey. (See Table 21.) There were no statistically significant differences among the groups (p=0.578). Reasons for leaving included accepting a position with a better schedule or better compensation, accepting a position in a first-choice specialty area, and having a spouse who was relocating.
      Table 21. New Graduates Who Left First Nursing Position
                             Total        Control      25% Group    50% Group    Evaluation of Significance
                             (n=348)      (n=104)      (n=123)      (n=121)
                             freq   %     freq   %     freq   %     freq   %     Cramer's V   p value
      Left first position    25     7.2   5      4.8   9      7.3   11     9.1   0.07         0.462

      Patient Loads and Charge Nurse Responsibilities

      After 6 months of practice, 67% of new graduates were working 12-hour shifts, and 43% were working the night shift. (See Table 22.) New graduates reported caring for an average of 8 patients per shift. When asked about the difficulty of recent patient-care assignments, 83% reported their assignments were “just right,” 5% said they were “not challenging enough,” and 12% said their patient-care responsibilities were “too challenging.” There were no statistical differences among the study groups on any workplace factors.
      Table 22. New Graduate Nurse Work Schedules and Patient Loads
                                Total        Control      25% Group    50% Group
                                (n=299)      (n=88)       (n=106)      (n=105)
      Current Work Schedule     n     %      n     %      n     %      n     %      Cramer's V   p value
       Day (7a-3p)              22    7.4    10    11.4   8     7.6    4     3.8    0.15         0.443
       Day (9a-5p)              15    5.0    0     0.0    10    9.4    5     4.8
       Day (12-hour shift)      74    24.8   21    23.9   26    24.5   27    25.7
       Evening (3p-11p)         16    5.4    5     5.7    5     4.7    6     5.7
       Night (11p-7a)           11    3.7    4     4.6    3     2.8    4     3.8
       Night (12-hour shift)    128   42.8   40    45.5   43    40.6   45    42.9
       Rotating                 28    9.4    7     8.0    9     8.5    12    11.4
       Other                    5     1.7    1     1.1    2     1.9    2     1.9
      There was a statistically significant difference among the new graduates regarding charge nurse responsibilities. Overall, 12% had charge nurse responsibilities within the first 6 months of practice. The control group had higher rates of charge nurse responsibilities (21%) than the 25% group (13%) and the 50% group (5%) (p=0.005). (See Table 23.)
      Table 23. Working as a Charge Nurse
             Total        Control      25% Group    50% Group
             (n=298)      (n=88)       (n=106)      (n=104)
             n     %      n     %      n     %      n     %      Cramer's V   p value
      Yes    37    12.4   18    20.5   14    13.2   5     4.8    0.19         0.005*
      No     261   87.6   70    79.6   92    86.8   99    95.2
      * Statistically significant p value.
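Cramer's V values like those in Tables 21 through 23 are derived from the chi-square statistic of the group-by-outcome contingency table. A sketch recomputing the Table 23 value from the charge-nurse counts (pure Python, no statistics library assumed):

```python
import math

def cramers_v(table):
    """Cramer's V for a contingency table given as a list of rows of counts."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
    # where expected = (row total * column total) / grand total.
    chi2 = sum(
        (obs - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )
    k = min(len(table), len(table[0])) - 1  # smaller dimension minus one
    return math.sqrt(chi2 / (n * k))

# Charge-nurse responsibilities (Table 23): [yes, no] counts per study group
counts = [[18, 70],   # control group
          [14, 92],   # 25% group
          [5, 99]]    # 50% group
print(round(cramers_v(counts), 2))  # → 0.19, matching the reported value
```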

      Workplace Stress

      In each survey, new graduates were asked questions about stress in the workplace. The majority agreed or strongly agreed that they were experiencing stress at work, and the percentage reporting stress increased over time. The only difference in stress ratings occurred on the 3-month survey, in which 25% of the control group strongly agreed with the statement “I am experiencing stress at work,” compared with 15% of the 25% group and 12% of the 50% group (p=0.03). (See Figure F1 in Appendix F.)
      When new graduates were asked whether they felt overwhelmed by patient-care responsibilities, the results were similar across study groups. Of all new graduates, 22% reported “often” or “almost always feeling overwhelmed” after 6 weeks of practice. These results remained stable: On the 3-month survey, 21% reported “feeling overwhelmed,” and on the 6-month survey, 20% were “feeling overwhelmed.” There were no statistically significant differences among the study groups. (See Figure F2 in Appendix F.)
      The third question regarding stress in the workplace asked new graduates how often in the last week they felt expectations of them were unrealistic. All new graduates reported the lowest levels of unrealistic expectations on the 6-week survey. The highest levels of unrealistic expectations were reported on the 3-month survey (12.7%). This level slightly decreased on the 6-month survey (11.4%). Again, there were no statistically significant differences among groups for any of the survey periods. (See Figure F3 in Appendix F.)

      Limitations

      All studies have some degree of limitation. Although students were randomly assigned to the study groups, the schools participating in the study were not randomly selected. The chosen schools had an interest in using simulation to educate nursing students, and they had a simulation laboratory and the equipment for the high volume of simulation required for the study. Not all schools may be prepared to begin or increase their simulation programs with the aggressive level of simulation used in this study.
      The preceptors and clinical instructors in Part I of the study and the managers in Part II were not blinded to the study group to which the students or new graduates were assigned. Ratings may be biased based on the raters' personal feelings about traditional clinical experiences and simulation experiences. During the study, some students reported that nurses in the clinical environment made statements to the study participants that hands-on clinical experiences were superior for teaching students the role of the nurse. Clinical instructor attitudes regarding traditional clinical and simulation experiences were not measured, so the extent to which these attitudes may have influenced the results cannot be determined.
      Another limitation was the distribution of the End-of-Program Surveys to the clinical preceptors for schools with a capstone course. Capstone students were responsible for providing the survey to their preceptors. Likewise in Part II of the study, new graduates were responsible for forwarding the electronic survey links to their managers. It is possible that weaker students and new graduate nurses did not provide the surveys to their managers, and therefore the results may reflect the ratings of stronger participants.

      Discussion

      There were no significant differences among study groups regarding end-of-program nursing knowledge, clinical competency, or overall readiness for practice. NCLEX pass rates were statistically equivalent, and managers gave all new graduates similar ratings in critical thinking, clinical competency, and overall readiness for practice. All evaluative measures produced the same results: Educational outcomes were equivalent when up to 50% of traditional clinical experience in the undergraduate nursing program was replaced by simulation.
      Although all studies have limitations, this study provides strong evidence supporting the use of simulation as a substitute for up to 50% of traditional clinical time and makes a substantial contribution to the literature in both nursing regulation and education. The strengths of the study were the longitudinal design, the large sample size, and the use of multiple data collection sites. The diversity of the sites is a strength: the results reflect both associate and baccalaureate programs, in urban and rural areas, from across the country. This diversity mirrors that of nursing programs overall and supports the generalizability of the results. The large sample provided adequate power to find statistical significance; however, it also produced statistical significance with nominal differences in some instances. Therefore, the results need to be interpreted carefully.
      Another strength of the study was the methodology employed to conduct the simulation experiences. A designated simulation team in each participating nursing program was taught theory-based simulation and debriefing methods. We believe this was integral to the positive outcomes of the study and consider it essential for any simulation experience.
      The consistent findings across the two time periods (educational period and early employment period) in two settings (academic setting and practice setting) with two types of evaluators (educators and managers) give further credence to the findings of this study.
      The demographic characteristics of the three study groups were consistent, but more students in the 50% group dropped out of the study; these students were older, male, and members of a minority population. However, the students in the 25% and 50% groups who remained in the study rated their simulation experiences highly, as indicated on the CLECS. More research may be needed in this area to ascertain whether simulation is suitable for all students.
      Other recent research studies on educational outcomes when simulation replaces a portion of traditional clinical experiences have reported findings similar to this study. Watson et al. conducted two multi-site, randomized, controlled trials in which 25% of clinical hours were replaced with standardized patient simulation experiences in physiotherapy programs in Australia. Both studies found no differences in clinical competency evaluations by an independent examiner when simulation replaced clinical experiences.
      Meyer et al. evaluated student performance when 25% of pediatric clinical hours were replaced with simulation. At the end of the course, no differences in clinical evaluation scores existed between those who experienced simulation and those who did not. Meyer's study used clinical faculty to assess the students on an ongoing basis in the clinical setting.
      Sportsman et al. found no differences in exit examination scores or graduating grade point averages when nursing students were exposed to simulation in place of clinical hours throughout their education, compared with a historical control group of students who were not exposed to simulation. They also found that senior nursing students whose clinical hours were replaced by simulation did not rate their clinical competency any differently than students not exposed to simulation. This finding differs from the current study, in which students who had 50% of their clinical experiences in simulation rated their clinical competency significantly higher than students in the control and 25% groups. Sportsman et al. do not provide a detailed description of how simulations or debriefings were conducted. The current study used the Debriefing for Meaningful Learning© method, and it is postulated that students in the 50% group received so much feedback over the 2 years of their education that they had a more positive opinion of their clinical abilities at the end of the nursing program.
      Schlairet and Fenster studied the effects of simulation “dose” when students received no simulation or 30%, 50%, or 70% simulation in place of clinical experiences in a nursing fundamentals course; the sequence of the delivery of simulation and direct-care experiences was also studied. They found no differences among the groups on standardized assessments of critical thinking or nursing knowledge. The 30% group that experienced simulation at the end of the course had significantly lower clinical judgment scores than the other students (Schlairet & Fenster). The investigators noted that the small sample size may have contributed to the nonsignificant results, although the current study validates their findings.
      The current study found that scores on standardized assessment tests on end-of-program comprehensive nursing knowledge were no different among the study groups. Even course-by-course level results found few meaningful differences among groups, with all students achieving high scores.
      Similarly, students with more simulated clinical experiences had clinical competency ratings that were comparable to those of students who spent the majority of their clinical hours in the traditional setting. Some nominal differences were found; for example, the control group received slightly higher ratings in the final clinical assessments of most courses. However, the end-of-program ratings made by the last clinical preceptor conducting a summative evaluation indicate no significant differences in critical thinking, clinical competency, or overall readiness for practice among the three groups. These results indicate that the skills learned in simulation transfer to the clinical setting. Transfer of learning from simulation to clinical practice has been a documented concern for many (Foronda, Liu, & Bauman; Sportsman, Schumacker, & Hamilton), and the nursing literature has started to address this concern. Considering smaller nursing studies (Alinier, Hunt, Gordon, & Harwood; Kirkman; Rutherford-Hemming), large systematic reviews of the medical literature (McGaghie, Issenberg, Petrusa, & Scalese), and the results of this study, learning that occurs in simulation does transfer to the clinical setting.
      All three groups achieved similar passing rates on the NCLEX examination. Not only were the three groups comparable, their passing rates were all above the 2013 national average passing rate of 80%.
      The follow-up surveys in Part II completed by the managers of new graduates support the findings from Part I: All three groups were well prepared for clinical practice. There were no meaningful differences among the groups in critical thinking, clinical competency, and overall readiness for practice as rated by managers after 6 weeks, 3 months, and 6 months of practice. These results come 6 years after Berkow, Virkstis, Stewart, and Conway surveyed nurse educators and nurse managers about the preparation of new graduates. They found that 90% of nurse educators believed their new graduates were prepared for clinical practice, but only 10% of managers believed new graduates were prepared for the reality of clinical practice. Using the same instrument, the clinical instructors and the managers in this study rated the participants similarly, agreeing that they were prepared for professional practice.
      At the end of their nursing program, all students rated themselves highly on clinical competence, critical thinking, and readiness for practice. The 50% group rated themselves statistically significantly higher than their peers, indicating the group with the most simulation experience had the most self-confidence. The simulation literature documents this as well (Bambini, Washburn, & Perkins; Jeffries & Rizzolo; Lambton, O'Neill, & Dudum). Additionally, the 50% group more often reported feeling “very well prepared” for practice, another indicator of self-confidence for those entering the nursing profession.
      All the study findings indicate that students were able to adapt to the method with which they were taught. The 25% and 50% simulation groups had experiences in both the traditional clinical and simulation environments and rated both highly in meeting their learning needs. Students in the control group spent the majority of their time in the traditional clinical setting and rated it better for meeting their learning needs. Students from all groups were rated highly by their clinical instructors in weekly as well as end-of-program clinical competency assessments. All groups scored high on nursing knowledge assessments throughout the program and on the end-of-program comprehensive examination. NCLEX passing rates were comparable among the groups. Manager ratings of clinical competence, critical thinking, and overall readiness for practice were consistent with Part I findings that there were no differences in outcomes among the three groups.

      Conclusion

      This study provides substantial evidence that up to 50% simulation can be effectively substituted for traditional clinical experience in all prelicensure core nursing courses under conditions comparable to those described in the study. These conditions include faculty members who are formally trained in simulation pedagogy, an adequate number of faculty members to support the student learners, subject matter experts who conduct theory-based debriefing, and equipment and supplies to create a realistic environment. Nursing programs should assure BONs that they are committed to the simulation program and have enough dedicated staff members and resources to maintain it on an ongoing basis.
      One of the most important ways to ensure high-quality simulation is to incorporate best practices into a simulation program; these best practices include terminology, professional integrity of the participant, participant objectives, facilitation, facilitator, the debriefing process, and participant assessment and evaluation (International Association for Clinical Simulation & Learning Board of Directors, Standards of Best Practice: Simulation).
      Expanding on the current study to explore other aspects of simulation is needed. The ratio of traditional clinical hours to simulation hours should be studied further; the current study used a 1:1 ratio, but other proportions may be effective as well. Research on active simulation participation for longer periods of time is also needed. In this study, for example, the student was often in an active nurse role only once a day for 15 to 30 minutes; the rest of the time, the student was an active observer. The effects of high-dose simulation that engages the student as an active participant throughout the clinical time period might indicate further uses for simulation and should be studied.
      This study makes a substantial contribution to nursing and the scientific literature, which has lacked a large-scale, multi-site study of simulation across the prelicensure nursing curriculum. This analysis provides valuable data for boards of nursing, which often receive requests from nursing programs to substitute time and activities in a simulation lab for clinical hours. The better regulators understand simulation and its impact on nursing education, the more effectively they can develop prelicensure education requirements, guide programs, and develop policy at the state level. In addition, this study provides important information to help nursing educators determine the best approaches for teaching students and shaping the future of nursing education.
      The most significant finding of this study is the effectiveness of two types of educational methods: traditional clinical and simulation experiences. In both environments, when structure, an adequately prepared faculty with appropriate resources, dedication, foresight, and vision are incorporated into the prelicensure nursing program, excellent student outcomes are achieved.

      Acknowledgments

      The authors would like to thank the following for their contributions to the National Simulation Study:
      The Deans and Directors at the 10 participating schools for their enthusiasm and commitment to the project and their shared vision of its importance.
      The study team members for their dedication to this project on a daily basis.
      Nursing program team leaders: Kristen Zulkosky, PhD, RN, CNE, Kay Buchanan, MSN, RN, Donna Enrico, MBA, BSN, RN, Pat Riede, MSN, MBA, RN, CNE, Henry Henao, MSN, BSN, BBA, Linda Fluharty, MSN, RN, Rochelle Quinn, MSN, RN, Joyce Vazzano, MS, RN, CRNP, Sandy Swoboda, MS, RN, FCCM, Maggie Neal, PhD, RN, Pam Anthony, MSN, RN, Erin McKinney, MN, RNC, Susan Poslusny, PhD, RN, Kathy Masters, DNS, RN, and Kevin Stevens, MSN, RN, for their leadership and study management.
      Kim Leighton, PhD, MSN, Mary Tracy, PhD, RN, Martha Todd, MS, APRN, Julie Manz, MS, RN, Kim Hawkins, PhD, APRN, Maribeth Hercinger, PhD, RN, BC, and The Advisory Board Company— for use of their evaluation instruments.
      Katie Adamson, PhD, RN, for use of training videos; Kris Dreifuerst, MS, RN, ACNS-BC, CNE, for the use of the Debriefing for Meaningful Learning© model and for instruction of the study teams in its use; Mary Ann Rizzolo, EdD, RN, FAAN, ANEF, and the National League for Nursing for access to scenarios used in the study; Cheryl Feken, MS, RN, for the use of her scenario template; Kris Selig, MS, RN, for converting hundreds of scenarios into the template format; and Jessica Kamerer, MSN, RNC-NIC, CHSE, Joe Corvino, MEd, BS, and Frank Brophy III, MS, for programming scenarios for each type of simulator. Becky Bunderson, MS, RN, CHSE, and Boise State University faculty and students for piloting scenarios.
      Lou Fogg, BS, PhD, Brigid Lusk, PhD, RN, and Julie Zerwic, PhD, RN, FAHA, FAAN, for ongoing review of the data and safety determinations.
      NCSBN staff: Nancy Spector, PhD, RN, FAAN, for assistance with the design of the study; Jo Silvestre, MSN, RN, for assistance planning and conducting pilot study work; and Laura Jarosz, Renee Sednew, Esther White, and Lindsey Gross for administrative assistance.
      The hundreds of clinical instructors and preceptors who completed ratings on the students and, most importantly, the students who participated in the study—without them, this study would not have been possible.

      References

        • Alinier G.
        • Hunt B.
        • Gordon R.
        Determining the value of simulation in nurse education: Study design and initial results.
        Nurse Education in Practice. 2004; 4: 200-207
        • Alinier G.
        • Hunt B.
        • Gordon R.
        • Harwood C.
        Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education.
        Journal of Advanced Nursing. 2006; 54: 359-369
        • Bambini D.
        • Washburn J.
        • Perkins R.
        Outcomes of clinical simulation for novice nursing students: Communication, confidence, clinical judgment.
        Nursing Education Perspectives. 2009; 30: 79-82
        • Bearnson C.S.
        • Wiker K.M.
        Human patient simulators: A new face in baccalaureate nursing education at Brigham Young University.
        Journal of Nursing Education. 2005; 44: 421-425
        • Berkow S.
        • Virkstis K.
        • Stewart J.
        • Aronson S.
        • Donohue M.
        Assessing individual frontline nurse critical thinking.
        Journal of Nursing Administration. 2011; 41: 168-171
        • Berkow S.
        • Virkstis K.
        • Stewart J.
        • Conway L.
        Assessing new graduate nurse performance.
        Journal of Nursing Administration. 2008; 38: 468-474
        • Bremner M.N.
        • Aduddell K.
        • Bennett D.N.
        • VanGeest J.B.
        The use of human patient simulators: Best practices with novice nursing students.
        Nurse Educator. 2006; 31: 170-174
        • Budden J.
        The intra-rater reliability of nurse supervisor competency ratings.
        National Council of State Boards of Nursing, Chicago, IL, 2013 (Unpublished manuscript)
        • Childs J.C.
        • Sepples S.
        Clinical teaching by simulation: Lessons learned from a complex patient care scenario.
        Nursing Education Perspectives. 2006; 27: 154-158
        • Cohen J.
        Statistical power analysis for the behavioral sciences.
        2nd ed. Lawrence Erlbaum Associates, Hillsdale, NJ, 1988
        • Cook D.A.
        • Hatala R.
        • Brydges R.
        • Zendejas B.
        • Szostek J.H.
        • Wang A.T.
        • Erwin P.J.
        • Hamstra S.J.
        Technology-enhanced simulation for health professions education: A systematic review and meta-analysis.
        JAMA: The Journal of the American Medical Association. 2011; 306: 978-988
        • Dreifuerst K.T.
        Debriefing for meaningful learning: Fostering development of clinical reasoning through simulation.
        Indiana University Scholar Works Repository, 2010 (Doctoral dissertation)
        • Foronda C.
        • Liu S.
        • Bauman E.
        Evaluation of simulation in undergraduate nurse education: An integrative review.
        Clinical Simulation in Nursing. 2013; 9: e409-e416
        • Gaba D.M.
        The future vision of simulation in healthcare.
        Simulation in Healthcare: Journal of the Society for Simulation in Healthcare. 2004; 2: 126-135
        • Hayden J.
        Use of simulation in nursing education: National survey results.
        Journal of Nursing Regulation. 2010; 1: 52-57
        • Hayden J.K.
        • Jeffries P.R.
        • Kardong-Edgren S.
        • Spector N.
        The National Simulation Study: Evaluating simulated clinical experiences in nursing education.
        National Council of State Boards of Nursing, Chicago, IL, 2009 (Unpublished research protocol)
        • Hayden J.K.
        • Keegan M.
        • Kardong-Edgren S.
        • Smiley R.A.
        Reliability and validity testing of the Creighton Competency Evaluation Instrument for use in the NCSBN National Simulation Study.
        Nursing Education Perspectives. 2014; https://doi.org/10.5480/13-1130.1 (in press)
        • Hovancsek M.T.
        Using simulation in nursing education.
        in: Jeffries P.R. Simulation in nursing education: From conceptualization to evaluation. National League for Nursing, New York, NY, 2007: 1-9
        • Hsu C.-C.
        • Sandford B.A.
        The Delphi technique: Making sense of consensus.
        Practical Assessment, Research & Evaluation. 2007; 12: 1-8
        • International Nursing Association for Clinical Simulation & Learning Board of Directors
        Standards of best practice: Simulation.
        Clinical Simulation in Nursing. 2013; 9: S1-S32
        • Ironside P.M.
        • McNelis A.M.
        Clinical education in prelicensure nursing programs: Findings from a national study.
        Nursing Education Perspectives. 2009; 31: 264-265
        • Issenberg S.B.
        • McGaghie W.C.
        • Petrusa E.R.
        • Gordon D.L.
        • Scalese R.J.
        Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review.
        Medical Teacher. 2005; 27: 10-28
        • Jeffries P.R.
        A framework for designing, implementing, and evaluating simulations used as teaching strategies in nursing.
        Nursing Education Perspectives. 2005; 26: 96-103
        • Jeffries P.R.
        • Rizzolo M.A.
        Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national multi-site, multi-method study.
        National League for Nursing, New York, NY, 2006
        • Jeffries P.R.
        • Rogers K.J.
        Theoretical framework for simulation design.
        in: Jeffries P.R. Simulation in nursing education from conceptualization to evaluation. National League for Nursing, New York, NY, 2012: 25-41
        • Katz G.B.
        • Peifer K.L.
        • Armstrong G.
        Assessment of patient simulation use in selected baccalaureate nursing programs in the United States.
        Simulation in Healthcare. 2010; 5: 46-51
        • Kirkman T.R.
        High fidelity simulation effectiveness in nursing students' transfer of learning.
        International Journal of Nursing Education Scholarship. 2013; 10: 171-176
        • Kuiper R.
        • Heinrich C.
        • Matthias A.
        • Graham M.J.
        • Bell-Kotwall L.
        Debriefing with the OPT model of clinical reasoning during high fidelity patient simulation.
        International Journal of Nursing Education Scholarship. 2008; 5 (Article 17)
        • Lambton J.
        • O'Neill S.P.
        • Dudum T.
        Simulation as a strategy to teach clinical pediatrics within a nursing curriculum.
        Clinical Simulation in Nursing. 2008; 4. https://doi.org/10.1016/j.ecns.2008.08.001
        • Lapkin S.
        • Levett-Jones T.
        • Bellchambers H.
        • Fernandez R.
        Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: A systematic review.
        Clinical Simulation in Nursing. 2010; 6: e207-e222
        • Laschinger S.
        • Medves J.
        • Pulling C.
        • McGraw R.
        • Waytuck B.
        • Harrison M.
        • Gambeta K.
        Effectiveness of simulation on health profession students' knowledge, skills, confidence and satisfaction.
        International Journal of Evidence-Based Healthcare. 2008; 6: 278-302
        • LeFlore J.L.
        • Anderson M.
        • Michael J.
        • Engle W.D.
        • Anderson J.
        Comparison of self-directed learning versus instructor-modeled learning during a simulated clinical experience.
        Simulation in Healthcare. 2007; 2: 170-177
        • Meakim C.
        • Boese T.
        • Decker S.
        • Franklin A.E.
        • Gloe D.
        • Lioce L.
        • Sando C.R.
        • Borum J.C.
        Standards of Best Practice: Simulation Standard I: Terminology.
        Clinical Simulation in Nursing. 2013; 9: S3-S11
        • Meyer M.N.
        • Connors H.
        • Hou Q.
        • Gajewski B.
        The effect of simulation on clinical performance: A junior nursing student clinical comparison study.
        Simulation in Healthcare. 2011; 6: 269-277
        • McGaghie W.C.
        • Issenberg S.B.
        • Petrusa E.R.
        • Scalese R.J.
        A critical review of simulation-based medical education research: 2003–2009.
        Medical Education. 2010; 44: 50-63
        • National Council of State Boards of Nursing
        NCLEX-RN test plan [PDF file].
        • National Council of State Boards of Nursing
        NCLEX pass rates [PDF file].
        • Nehring W.M.
        History of simulation in nursing.
        in: Nehring W.M. Lashley F.R. High-fidelity patient simulation in nursing education. Jones and Bartlett, Sudbury, MA, 2010: 3-26
        • Nehring W.M.
        • Lashley F.R.
        Current use and opinions regarding human patient simulators in nursing education: An international survey.
        Nursing Education Perspectives. 2004; 25: 244-248
        • Pauly-O'Neill S.
        Beyond the five rights: Improving patient safety in pediatric medication administration through simulation.
        Clinical Simulation in Nursing. 2009; 5: e181-e186. https://doi.org/10.1016/j.ecns.2009.05.059
        • Radhakrishnan K.
        • Roche J.P.
        • Cunningham H.
        Measuring clinical practice parameters with human patient simulation: A pilot study.
        International Journal of Nursing Education Scholarship. 2007; 4 (Article 8)
        • Rutherford-Hemming T.
        Learning in simulated environments: Effect on learning transfer and clinical skill acquisition in nurse practitioner students.
        Journal of Nursing Education. 2012; 51: 403-406
        • Scherer Y.K.
        • Bruce S.A.
        • Runkawatt V.
        A comparison of clinical simulation and case study presentation on nurse practitioner students' knowledge and confidence in managing a cardiac event.
        International Journal of Nursing Education Scholarship. 2007; 4 (Article 22)
        • Schlairet M.C.
        • Fenster M.J.
        Dose and sequence of simulation and direct care experiences among beginning nursing students: A pilot study.
        Journal of Nursing Education. 2012; 51: 668-675
        • Schoening A.M.
        • Sittner B.J.
        • Todd M.J.
        Simulated clinical experience: Nursing students' perceptions and the educators' role.
        Nurse Educator. 2006; 31: 253-258
        • Sportsman S.
        • Schumacker R.E.
        • Hamilton P.
        Evaluating the impact of scenario-based high fidelity patient simulation on academic metrics of student success.
        Nursing Education Perspectives. 2011; 32: 259-265
        • Watson K.
        • Wright A.
        • Morris N.
        • McMeeken J.
        • Rivett D.
        • Blackstock F.
        • Jull G.
        Can simulation replace part of clinical time? Two parallel randomized controlled trials.
        Medical Education. 2012; 46: 657-667