The NCSBN National Simulation Study: A Longitudinal, Randomized, Controlled Study Replacing Clinical Hours with Simulation in Prelicensure Nursing Education
Providing high-quality clinical experiences for students has been a perennial challenge for nursing programs. Short patient length of stays, high patient acuity, disparities in learning experiences, and the amount of time instructors spend supervising skills have long been issues. More recently, other challenges have emerged: more programs competing for limited clinical sites, faculty shortages, facilities not granting students access to electronic medical records, and patient safety initiatives that decrease the number of students allowed on a patient unit or restrict their activity to observing care.
With high-fidelity simulation, educators can replicate many patient situations, and students can develop and practice their nursing skills (cognitive, motor, and critical thinking) in an environment that does not endanger patients. As the sophistication of simulation has grown over the last 10 years, the number of schools using it has increased as well, and boards of nursing (BONs) have received requests from programs for permission to use simulation to replace some traditional clinical experience hours. However, the existing literature does not provide the level of evidence that BONs need to make a decision on simulation as a replacement strategy. Though studies indicate that simulation is an effective teaching pedagogy, they lack the rigor and generalizability to provide the evidence needed to make policy decisions. The NCSBN National Simulation Study, a large-scale, randomized, controlled study encompassing the entire nursing curriculum, was conducted to provide the needed evidence.
Incoming nursing students from 10 prelicensure programs across the United States were randomized into one of three study groups:
●
Control: Students who had traditional clinical experiences (no more than 10% of clinical hours could be spent in simulation)
●
25% group: Students who had 25% of their traditional clinical hours replaced by simulation
●
50% group: Students who had 50% of their traditional clinical hours replaced by simulation.
The study began in the Fall 2011 semester with the first clinical nursing course and continued throughout the core clinical courses through graduation in May 2013. Students were assessed on clinical competency and nursing knowledge, and they rated how well their learning needs were met in both the clinical and simulation environments.
A total of 666 students completed the study requirements at the time of graduation. At the end of the nursing program, there were no statistically significant differences in clinical competency as assessed by clinical preceptors and instructors (p=0.688); there were no statistically significant differences in comprehensive nursing knowledge assessments (p=0.478); and there were no statistically significant differences in NCLEX® pass rates (p=0.737) among the three study groups.
The study cohort was also followed for the first 6 months of clinical practice. There were no differences in manager ratings of overall clinical competency and readiness for practice at any of the follow-up survey time points: 6 weeks (p=0.706), 3 months (p=0.511), and 6 months (p=0.527) of practice as a new registered nurse.
The results of this study provide substantial evidence that substituting high-quality simulation experiences for up to half of traditional clinical hours produces comparable end-of-program educational outcomes and new graduates who are ready for clinical practice.
Introduction
Nursing education in the United States is at the crossroads of tradition and innovation. High-fidelity simulation is emerging to address 21st-century clinical education needs and move nursing forward into a new era of learning and critical thinking. However, this technology raises key questions:
●
Is high-fidelity simulation sufficient to help students adequately learn and meet the competencies demanded in a challenging, high-acuity, 21st-century practice environment?
●
How do student outcomes after simulation compare with those of traditional clinical education?
Traditionally, nursing students in the United States receive didactic instruction in the classroom setting and develop technical skills, enhance critical thinking, and learn the art and practice of nursing in a clinical environment. (Hereafter, these experiences in the clinical environment are referred to as traditional clinical experiences.) In the clinical environment, students are assigned patients and provide care under the supervision of a clinical instructor. Ideally, traditional clinical experiences offer a wide breadth of learning opportunities, allowing students to practice skills; increase clinical judgment and critical thinking; interact with patients, families, and members of the health care team; apply didactic knowledge to actual experience; and prepare for entry into practice.
However, the number of undergraduate programs has increased, creating more competition for clinical placement sites. Patient safety initiatives at some acute-care facilities have reduced the number of nursing students permitted on a patient unit at one time, creating even fewer educational opportunities. In addition, faculty members report that restrictions on what students may do in clinical facilities have increased and that the time students spend in clinical orientation is a barrier to optimizing students' clinical learning.
These recent issues, along with the existing challenges of clinical education (such as variability in patient acuity and census and decreased lengths of stay), have educators looking for new ways to prepare students for the complex health care environment.
No real alternatives to the traditional clinical model existed before the advent of increasingly sophisticated patient simulators. Medium- and high-fidelity human simulators appeared in medical education in the 1960s, but they did not appear in undergraduate nursing education programs until the late 1990s. The use of this technology accelerated in nursing programs in the mid-2000s as faculty realized that simulation allowed students to practice skills, critical thinking, and clinical decision making in a safe environment.
With the challenges of providing high-quality clinical experiences and the availability of high-fidelity manikins, the use of simulation in nursing education has grown rapidly. In 2002, researchers surveyed nursing schools and simulation centers on the use of patient simulators. To be included in the survey, a program had to have purchased a patient simulator from Medical Education Technologies, Inc. before 2002; only 66 nursing programs received surveys. Just 8 years later, a National Council of State Boards of Nursing (NCSBN) survey found that 917 nursing programs were using medium- or high-fidelity patient manikins in their curriculum.
As simulation use increased, boards of nursing (BONs) received requests from programs for permission to use simulation to replace some of the traditional clinical experience hours. However, the existing literature did not provide the level of evidence BONs needed to make a decision on simulation as a replacement strategy. In 2009, during discussions of nursing education, BONs raised concerns about the availability of clinical sites, the quality of the clinical experiences, the amount of time students were spending in observational experiences rather than providing direct care, and the amount of time clinical instructors were spending supervising skill performance. Many believed simulation could address these issues, though concerns existed:
●
How much simulation should be used?
●
Are students receiving a quality experience with simulation when nine students are observing and three are performing?
●
Can simulation be used for all undergraduate courses?
The existing literature did not provide the answers.
Review of the Literature
Simulation in the education of health care practitioners is not a new concept. Nehring notes that as early as 1847, the Handbook for Hospital Sisters called for “every nursing school to have 'a mechanical dummy, models of legs and arms to learn bandaging, a jointed skeleton, a black drawing board, and drawings, books, and models' (p. 34)” (p. 10).
Nehring describes Mrs. Chase, the first life-size manikin produced in 1911 for the purpose of nursing education. Over the years, Mrs. Chase underwent modifications and improvements and was joined by a male version and a baby version. Since then, tremendous advances in computer technology have provided nurse educators with the ability to design, develop, and implement complex learning activities in the academic setting. Nursing simulation with sophisticated computerized manikins began in the late 1990s and early 2000s.
With the advent of medium- and high-fidelity manikins, more nursing programs began incorporating them into their curriculum. The first study to describe the prevalence of simulation use was the 2002 survey noted above; 34 nursing programs and 6 simulation centers participated. The investigators found that simulation was used most frequently for teaching basic and advanced medical-surgical courses, physical assessment, and basic nursing skills. Of the 35 respondents, 57.1% (n=20) stated that simulation was used as part of clinical time; the other respondents stated that simulation rarely or never replaced clinical time.
A later electronic survey of baccalaureate programs accredited by the National League for Nursing (NLN) found that, of the 78 responding programs, 79% reported using human patient simulators; about half were using the simulators with case scenarios. Eighteen of the responding schools reported using simulation as a replacement for clinical hours, most frequently in nursing fundamentals, medical-surgical nursing, and obstetric nursing courses.
A 2010 national survey of prelicensure nursing programs found that 87% of respondents (n=917) were using high- or medium-fidelity simulation in their programs. High- and medium-fidelity simulation use was reported most frequently in foundations, medical-surgical, obstetric, and pediatric courses. Sixty-nine percent of respondents reported that they do or have on occasion substituted simulation for traditional clinical experiences. Substitution occurred most frequently in basic and advanced medical-surgical, obstetric, and pediatric courses, followed by nursing foundations courses. Like the earlier surveys, this national study documented the increasing trend toward incorporating simulation experiences into the prelicensure curriculum.
Simulation Outcome Studies
As the use of simulation in health care education programs increased, the literature on simulation grew as well; however, research on simulation outcomes understandably lagged behind. When High-Fidelity Patient Simulation in Nursing Education was published in 2010, Nehring found only 13 research articles on nursing student outcomes, namely, satisfaction with the simulation experience (6 studies), self-confidence (7 studies), self-ratings (4 studies), knowledge (4 studies), and skill performance or competence (3 studies). In these reports, the results were mixed. In most of the studies, students reported satisfaction with the simulation experience. One national multi-site, multi-method study of models for using simulation to teach nursing care of ill adults and children found significantly higher reports of self-confidence in the control group. Frequently, there were no significant differences between groups overall, but a subscale may have shown a significant difference. In general, these and other early studies had small sample sizes, lacked a control group, or lacked randomization, but they laid the groundwork for future research.
Other nurse scholars have conducted systematic reviews of the nursing literature with similar findings. The original intent of a review conducted by Lapkin and colleagues was to perform a meta-analysis of simulation outcomes in nursing. The initial search revealed 1,600 articles published between 1999 and 2009. A reasonably large number were research studies; however, even after a relaxation of the inclusion criteria, only eight studies could be included. The Lapkin et al. review found that simulation improved critical thinking, skills performance, and knowledge of subject matter. The evidence for an increase in clinical reasoning was inconclusive; however, three components of clinical reasoning (knowledge, critical thinking, and the ability to recognize deteriorating patients) improved with simulation.
Difficulty in reviewing the simulation research literature is not limited to nursing. Systematic reviews and meta-analyses of the health care literature identify a general lack of appropriately powered, rigorous studies. Issenberg and colleagues' (2010) review of 34 years of the medical simulation literature concluded, “While research in this field needs improvement in terms of rigor and quality, high-fidelity medical simulations are educationally effective and simulation-based education complements medical education in patient care settings.”
Another research team attempted a meta-analysis of all health care literature to provide a synthesis of the evidence on the effectiveness of simulation in prelicensure education, including medicine, nursing, and rehabilitation therapy, from 1995 to 2006. Though the initial literature review identified 1,118 papers, the meta-analysis could not be performed because of the types of study designs and the quality of the studies. Instead, the authors synthesized the evidence into a systematic review. They found that the use of simulators (from partial-task trainers through high-fidelity manikins) resulted in high learner satisfaction in learning clinical skills, but the overall results of the review were inconclusive on the effectiveness of simulation to train health care professionals. The authors concluded that simulation should be an adjunct to clinical practice, not a replacement: “It remains unclear whether the skills learned through a simulation experience transfer into real-world settings.”
All the literature reviews reach a common conclusion: Study results are inconclusive regarding the effectiveness of simulation, but they seem generally favorable. All agree that variability in study design, issues with sample sizes that cannot detect significant effect sizes, and an overall lack of controlled, longitudinal studies make it difficult to draw strong conclusions as to the effectiveness of simulation. The literature to date also indicates the need for rigorous research that is appropriately powered with a controlled comparison group.
Study Aims and Significance
The NCSBN National Simulation Study, a longitudinal, randomized, controlled trial using nursing programs across the United States, is the largest, most comprehensive study to date that explores whether simulated clinical experiences can be substituted effectively for traditional clinical experiences in the undergraduate nursing program. Students participating in the study were enrolled throughout the entire 2 years of their undergraduate nursing program. The new graduates were then followed for the first 6 months in their first clinical positions to determine long-term effects of simulation and whether replacing clinical hours with simulation impacts entry into professional practice.
The aims of this study were to provide BONs with evidence on nursing knowledge, clinical competency, and the transferability of learning from the simulation laboratory to the clinical setting. Specifically, the aims were as follows:
●
To determine whether simulation can be substituted for traditional clinical hours in the prelicensure nursing curriculum, using a large sample of students from different degree programs (associate degree [ADN] and bachelor's degree [BSN]) and various geographical regions of the country
●
To determine the educational outcomes of undergraduate nursing students in the core clinical courses when simulation is integrated throughout the core nursing curriculum
●
To determine whether varying levels of simulation in the undergraduate curriculum impact the practice of new graduate nurses in their first clinical positions.
This study is reported in two parts: Part I is a randomized, controlled study of nursing students during their educational programs, and Part II is a follow-up survey study of the new graduate nurses and their managers during the first 6 months of clinical practice. Appendix A provides definitions of terms used in this study.
National Simulation Study: Part I
Research Questions
1.
Does substituting simulation for 25% or 50% of clinical hours impact educational outcomes (knowledge, clinical competency, critical thinking, and readiness for practice) assessed at the end of the undergraduate nursing program?
2.
Are there course-by-course differences in nursing knowledge, clinical competency, and perception of learning needs being met among undergraduate students when 25% or 50% of traditional clinical hours are replaced with simulation?
3.
Are there differences in first-time NCLEX pass rates among students randomized to the control, 25% simulation, and 50% simulation groups?
Method
Trial Design
This was a comparison study using a randomized, controlled, longitudinal, multisite design to examine whether time and activities in a simulation laboratory could effectively substitute for traditional clinical hours in the prelicensure nursing curriculum.
In 2010, prelicensure nursing programs (ADN and BSN) throughout the United States were notified of the study and its requirements via postcard and were invited to apply for participation. After a review of the applications received (n=23) and telephone interviews, 10 nursing programs (five ADN and five BSN) were selected from geographically diverse areas. The programs represented rural and metropolitan communities and ranged from community colleges to large universities.
New students accepted into these programs and matriculating in the Fall 2011 semester (with an expected graduation after the Spring 2013 semester) were asked to participate in the study. All students who consented were randomized into one of three study groups:
●
Control: Students had traditional clinical experiences (no more than 10% of clinical hours could be spent in simulation)
●
25% Group: Students had 25% of their traditional clinical hours replaced by simulation
●
50% Group: Students had 50% of their traditional clinical hours replaced by simulation
Students remained in their assigned groups throughout the 2 years they were enrolled in the nursing program. Data from course outcomes (clinical competency and course-level ATI scores) and end-of-program outcomes (comprehensive ATI scores, clinical competency, critical thinking, and readiness for practice [End-of-Program Survey]) were collected from all programs and aggregated. These data were compared across the three study groups.
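As a rough illustration, the balanced random allocation described above could be sketched as follows. The function name, group labels, and use of a single balanced block are hypothetical conveniences; the study's actual allocation procedure is not specified at this level of detail and may have differed (for example, by stratifying within each site).

```python
import random

def randomize_students(student_ids, seed=None):
    """Randomly assign consented students to the three study arms
    (control, 25% simulation, 50% simulation) in balanced proportions.
    Illustrative sketch only, not the study's actual protocol."""
    rng = random.Random(seed)
    groups = ["control", "25% group", "50% group"]
    # Repeat the three labels enough times to cover every student,
    # then shuffle so the assignment order is random.
    labels = (groups * (len(student_ids) // 3 + 1))[: len(student_ids)]
    rng.shuffle(labels)
    return dict(zip(student_ids, labels))

# Hypothetical cohort of 90 consenting students at one site.
assignments = randomize_students([f"S{i:03d}" for i in range(90)], seed=42)
```

Shuffling a pre-balanced list of labels (rather than drawing a group independently per student) guarantees near-equal group sizes, which matters for the power of the between-group comparisons.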
Study Sites
Inclusion Criteria
Schools interested in applying for study participation had to meet the following criteria:
●
BON-approved prelicensure nursing education program
●
ADN or BSN program
●
National accreditation
●
NCLEX pass rates at or above the national rate
●
Maximum of 10% simulation use in any one of its current clinical courses
●
Willingness to randomize students to each of the three study groups
●
Access to a simulation laboratory that could accommodate the number of students and simulation scenarios required by the study
●
Willingness to designate and commit faculty and staff members (hereafter referred to as the study team) to conducting the study from August 2011 through May 2013
●
Availability of the study team to attend three training meetings.
For final selection, these factors were also considered:
●
Location of the program: Selected study sites were geographically distributed across the United States.
●
Prelicensure nursing curriculum: Selected study sites had comparable clinical course curricula.
Subjects
Inclusion Criteria
Students enrolled in the prelicensure-RN (ADN or BSN) program beginning Fall 2011 at a participating study site, with graduation anticipated in May 2013.
Exclusion Criteria
●
Accelerated BSN students
●
Degree completion students (RN to BSN students)
●
Any student who already held a nursing license (LPN/VN or RN)
Procedure
Institutional Review Board (IRB) approval was obtained by NCSBN through the Western IRB and from the IRB of each study site.
In the training sessions, study team members practiced actual simulation scenarios and conducted debriefings with volunteer students. During the final training session, experienced simulation faculty members from multiple simulation centers evaluated debriefings to ensure that the study team members were proficient in the techniques. Throughout the study, team leaders performed ongoing evaluations on their study team members to ensure debriefing methods met the study standards.
A standardized simulation curriculum was developed and provided to the participating programs to ensure that quality simulation scenarios were used at all study sites. A modified Delphi technique involving the study teams was used to determine the subject matter for the curriculum. The development of the simulation curriculum is described in Appendix B.
Subsequent to the development of the standardized simulation curriculum, simulation scenarios depicting the patient conditions and key concepts in the curriculum were obtained from publishers and distributed to the programs. When published scenarios were not available for some courses, such as mental health and community/public health, a call for scenarios went to members of the International Association for Clinical Simulation & Learning (INACSL). An expert in nursing simulation reviewed all donated scenarios to ensure they were consistent with the NLN/Jeffries Simulation Framework.
Faculty members from each program selected simulations from the provided curriculum that would meet their learning objectives. Other processes used to ensure uniformity across study sites included the provision of manikin programming files and consumable supplies necessary for running the scenarios, including labeled simulated medications.
Traditional Clinical Experiences
All participants had traditional clinical experiences during each of the seven core nursing courses. The only difference among groups was the number of hours spent in the traditional clinical environment.
Traditional clinical experiences took place in inpatient, ambulatory, or community settings selected by the schools. Students were assigned patients by clinical instructors and were expected to meet clinical objectives and competencies outlined for the courses. Students were evaluated by a clinical instructor at the end of each week using the Creighton Competency Evaluation Instrument (CCEI). Clinical instructors were required to complete training on the use of the CCEI data collection form before the first day of clinical education for the course.
Simulated Clinical Experiences
For students in the two simulation study groups, 25% or 50% of required clinical hours were spent in the simulation laboratory. Control-group students were allowed up to 10% of their clinical hours in simulation. Study sites were allowed flexibility in how they scheduled simulation hours, whether they held pre- or post-conferences, and whether students had to prepare for their “patient care” assignment. The programs were instructed to use requirements for simulation similar to those for the clinical setting. Simulation scenarios involved medium- or high-fidelity manikins, standardized patients, role playing, skills stations, and computer-based critical thinking simulations.
Throughout each scenario and debriefing, clinical instructors observed the two students in nursing roles and completed a CCEI form for those students. The same procedure was followed for all seven core courses. Because of the number of students participating in simulation, more than one clinical group was frequently in the simulation laboratory at the same time. Entire clinical groups rotated through stations throughout the simulation day. Appendix C depicts a sample simulation day schedule used by one of the study sites.
Outcome Measurements
The study measured students' knowledge, competency, and critical thinking as well as their perceptions of how well their learning needs were met.
Knowledge
At the end of the nursing program, knowledge was measured by the ATI RN Comprehensive Predictor® 2010 (Assessment Technologies Institute, LLC), a multiple-choice, Web-based, proctored examination. The examination reports a score as a percentage of correctly answered items as well as scores for the major content areas and eight nursing dimensions categories. The total score is based on 150 items.
Knowledge of the specialty content in each clinical course was measured using the ATI Content Mastery Series® (CMS) examinations for Fundamentals of Nursing, Adult Medical-Surgical Nursing, Maternal-Newborn, Nursing Care of Children, Mental Health, and Community Health. These examinations use a Web-based format and report scores as a percentage of correctly answered items as well as scores for major content areas and nursing dimension categories. The CMS program includes other features and services that were available to all study participants, but were not required for the study.
Clinical Competency
During the study, clinical competency was measured using three instruments: the Creighton Competency Evaluation Instrument (CCEI), the New Graduate Nurse Performance Survey (NGNPS), and the Global Assessment of Clinical Competency and Readiness for Practice.
Creighton Competency Evaluation Instrument
The CCEI is a 23-item tool used by clinical instructors to rate students on behaviors that collectively demonstrate clinical competency (assessment, communication, clinical judgment, and patient safety). The tool was used to assess students in the clinical setting and the simulation setting.
These data were used to monitor how well students were progressing clinically. Detailed validity and reliability statistics for the CCEI have been reported elsewhere. Overall, Cronbach's alpha ranged from 0.974 to 0.979, which is considered highly acceptable. Percent agreement between the faculty raters of the reliability and validity studies and an expert rater was reported at 70% or better for 20 of the 23 items.
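For readers unfamiliar with the statistic, Cronbach's alpha measures internal consistency: how closely a set of items (here, the 23 CCEI items) vary together across respondents. A minimal sketch of the computation, using made-up data rather than study data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item scores
    (one inner list of item scores per respondent).
    Illustrative implementation of the standard formula:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(scores[0])  # number of items

    def var(xs):
        # Sample variance (n-1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When items move in lockstep across respondents, alpha approaches 1 (as with the CCEI's 0.974 to 0.979); when items are unrelated, alpha falls toward 0.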
New Graduate Nurse Performance Survey
The NGNPS, developed by the Nursing Executive Center of the Advisory Board Company, consists of 36 items that assess clinical knowledge, technical skills, critical thinking, communication, professionalism, and management of responsibilities on a six-point Likert scale. Berkow et al. found the Cronbach's alpha coefficient to be 0.972, and the split-half reliability was 0.916 (K. Virkstis, personal communication, March 12, 2013).
Global Assessment of Clinical Competency and Readiness for Practice
The Global Assessment of Clinical Competency and Readiness for Practice scale consists of one question that asks the evaluator to rate the graduating student overall on a scale of 1 to 10 (1=among the weakest and 10=among the best). The reliability of this question has not been established; however, a similar question was used in a pilot study of continued competence of RNs. The question in that study was: “Given the above behaviors/tasks, and others that you feel are directly relevant, how would you rate this RN's performance on the competency Management of Care?” An intra-rater reliability of r=0.80, using a test-retest method 1 month apart, and an 81% agreement were obtained.
NCLEX
The NCLEX is “an examination that measures the competencies needed to perform safely and effectively as a newly licensed, entry-level registered nurse” (National Council of State Boards of Nursing [NCSBN]). This examination “assesses the knowledge, skills, and abilities that are essential for the entry-level nurse to use in order to meet the needs of clients requiring the promotion, maintenance, or restoration of health.” Content for the examination is based on a practice analysis survey of entry-level nurses conducted every 3 years. The NCLEX is administered using a computerized adaptive testing format in secured, proctored testing facilities.
Clinical Learning Environment Comparison Survey
The Clinical Learning Environment Comparison Survey (CLECS) assesses students' perceptions of how well their learning needs were met in the traditional clinical and simulation environments by rating each environment side by side on 29 items related to clinical learning. The instrument provides a total score and six subscale scores (communication, nursing process, holism, critical thinking, self-efficacy, and teaching-learning dyad); each subscale has a rating for the traditional clinical environment and the simulation environment. The reported Cronbach's alphas of the subscales in the traditional clinical environment ranged from 0.741 to 0.877, and Cronbach's alphas for the subscales in the simulation environment ranged from 0.826 to 0.913 (K. Leighton, personal communication, June 6, 2013).
Data Collection
When informed consent was obtained at the beginning of the study, students' demographic information was also collected. At the beginning of each clinical course, demographic information was obtained from the clinical faculty members who completed CCEI ratings on the participants.
The CCEI was used to assess clinical competency on an ongoing basis throughout the study. In the clinical setting, students were assessed individually once a week. In the simulation setting, two students were rated on their performance in the simulation and the debriefing. During simulation days, students were assessed at least once using the CCEI. CCEI scores were graphed, and the data trends were evaluated weekly as a safety indicator to determine if students were meeting the course objectives.
All CCEI scores obtained from the clinical and simulation settings were collected weekly. Scores from simulations were used by the participating sites and a data safety monitoring board (DSMB) to monitor academic progress but were not used as study outcome measurements. For the purposes of statistical comparison, the final CCEI rating from the clinical setting was used as a proxy for the final clinical competency rating for the course.
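The weekly safety-trend review described above might be automated along these lines. The function name, data shape, and drop threshold are illustrative assumptions, not the study's or the DSMB's actual tooling.

```python
def flag_declines(weekly_scores, drop_threshold=0.10):
    """Flag students whose weekly CCEI score (as a fraction of the total
    possible) fell by more than `drop_threshold` from one week to the next.
    Hypothetical sketch of the kind of week-over-week trend check a site
    or a data safety monitoring board could run."""
    flags = {}
    for student, scores in weekly_scores.items():
        drops = [
            (week + 2, prev - cur)  # week numbers are 1-based
            for week, (prev, cur) in enumerate(zip(scores, scores[1:]))
            if prev - cur > drop_threshold
        ]
        if drops:
            flags[student] = drops
    return flags

# Hypothetical weekly scores: student A declines sharply in week 3.
weekly = {"A": [0.80, 0.85, 0.60], "B": [0.70, 0.72, 0.75]}
flags = flag_declines(weekly)
```

A flagged week would prompt a closer look at whether the student is still meeting the course objectives, mirroring the graphed-trend review the study used.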
At the end of each core clinical course, students completed several assessments:
●
CLECS to assess how well learning needs were met in the clinical and simulation learning environments
●
ATI Content Mastery Series computerized assessments of nursing knowledge
●
Student information sheet to determine whether students worked as nursing assistants during the semester and whether they used additional ATI resources that could influence examination scores, and to collect qualitative comments about the study experience.
During the final weeks of the last semester, clinical preceptors and instructors were asked to complete the End-of-Program Preceptor Survey, which consisted of three instruments to assess clinical competency and critical thinking: NGNPS, Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. Completed surveys were mailed directly to the project director in prepaid reply envelopes.
At the end of the last semester, students completed the ATI Comprehensive Predictor 2010 for an assessment of overall nursing knowledge. Students also completed an end-of-program CLECS and the End-of-Program Survey. The end-of-program CLECS assessed overall perception of the traditional clinical and simulation settings. Students were instructed to consider all of their clinical courses and make selections based on their experiences overall in both learning environments. The End-of-Program Survey used the same scales as the preceptor version to obtain self-assessment ratings of clinical competency: the NGNPS, Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. To ensure confidentiality of the responses, students mailed completed surveys to the project director using prepaid reply envelopes.
New graduates were eligible to take the NCLEX-RN after graduating from their nursing program. NCLEX results were collected through December 31, 2013. Table 1 outlines the instruments used and the data collection schedule.
Table 1. Description of Data Collection Instruments

Instrument | Completed by | Information Collected | Timing
Demographic form | Students | Gender, age, race, previous degrees, previous health care or military experience | Beginning of study
Demographic form | Clinical instructors | Gender, age, race, length of RN and teaching experience, previous experience with simulation | Beginning of semester
Creighton Competency Evaluation Instrument (CCEI) | Clinical instructors | 23-item competency evaluation (total score and 4 subscales) | Each week of clinical and after every simulation scenario
A previously published study reported large effect sizes associated with the use of simulation, but that study did not examine different amounts of simulation. One might expect a large effect; however, the comparisons among the three amounts of simulation may have smaller effects. Based on these considerations, an effect size of d=0.35 was selected for the power analysis. This effect size falls between a conventionally small effect (d=0.20) and a medium effect (d=0.40). Assuming this effect size, a two-tailed alpha of 0.05, and a power of 0.92, a sample of 200 students per group was needed. With three groups, a total sample of 600 was required.
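The sample-size arithmetic can be reproduced with the standard two-sample normal-approximation formula. This is a sketch, not the study's actual computation, and it shows that 200 students per group sits comfortably above the formula's minimum:

```python
from math import ceil

from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.92):
    """Minimum n per group for a two-sided, two-sample comparison
    detecting effect size d (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # ~1.41 for power = 0.92
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# d=0.35, alpha=0.05, power=0.92 -> 185; the study enrolled 200 per group
print(n_per_group(0.35))
```

With three groups, 200 per group yields the total of 600 reported above, leaving headroom over the formula's minimum for attrition and design effects.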
Safety Monitoring
In addition to IRB approval, other mechanisms were used to ensure the study intervention was not compromising a study group or placing students at risk for poor performance.
A committee was established at each school to provide internal oversight for the study. Committees consisted of program administrators, the study team leader, other study team members, course faculty members, and other stakeholders. The committee reviewed study progress, resolved site-specific issues, tracked student progress, and provided a structured mechanism for communication regarding the study and any program effects.
A DSMB was established at NCSBN to review all study data on a continual basis and determine if the study should continue. Members of the DSMB included the project director, two statisticians, and two prelicensure nursing program directors whose schools were not involved in the study. The DSMB met regularly each semester to review data as it was collected. Aggregated national data and school level data were reviewed to ensure that the simulation groups were progressing in the nursing program and meeting program objectives. In addition to the weekly CCEI data, the DSMB reviewed end-of-semester ATI scores, CLECS ratings, grade point averages, attrition data, and adverse events. DSMB summary reports were submitted to each site for IRB review.
Recruitment and Randomization of Students
Student recruitment efforts began in the summer of 2011. All students received a detailed description of the study and were invited to participate. Informed consent was obtained from student volunteers, and demographic data were collected. The study team leader at each site assigned each study subject a study specific identification (ID) number. These ID numbers were forwarded to the lead statistician, who randomized students into one of three study groups (Control, 25% or 50%) using a standard random number generator from Statistical Analysis Systems (SAS).
The number of students in each cohort varied according to school and state requirements. Attempts were made to maintain a 1:1:1 ratio at each school. Students remained in the same study group assignment for the duration of the nursing program.
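The balanced assignment step could be sketched as follows. The study used SAS's random number generator; this Python analogue with hypothetical study IDs only illustrates how a 1:1:1 allocation can be produced:

```python
import random

def randomize(ids, groups=("Control", "25%", "50%"), seed=None):
    """Shuffle a balanced list of group labels and pair it with study IDs,
    keeping the three groups as close to a 1:1:1 ratio as possible."""
    rng = random.Random(seed)
    labels = [groups[i % len(groups)] for i in range(len(ids))]
    rng.shuffle(labels)
    return dict(zip(ids, labels))

# Hypothetical study IDs for one site (90 consenting students)
assignments = randomize([f"S{i:03d}" for i in range(1, 91)], seed=42)
```

Shuffling a pre-balanced label list, rather than drawing a random group per student, guarantees the group counts differ by at most one.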
Data Analysis
Data on students were collected throughout the 2 years of the study. Paper-based data collection forms were used for the majority of data collection instruments. Data were manually entered into a data spreadsheet using a double-key entry process. ATI scores were received directly from ATI on a per student basis as de-identified data, using the study ID numbers. SAS version 9.2 was used for all analyses.
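Double-key entry can be verified by diffing the two entry passes. This is a hypothetical sketch (the study IDs and field names are invented, not the study's actual tooling):

```python
def double_key_mismatches(entry_a, entry_b):
    """Compare two independent data-entry passes, keyed by study ID.
    Returns {study_id: [fields that disagree]} so each mismatch can be
    resolved against the original paper form."""
    mismatches = {}
    for sid in entry_a.keys() & entry_b.keys():
        bad = sorted(f for f in entry_a[sid]
                     if entry_a[sid][f] != entry_b[sid].get(f))
        if bad:
            mismatches[sid] = bad
    return mismatches

# Hypothetical passes: the second keyer transposed two digits
pass_1 = {"S001": {"ccei_total": 92, "week": 3}}
pass_2 = {"S001": {"ccei_total": 29, "week": 3}}
```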
Basic descriptive statistics were run on all data. Parametric and nonparametric tests were used as indicated for each type of data. Multivariate analysis of variance (MANOVA) procedures were employed to check possible covariates (school, gender, age, ethnicity, race, nursing assistant experience, previous degree, and use of additional ATI products) for interaction effects. When the covariates were included in the MANOVA model, the Wilks' Lambda value did not change substantially, and interaction effects were determined to be insignificant. For all tables, the effect sizes displayed represent the maximum effect size calculated when comparing the means of the control, 25%, and 50% groups to each other.
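The "maximum effect size" convention described above amounts to computing Cohen's d for each pair of groups and keeping the largest value. A sketch with hypothetical summary statistics (the means, SDs, and ns below are illustrative only):

```python
from itertools import combinations
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation."""
    pooled = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return abs(m1 - m2) / pooled

def max_pairwise_d(groups):
    """groups: iterable of (mean, sd, n) tuples; returns the largest
    pairwise effect size across all group comparisons."""
    return max(cohens_d(*a, *b) for a, b in combinations(groups, 2))

# Hypothetical (mean, SD, n) for the control, 25%, and 50% groups
d_max = max_pairwise_d([(72.1, 8.0, 218), (72.5, 8.2, 236), (73.0, 7.9, 212)])
```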
Results
Sample
A total of 847 students consented to participate in the study, and they were randomized into the three study groups. The number of participants randomized at each site ranged from 60 to 103. The study sample was 86% female, 84% white, and just under 18% Hispanic. At the start of the study, the mean age of the study sample was 26.3 years (SD 8.0, range 18–60). Also at the start of the study, almost 16% of the students were certified nurse assistants, 34% had a previous degree, and 3% had prior or current military experience. As Table 2 shows, there was very little variance in the demographic characteristics of the three groups. Statistical analysis showed no difference in demographic characteristics among the three groups with the exception of ethnicity. The statistically significant difference (p=0.043) was between the control group and the 25% group for the number of Hispanic participants.
A total of 666 students completed the study. The demographic characteristics of those students were similar to the demographic characteristics of those who began the study. Among students who completed the study, 87% were female, 87% were white, and just over 18% were Hispanic. Their mean age at the beginning of the study was 26.1 years (SD 7.5, range 18–57). At the end of the study, 34% of the graduating students indicated they worked as a nursing aide or an assistant at some point during the study.
Differences existed between the demographic characteristics of those who completed the study and those who did not. Nonwhite students, males, and older students tended to drop out of the study. The demographic characteristics of the students who did not complete the study were 82% female, 74% white, and almost 16% Hispanic. Their mean age at the beginning of the study was 27.4 years (SD 9.5, range 18–60). Table 3 lists the demographic characteristics of study participants who completed the study and those who did not.
Table 3. Demographics of Subjects by Completion Status
The rate of completion for the study sample was 79%. Completion rates were the same for the control group and 25% group at 81%; the completion rate for the 50% group was 74%. Students could withdraw from the study at any time, or they could be removed if they no longer met the eligibility criteria. The main reason for withdrawal from the study was not graduating on time. Aside from changing majors or leaving the nursing program, reasons for not graduating on time were dropping a required nursing course, failing a course, taking a leave of absence, and changing to part-time status.
The rate of course failures (theory or clinical) was 7.7% overall. The highest failure rate was in the control group (9.3%), and the lowest was in the 50% group (6.6%); these differences were not statistically significant (p=0.487). The rate of study withdrawal was 13.6% overall; however, the 50% group had a much higher rate of withdrawal (19.2%) than the control and 25% groups (9.3% and 12.0%, respectively, p=0.002). Table 4 outlines the reasons for not completing the study by study group.
Table 4. Reasons for Study Attrition

 | Overall | Control | 25% | 50% | p value
Number of students randomized | 847 | 268 | 293 | 286 |
Number of students completing the study | 666 | 218 | 236 | 212 |
Rate of completion | 78.6% | 81.3% | 80.5% | 74.1% | 0.072
Number of students who failed a course during the study | 66 | 25 | 22 | 19 |
Rate of failure | 7.8% | 9.3% | 7.5% | 6.6% | 0.487
Number of students who withdrew or were withdrawn from the study for any reason | | | | |
Statistically significant differences existed between those who completed the study and those who did not. Although the mean ages of these two groups appear similar, students age 35 or older were more likely to not complete the study. Males and Black/African-American and Asian students also had significantly higher rates of noncompletion. Completers and noncompleters differed significantly in nurse assistant experience as well: in all three study groups, a higher proportion of completers worked as a nursing assistant at some point during the study (p<0.001). The largest difference was in the 50% group, where 87% of completers worked as nursing assistants compared with 69% of noncompleters.
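The completion-rate comparison in Table 4 can be checked with a chi-square test on the completed versus did-not-complete counts. Using the counts reported above reproduces the table's p value of 0.072 (a sketch, assuming scipy is available):

```python
from scipy.stats import chi2_contingency

# Counts from Table 4: columns are Control, 25%, 50%
completed = [218, 236, 212]
randomized = [268, 293, 286]
not_completed = [r - c for r, c in zip(randomized, completed)]

# 2x3 contingency table: completion status by study group
chi2, p, dof, _ = chi2_contingency([completed, not_completed])
print(round(p, 3))  # ~0.072, matching Table 4
```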
Research Question 1
Does substituting clinical hours with 25% and 50% simulation impact educational outcomes (knowledge, clinical competency, critical thinking and readiness for practice) assessed at the end of the undergraduate nursing program?
Nursing Knowledge
The RN Comprehensive Predictor® 2010 was used to assess overall nursing knowledge at the end of the nursing program. There were no statistically significant differences among the three study groups in the total score (p=0.478). (See Figure 1.)
Figure 1. Mean ATI RN Comprehensive Predictor Scores (N=641)
Table 5 below lists the detailed results of the ATI subscale scores for each study group. Although the 50% group tends to have slightly higher percentages, the differences are minimal, and the majority of ATI scores have less than one percentage point difference among the three groups. These minimal differences are reflected in the small effect sizes and no statistically significant differences between the groups.
In the final weeks of the nursing program, clinical preceptors and instructors rated the clinical competency of the study participants using the NGNPS. There was less than a one-point difference among the mean scores across the three groups. On a scale of 1 to 6 (1=lowest rating, 6=highest), students in all three groups had mean scores above 5.0, indicating they all were rated as clinically competent by their preceptors or instructors. The effect sizes were small, and chi-square analysis indicated no statistical significance on any of the subscales. (See Table 6.)
Table 6. End-of-Program Survey Preceptor Ratings

Columns: Total Sample (n, Mean, SD) | Control Group (n, Mean, SD) | 25% Group (n, Mean, SD) | 50% Group (n, Mean, SD) | Evaluation of Significance (F value, Effect Size, p value, Significant Differences)

New Graduate Nurse Performance Survey® (1–6 scale)
Preceptors and instructors rated students on their ability to think critically on a scale of 1 to 6 (1=lowest rating, 6=highest rating). Again, students in all three groups had overall mean scores above 5.0. Though the overall mean scores tended to be slightly higher for the 25% group, the differences were minimal; chi-square analysis found no statistically significant differences, and effect sizes were small. (See Table 6.)
Global Assessment of Clinical Competence and Readiness for Practice
Preceptors and instructors gave students an overall rating of readiness for practice on a scale of 1 to 10 (1=among the weakest students, 10=among the best). As the bottom of Table 6 indicates, the mean scores of students in all three groups were above 8.0, and there were no statistically significant differences among the control, 25%, and 50% groups on ratings of global clinical competency and readiness for practice (p=0.688, d=0.10).
End-of-Program Survey Student Ratings
The study participants completed the End-of-Program Survey as a self-assessment. Table 7 shows the detailed results of the three instruments comprising this survey: the NGNPS, the Critical Thinking Diagnostic, and the Global Assessment of Clinical Competency and Readiness for Practice. The 50% group rated themselves higher than their peers on the New Graduate Nurse Performance Survey, but the differences were statistically significant only for the critical-thinking subscale (control group mean 5.13, SD 0.7; 25% group mean 5.10, SD 0.7; 50% group mean 5.30, SD 0.9; p=0.038).
Table 7. End-of-Program Survey Student Ratings

Columns: Total (n=472) | Control Group (n=153) | 25% Group (n=165) | 50% Group (n=154) — Mean and SD for each — and Evaluation of Significance (F value, Effect Size, p value, Significant Differences)

New Graduate Nurse Performance Survey® (1–6 scale)
On the Critical Thinking Diagnostic, the 50% group rated themselves significantly higher in every subscale. The 50% group also rated themselves significantly higher on the Global Assessment of Clinical Competency and Readiness for Practice (p=0.001). (See Table 7.)
Research Question 2
Are there course by course differences in nursing knowledge, clinical competency, and perception of learning needs being met among undergraduate students when traditional clinical hours are substituted with 25% and 50% simulation?
At the end of each core clinical course, students completed standardized tests of nursing knowledge, using the ATI Content Mastery Series. Clinical instructors rated the students' clinical competency progression during each week of clinical, using the CCEI, and at the end of each course, students completed the CLECS to compare how well traditional clinical settings and the simulation environment met their learning needs. For the CCEI, a percentage was calculated based on the items rated as competent divided by the number of items assessed.
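The CCEI percentage described above (items rated competent divided by items assessed) can be sketched as a small helper. The item names are hypothetical, and Not Applicable items are assumed to be excluded from the denominator, consistent with "items assessed":

```python
def ccei_percent(ratings):
    """ratings maps each CCEI item to 'competent', 'not_competent', or 'n/a'.
    Returns competent items / items actually assessed, as a percentage;
    'n/a' items are excluded from the denominator."""
    assessed = [r for r in ratings.values() if r != "n/a"]
    if not assessed:
        return None  # nothing observable this week
    return 100 * assessed.count("competent") / len(assessed)

# Hypothetical 23-item rating: 21 competent, 1 not competent, 1 N/A
week3 = {f"item_{i:02d}": "competent" for i in range(1, 22)}
week3["item_22"] = "not_competent"
week3["item_23"] = "n/a"
```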
Fundamentals of Nursing
Figure 2 illustrates the total score for the ATI Fundamentals of Nursing Assessment. The overall ATI scores were not statistically different for the three study groups (p=0.155). All three received similar scores overall and within the subscales; in some cases, the scores were separated by only fractions of a percent (Appendix D, Table D1).
Figure 2. Mean Total Score: ATI Fundamentals of Nursing Assessment (N=800)
The Fundamentals of Nursing CCEI scores for each study group were graphed separately for each week of the semester. (See Figure 3.) Competency ratings were lower at the beginning of the semester for all students and improved over time, as expected. All three groups had similar competency ratings throughout the semester.
Figure 3. Total CCEI Scores Assessed in the Traditional Clinical Setting for the Fundamentals of Nursing Course (N=714)
Table 8 shows the total score and subscale scores of the Fundamentals of Nursing CCEI rating for the three groups. The 25% and control groups received statistically significantly higher ratings than the 50% group; however, the competency ratings for the 50% group were still above 90%.
Table 8. Fundamentals of Nursing Final CCEI Clinical Ratings
Medical-Surgical Nursing

The ATI Medical-Surgical Nursing Assessment was administered after students received all medical-surgical nursing content; therefore, students in most schools took this test during the last semester of the program. Because this assessment was generally taken after advanced medical-surgical courses, the nursing knowledge results are presented in the next section of this paper.
Figure 4 shows the results of the weekly Medical-Surgical Nursing CCEI ratings for each group. As with the results for the Fundamentals of Nursing course, the assessment ratings started lower at the beginning of the semester and gradually increased over the semester. All three groups ended the semester with equally high ratings. There were no statistically significant differences among the three groups in the total score or any of the CCEI subscale scores. (See Table 9.)
Figure 4. CCEI Scores Assessed in the Traditional Clinical Setting for the Medical-Surgical Nursing Course (N=692)
The total score for ATI Medical-Surgical Nursing Assessment was significantly higher (p=0.005) for the 50% group (see Figure 5) compared with the control group; however, scores were separated by less than 3 percentage points.
Figure 5. Mean Total Score on ATI Medical-Surgical Nursing Assessment (N=683)
In every category and nursing dimension, one of the simulation groups had the highest score; however, the effect sizes were small to moderate. Table D2 in Appendix D shows the mean percentages with standard deviations and statistical findings for the Medical-Surgical Nursing assessment results.
Advanced Medical-Surgical Nursing

Most schools offered an advanced medical-surgical course at the end of the program. All three groups (see Figure 6) started out with unusually high ratings at the beginning of the semester and maintained high ratings throughout the course. These scores reflect the clinical abilities of students in their final clinical course, indicating that clinical instructors believe all the students were demonstrating competence in the final course.
Figure 6. CCEI Scores Assessed in the Traditional Clinical Setting for the Advanced Medical-Surgical Nursing Course (N=528)
The last CCEI rating by clinical instructors in the traditional clinical environment was statistically significant for the 25% group compared with the control group (p=0.025). However, Table 10 shows how similar the scores are; the overall difference between the 25% group and control group is 0.8%. Table 10 lists the total CCEI scores and subscale scores for all three study groups in the advanced medical-surgical course.
Table 10. Advanced Medical-Surgical Nursing Final CCEI Clinical Ratings
Maternal-Newborn Nursing

The Maternal-Newborn Nursing ATI end-of-course assessments were consistent with findings in other courses: The 50% group had total scores that were higher than the 25% group or control group, and the differences were statistically significant (p=0.011). However, less than three percentage points separated the three groups. Figure 7 shows the total score for each group, and Table D3 in Appendix D details all the subscale results by study group. Though not always statistically significant, the 50% group had the highest scores in every assessment category except Management of Care; the 25% group scored the highest in this category. Again, the effect sizes were small.
Figure 7. Mean Total Score on ATI Maternal-Newborn Nursing Assessment (N=680)
Figure 8 shows the weekly Maternal-Newborn Nursing CCEI scores. The control group had consistent ratings from week to week. The 25% group scores stayed lower for a longer period of time but came up at the end of the course. The 50% group ratings were more variable. Initially, this group shows the same gradual increase seen in the other courses; however, the ratings dip halfway through the semester and then increase.
Figure 8. CCEI Scores Assessed in the Traditional Clinical Setting for the Maternal-Newborn Nursing Course (N=693)
In the last clinical rating by clinical instructors during the Maternal-Newborn Nursing course, the control group had the highest ratings overall and in each of the subscales. Two of these higher scores (Total score p=0.022, Assessment p=0.038) were statistically significant; however, the mean scores for all groups were over 94% overall and for each of the subscales, indicating that all groups demonstrated clinical competency. Table 11 lists the scores for each group.
Table 11. Maternal-Newborn Nursing Final CCEI Clinical Ratings
Pediatric Nursing

The ATI Nursing Care of Children total scores were significantly higher for the 50% group (p=0.002, d=0.37). Again, the scores for the three groups are close, with only 3.4 percentage points separating them. (See Figure 9 and Table D4 in Appendix D.)
Figure 9. Mean Total Score on ATI Nursing Care of Children Assessment (N=620)
The Pediatric Nursing weekly clinical CCEI ratings for the control and the 25% group started high and remained high for the duration of the course. Ratings for the 50% group started out lower than their peers and then climbed until the end of the semester, when all three groups were receiving top ratings from their clinical instructors. (See Figure 10.)
Figure 10. CCEI Scores Assessed in the Traditional Clinical Setting for the Pediatric Nursing Course (N=686)
The final CCEI rating in the traditional clinical setting was lowest for the 50% group. The control and 25% groups received significantly higher ratings during their last clinical day compared with the 50% group. (See Table 12.) Total score and subscale scores for the control and 25% groups were similar. The 50% group, though receiving lower scores than their peers, received ratings at or above 92%, again indicating a high level of clinical competence in this course.
Table 12. Pediatric Nursing Final CCEI Clinical Ratings
Mental Health Nursing

Mental Health Nursing knowledge assessments show that the 50% group scored higher overall compared with the 25% group and control group. (See Figure 11.) Total ATI scores were significantly higher for the 50% group than the control group (p=0.011; d=0.30). However, there is less than a 3-point difference between the scores. Both the 25% and 50% groups had the highest scores in the categories and dimensions of the Mental Health Nursing assessment; however, only four of the subscale scores were significantly different, as outlined in Table D5 in Appendix D.
Figure 11. Mean Total Score on ATI Mental Health Nursing Assessment (N=633)
Figure 12 illustrates the CCEI ratings for the Mental Health Nursing course over the semester. The 25% group started the semester lower than the 50% and control groups; however, by the second week, its ratings were equivalent to the control group ratings. The 25% group and the control group had similar ratings throughout the semester.
Figure 12. CCEI Scores Assessed in the Traditional Clinical Setting for the Mental Health Nursing Course (N=665)
Clinical scores for the 50% group showed more variability than scores for the 25% group and control group. Although end of semester ratings were lower for the 50% group, the ratings were above the 90% level. (See Table 13.) The control group had the highest ratings for all five scores, and in three instances these differences were statistically significant. Still, the two simulation groups were rated over 93% in all areas.
Table 13. Mental Health Nursing Final CCEI Clinical Ratings
Interestingly, there were more instances of instructors using the Not Applicable option when completing the CCEI rating forms for Mental Health Nursing. This could indicate that the items on the CCEI do not pertain to the mental health environment or that the terminology on the CCEI form was less familiar to the instructors in the mental health setting and thus they had difficulty utilizing all the items on the form.
Community Health Nursing
There were no statistically significant differences among the three study groups for any of the Community Health Nursing assessment scores (p=0.387). Figure 13 shows the overall results for the three groups; Table D6 in Appendix D lists all the individual subscale results.
Figure 13. Mean Total Score on ATI Community Health Nursing Assessment (N=344)
Some schools did not have a separate Community Health Nursing course; instead, they integrated community health concepts into the medical-surgical, maternal-newborn, pediatric, and mental health courses. When community health clinical experiences or simulations were incorporated into another course, clinical instructors labeled the data collection forms according to the course title, resulting in fewer CCEI forms identified as community health nursing. Community Health Nursing courses had fewer clinical hours than other clinical courses and therefore provided fewer opportunities for assessments. Additionally, CCEI data collection was challenging because students in Community Health courses were in various settings at the same time, and clinical instructors could not always observe and assess them. Fewer CCEI rating forms were received for this course, and there were more occurrences of instructors using the Not Applicable option when completing the CCEI forms.
Figure 14 depicts the Community Health Nursing scores for the three groups over the semester. All three groups started out above the 90% level and remained there throughout the course, indicating that all groups received competent ratings by their clinical instructors. This is echoed in the final CCEI ratings, where all scores are over 94%. (See Table 14.)
Figure 14. CCEI Scores Assessed in the Traditional Clinical Setting for the Community Health Nursing Course (N=252)
The CLECS was utilized to obtain side-by-side ratings of traditional and simulation settings. Ratings are based on a 4-point scale (1=learning needs not met, 4=learning needs well met). The instrument produces a total score and six subscale scores, each having a rating for the traditional clinical environment and the simulation environment. Comparisons were completed in two ways: by comparing the three study groups on total scores and subscale scores in each environment and by comparing the clinical environment rating to the simulation environment rating within each group.
Students completed the CLECS at the end of each clinical course and again at the end of the program. Detailed tables from the end-of-program ratings and each clinical course are in Appendix E (Tables E1-E8).
At the end of the nursing program, students were asked to reflect on all their traditional clinical and simulation experiences throughout their nursing education and rate how well each environment met their learning needs overall. In every instance, the control group rated the traditional clinical environment higher than the simulation environment; the 50% group rated the simulation environment higher in every category; and the 25% group was in the middle with a tendency to rate the clinical environment higher than simulation for meeting their learning needs. Table 15 shows the mean ratings with standard deviations for each group, effect sizes, and p values of the between-group and within-group analyses for the overall ratings on the end-of-program CLECS.
Research Question 3

Are there differences in first-time NCLEX pass rates among students randomized into the control group and the groups with 25% and 50% of traditional clinical hours substituted with simulation?
A total of 660 study participants took the NCLEX examination as of December 31, 2013; two students from each of the study groups had not yet taken it by that date. The first-time pass rate of the study cohort overall was 86.8%. The pass rate was higher for the control group, but the difference was not statistically significant (p=0.737; v=0.04). (See Figure 15.) The pass rate for all three groups was higher than the national average of 80.2% during the same period.
The main findings for Part I of the study revealed no significant differences among the three study groups' end-of-program educational outcomes. Comprehensive nursing knowledge, preceptor and clinical instructor ratings of clinical competency, and NCLEX pass rates show no statistically significant differences when simulation experiences are used to replace a portion of traditional clinical hours. The two simulation groups perceived that their learning needs were being met, and they were able to synthesize theory content and perform as well as the control group on tests of nursing knowledge. The learning that occurred in simulation was translated to the clinical environment as evidenced by the high competency ratings made by clinical instructors.
National Simulation Study: Part II
To determine the long-term impact of substituting simulation for traditional clinical experience, the study subjects were followed for 6 months after beginning their first clinical position as an RN, evaluating performance in three areas (clinical competency, critical thinking, and readiness for practice). Part II of the study also evaluated acclimation to the role of the RN for any differences among the subjects from the three groups.
Research Questions
1. Are there differences in clinical competency, critical thinking, and readiness for practice among the new graduate nurses from the three study groups?
2. Are there differences among new graduates from the three study groups in acclimation to the role of RN?
Method
Procedure
All study subjects who completed Part I of the National Simulation Study and graduated from the prelicensure program were asked to participate in Part II of the study. Additional requirements for participation in Part II included passing the NCLEX and being employed as an RN in a clinical position by December 31, 2013.
Part I study subjects who agreed to participate in Part II provided contact information and agreed to notify the National Simulation Study project director of the start date for their first RN position. All Part II subjects were given written information for their managers and asked to inform managers of the study.
To assess clinical competency, critical thinking, readiness for practice, and acclimation to the RN role, the new graduates were sent a survey consisting of the following evaluation tools at 6 weeks, 3 months, and 6 months after the start of their first RN position:
● NGNPS (see Part I for psychometric properties)
● Critical Thinking Diagnostic (see Part I for psychometric properties)
● Global Assessment of Clinical Competency and Readiness for Practice
Additional questions used previously in NCSBN studies were added to the questionnaire to assess acclimation to the RN role. Acclimation questions focused on the length of orientation, assigned patient loads, charge nurse responsibilities, and workplace stress.
Demographic data, including practice setting and work schedule, were also collected. The evaluation tools (hereafter referred to as surveys) were accessible to the new graduates via electronic survey links. The links were sent to each new graduate via an e-mail message 6 weeks, 3 months, and 6 months after the employment start date. If the new graduate left his or her first position for any reason, the graduate was discontinued from the study.
The new graduates' managers or preceptors (hereafter referred to as managers) played an important role in Part II of the study. They were asked to evaluate the new graduate using the NGNPS, Critical Thinking Diagnostic, and Global Assessment of Clinical Competency and Readiness for Practice at the same time intervals as new graduates (6 weeks, 3 months, and 6 months after the start date). Managers had access to the surveys via an e-mail link sent to the graduates, who forwarded it to managers. A cover letter explaining the entire study and their role was included in the first link sent with the 6-week survey. Follow-up reminders to the managers were sent via e-mail to the new graduates, who were asked to forward the reminder.
In the event of nonresponses, reminders were sent via e-mail and cell phone text messages to new graduates. All communication was with the new graduate; study researchers did not directly contact managers.
New graduates and managers had a specific window of time in which they could respond and provide the study data. For the 6-week surveys, the data collection period extended +/− 2 weeks from the 6-week date. The 3-month data collection period was +/− 4 weeks from the 3-month date. The 6-month data collection period extended +/− 6 weeks from the 6-month date. Only data received within these collection periods were included in the analysis to ensure that the responses represented data from the intended time period.
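The data collection windows above can be encoded directly. The milestone offsets in days (42, 91, 182) are approximations I have assumed for 6 weeks, 3 months, and 6 months, and the start date below is hypothetical:

```python
from datetime import date, timedelta

# milestone offset in days -> half-window in weeks (+/- around the due date)
WINDOWS = {42: 2, 91: 4, 182: 6}

def in_window(start, received, milestone_days):
    """True if a survey received on `received` falls within the data
    collection window around start + milestone_days."""
    due = start + timedelta(days=milestone_days)
    half = timedelta(weeks=WINDOWS[milestone_days])
    return due - half <= received <= due + half

start = date(2013, 2, 1)  # hypothetical employment start date
```

Surveys returned outside the computed window would be excluded from analysis, mirroring the rule described above.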
Data Analysis
Survey data from the new graduates and managers were collected via an electronic survey (WorldApp KeySurvey) except for data from three participants who requested paper surveys. Survey data were downloaded into a spreadsheet and analyzed using SAS version 9.2. Data analysis procedures for the follow-up study mirrored those for Part I of the study.
Basic descriptive statistics were used for all data, and parametric and nonparametric tests were applied as appropriate for each type of data. MANOVA procedures were employed to check possible covariates (school, gender, age, ethnicity, race, nursing assistant experience, previous degree, and use of additional ATI products) for interaction effects. When the covariates were included in the MANOVA model, the Wilks' lambda value did not change substantially, and interaction effects were determined to be nonsignificant. The effect sizes displayed represent the maximum effect size calculated when comparing the means of the control, 25%, and 50% groups to each other.
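The "maximum effect size" described above can be illustrated with a short sketch: compute an effect size for each pair of groups and keep the largest absolute value. Cohen's d is used here as an assumed effect-size measure (the report does not name the specific statistic), and the rating data are hypothetical.

```python
from itertools import combinations
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d between two samples, using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

def max_effect_size(groups):
    """Largest absolute pairwise effect size among the study groups."""
    return max(abs(cohens_d(a, b))
               for a, b in combinations(groups.values(), 2))

# Hypothetical manager ratings (1-6 scale) for the three study groups
groups = {
    "control": [5.0, 5.2, 4.9, 5.1],
    "25%": [5.1, 5.3, 5.0, 5.2],
    "50%": [5.2, 5.1, 5.3, 5.0],
}
```

Reporting the maximum of the three pairwise comparisons gives a conservative upper bound on how far any two groups differ, which is why small reported values support the equivalence conclusion.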
Results
Prior to graduation, 575 participants provided contact information so they could receive follow-up surveys. A total of 375 of these participants contacted the research team with their start dates (a 65% hire rate); however, not all provided this information right away. Some new graduates provided information after the allowable survey windows. In these instances, the graduates were sent the next survey during the allowable data collection period.
Six-week follow-up surveys were e-mailed to 354 new graduates, and 328 completed surveys were returned. After excluding surveys returned beyond the data collection window and surveys from graduates who had left their first nursing position, 266 surveys were included in the analysis, a 75% response rate. The new graduates were responsible for forwarding the survey request to their managers. After excluding surveys returned outside the data collection window, 135 six-week manager surveys were included in the analysis, a 38% response rate.
Response rates improved with the 3-month and 6-month surveys. The new graduates had an 89% response rate for the 3-month survey, and managers had a 68% response rate. For the 6-month survey period, new graduates had an 86% response rate, and managers responded 66% of the time. (See Table 16.)
Table 16. Follow-Up Survey Response Rates of New Graduates and Managers
Two-thirds of the new graduates were hired as RNs in urban areas. Higher proportions of new graduates from the 50% group were working in urban areas, while the 25% group had higher proportions of new graduates working in suburban and rural areas.
More than 80% of the new graduates were working in a hospital or medical center, and 10% reported working in long-term care facilities. Of the new graduates, 27% reported working in a facility with Magnet® designation. One-third of new graduates reported working in critical care environments, and 26% reported working in medical-surgical units. There were no statistical differences in employment settings or patient-care environments by study group. (See Table F1 in Appendix F.)
Manager Demographics
Of the managers completing surveys, 61% reported being a clinical preceptor to the new graduate nurse, and 57% reported receiving formal training to be a preceptor. This group had previously been preceptors for an average of 10 new graduates.
Research Question 1
Are there differences in clinical performance (clinical competency, critical thinking, and readiness for practice) among the new graduate nurses from the three study groups when assessed by their preceptor or manager in their first RN position?
Clinical Competency
The NGNPS was used to measure clinical competence. The new graduates performed a self-assessment administered via electronic survey, and their managers completed an electronic version of this instrument. The NGNPS consists of 36 items organized into 6 subscales and rated on a scale of 1 to 6 (1=lowest rating, 6=highest).
Table 17 lists the manager ratings overall for each time period and for each study group. Self-assessment ratings provided by the new graduates are listed in Appendix F, Table F2. Although the managers performed 18 ratings over 6 months (six assessment areas at each of three time periods), few differences exist in the manager ratings, regardless of study group. The only two exceptions are the 6-week ratings of clinical knowledge, in which the 25% and 50% groups did better than the control group (p=0.017), and the 6-week critical thinking ratings for the 25% group, which were also statistically significant (p=0.037).
The clinical knowledge ratings for all new graduates were similar at each survey period: 5.11 for the 6-week period, 5.15 for the 3-month period, and 5.17 for the 6-month period. A different pattern emerged from the new graduates' self-assessments. Though there were no differences among the study groups, the new graduates gave themselves higher ratings in clinical knowledge at the end of the nursing program than in practice. The self-assessment ratings increased over time, but even after 6 months of practice, new graduates were not rating their clinical knowledge as highly as they had before graduating from their nursing programs.
For technical skills, the manager ratings increased over time. The new graduates rated themselves lower after 6 weeks than at graduation. After 6 months, however, new graduates gave themselves ratings similar to those at graduation (ratings of 4.95).
Critical Thinking
Managers and new graduates also completed the Critical Thinking Diagnostic, the same instrument used before graduation in Part I of the study. The five categories that make up the Critical Thinking Diagnostic are problem recognition, clinical decision making, prioritization, clinical implementation, and reflection.
Results of the Critical Thinking Diagnostic are similar to those of the NGNPS; new graduates from all three groups were given high ratings by their managers. Both manager and self-ratings increased over time, but the new graduates generally rated themselves lower than their managers did. Table 18 shows the manager ratings for each survey period, and Table F3 in Appendix F provides the results of the new graduate self-ratings.
Global Assessment of Clinical Competence and Readiness for Practice
At the end of the survey, managers were asked to consider the items they just completed and any other aspects of nursing care relevant to overall clinical competency and readiness for practice and then to rate the new graduate on a scale of 1 to 10 (1=among the weakest, 10=among the best). Ratings of overall competence were high for all three groups, and ratings increased between 6 weeks of practice and 6 months. There were no statistical differences among the groups. (See Table 19.)
Table 19. Manager Global Assessment of Readiness for Practice
Table 20 shows the new graduates' self-ratings of overall clinical competency and readiness for practice. As with the manager ratings, self-assessments increased as the new graduates gained experience over time. There were no statistically significant differences among the three groups.
Table 20. New Graduate Nurse Self-Ratings of Global Assessment of Readiness for Practice
Research Question 2
Are there differences among new graduates from the three study groups in acclimation to the role of the registered nurse?
Acclimation to the role of the professional nurse is multifactorial. Several concepts were chosen to assess new graduate acclimation: preparation for practice, leaving the first nursing position, charge nurse responsibilities, and workplace stress. The results are descriptive and should be interpreted cautiously, as it was not possible to control for differences in work environment or participation in nurse residency or transition-to-practice programs.
Preparation for Practice
The new graduates were asked how well their clinical experiences (traditional and simulated) prepared them for practice as an RN. Generally, the new graduates reported feeling “quite a bit prepared” or “very well prepared.” (See Table F4 in Appendix F.) After 6 weeks of working as an RN, 66% of the graduates reported they felt “quite a bit prepared” or “very well prepared” for practice based on their clinical experiences during their nursing program. The 50% group consistently reported higher levels of feeling “prepared for practice” compared with their study peers.
Left First Nursing Position
By the 6-month survey, 25 new graduates reported leaving their first nursing position; 4 left at or before the 6-week survey; another 5 left by the 3-month survey; and 18 more left by the 6-month survey. (See Table 21.) There were no statistically significant differences among the groups (p=0.578). Reasons for leaving included accepting a position with a better schedule or better compensation, accepting a position in a first-choice specialty area, and having a spouse who was relocating.
Table 21. New Graduates Who Left First Nursing Position
After 6 months of practice, 67% of new graduates were working 12-hour shifts, and 43% were working the night shift. (See Table 22.) On average, the new graduates reported caring for 8 patients per shift. When asked about the difficulty of recent patient-care assignments, 83% reported their assignments as “just right”; 5% said they were “not challenging enough”; and 12% said their patient-care responsibilities were “too challenging.” There were no statistical differences among the study groups on any workplace factors.
Table 22. New Graduate Nurse Work Schedules and Patient Loads
There was a statistically significant difference among the new graduates regarding charge nurse responsibilities. Overall, 12% had charge nurse responsibilities within the first 6 months of practice. The control group had higher rates of charge nurse responsibilities (21%) than the 25% group (13%) and the 50% group (5%) (p=0.005). (See Table 23.)
In each survey, new graduates were asked questions about stress in the workplace. The majority agreed or strongly agreed that they were experiencing stress at work, and the percentage of those reporting stress at work increased over time. The only difference in stress ratings occurred on the 3-month survey, in which 25% of the control group strongly agreed with the statement “I am experiencing stress at work” compared with 15% of the 25% group and 12% of the 50% group (p=0.03). (See Figure F1 in Appendix F.)
When new graduates were asked whether they felt overwhelmed by patient-care responsibilities, the results were similar across study groups. Of all new graduates, 22% reported “often” or “almost always feeling overwhelmed” after 6 weeks of practice. These results remained stable: On the 3-month survey, 21% reported “feeling overwhelmed,” and on the 6-month survey, 20% were “feeling overwhelmed.” There were no statistically significant differences among the study groups. (See Figure F2 in Appendix F.)
The third question regarding stress in the workplace asked new graduates how often in the last week they felt expectations of them were unrealistic. All new graduates reported the lowest levels of unrealistic expectations on the 6-week survey. The highest levels of unrealistic expectations were reported on the 3-month survey (12.7%). This level slightly decreased on the 6-month survey (11.4%). Again, there were no statistically significant differences among groups for any of the survey periods. (See Figure F3 in Appendix F.)
Limitations
All studies have some degree of limitation. Although students were randomly assigned to the study groups, the schools participating in the study were not randomly selected. The chosen schools had an interest in using simulation to educate nursing students, and they had a simulation laboratory and the equipment for the high volume of simulation required for the study. Not all schools may be prepared to begin or increase their simulation programs with the aggressive level of simulation used in this study.
The preceptors and clinical instructors in Part I of the study and the managers in Part II were not blinded to the study group to which the students or new graduates were assigned. Ratings may therefore be biased by the raters' personal feelings about traditional clinical experiences and simulation experiences. During the study, some students reported that nurses in the clinical environment told study participants that hands-on clinical experiences were superior for teaching students the role of the nurse. Clinical instructor attitudes regarding traditional clinical and simulation experiences were not measured; therefore, it cannot be determined to what extent these attitudes may have influenced the results.
Another limitation was the distribution of the End-of-Program Surveys to the clinical preceptors at schools with a capstone course. Capstone students were responsible for providing the survey to their preceptors. Likewise, in Part II of the study, new graduates were responsible for forwarding the electronic survey links to their managers. It is possible that weaker students and new graduate nurses did not provide the surveys to their preceptors and managers, and therefore the results may reflect the ratings of stronger participants.
Discussion
There were no significant differences among study groups regarding end-of-program nursing knowledge, clinical competency, or overall readiness for practice. NCLEX pass rates were statistically equivalent, and managers gave all new graduates similar ratings in critical thinking, clinical competency, and overall readiness for practice. All evaluative measures produced the same results: Educational outcomes were equivalent when up to 50% of traditional clinical experience in the undergraduate nursing program was replaced by simulation.
Although all studies have limitations, this study provides strong evidence supporting the use of simulation as a substitute for up to 50% of traditional clinical time and makes a substantial contribution to the literature in both nursing regulation and education. The strengths of the study were the longitudinal design, the large sample size, and the use of multiple data collection sites. The diversity of the sites is a strength, as the results reflect both associate and baccalaureate programs, in urban and rural areas, from across the country. This variety mirrors that of nursing programs overall and adds to the generalizability of the results. The large sample provided adequate power to detect statistical significance; however, it also produced statistical significance with nominal differences in some instances. Therefore, the results need to be interpreted carefully.
Another strength of the study was the methodology employed to conduct the simulation experiences. A designated simulation team in each participating nursing program was taught theory-based simulation and debriefing methods. We believe this was integral to the positive outcomes of the study and is essential for any simulation experience.
The consistent findings across the two time periods (educational period and early employment period) in two settings (academic setting and practice setting) with two types of evaluators (educators and managers) give further credence to the findings of this study.
The demographic characteristics of the three study groups were consistent, but more students in the 50% group dropped out of the study. These students tended to be older, male, or members of a minority population. However, the students in the 25% and 50% groups who remained in the study rated their simulation experiences highly, as indicated on the CLECS. More research may be needed to ascertain whether simulation is suitable for all students.
Other recent research studies on educational outcomes when simulation replaces a portion of traditional clinical experiences have reported findings similar to this study.
conducted two multi-site, randomized, controlled trials in which 25% of clinical hours were replaced with standardized patient simulation experiences in physiotherapy programs in Australia. Both studies found no differences in clinical competency evaluations by an independent examiner when simulation replaced clinical experiences.
Meyer evaluated student performance when 25% of pediatric clinical hours were replaced with simulation. At the end of the course, no differences in clinical evaluation scores existed between students who experienced simulation and those who did not. Meyer's study used clinical faculty to assess the students on an ongoing basis in the clinical setting.
studied the effects of simulation “dose” when students received no simulation or 30%, 50%, or 70% simulation in place of clinical experiences in a nursing fundamentals course. The sequence of the delivery of simulation and direct-care experiences was also studied. They found no differences among the groups on standardized assessments of critical thinking or nursing knowledge. The 30% group that experienced simulation at the end of the course had significantly lower clinical judgment scores than the other students. The investigators noted that the small sample size may have contributed to the otherwise nonsignificant results, although the current study validates their findings.
The current study found that scores on standardized end-of-program comprehensive nursing knowledge assessments did not differ among the study groups. Even course-by-course results showed few meaningful differences among groups, with all students achieving high scores.
Similarly, students with more simulated clinical experiences had clinical competency ratings comparable to those of students who spent the majority of their clinical hours in the traditional setting. Some nominal differences were found; for example, the control group received slightly higher ratings in the final clinical assessments of most courses. However, the end-of-program ratings made by the last clinical preceptor conducting a summative evaluation indicate no significant differences in critical thinking, clinical competency, or overall readiness for practice among the three groups. These results indicate that the skills learned in simulation transfer to the clinical setting. Transfer of learning from simulation to clinical practice has been a documented concern for many, and the results of this study show that learning that occurs in simulation does transfer to the clinical setting.
All three groups achieved similar passing rates on the NCLEX examination. Not only were the rates comparable, they were also above the 2013 national average passing rate of 80%.
The follow-up surveys in Part II completed by the managers of new graduates support the findings from Part I: All three groups were well prepared for clinical practice. There were no meaningful differences among the groups in critical thinking, clinical competency, and overall readiness for practice as rated by managers after 6 weeks, 3 months, and 6 months of practice. These results come 6 years after
surveyed nurse educators and nurse managers about the preparation of new graduates. They found that 90% of nurse educators believed their new graduates were prepared for clinical practice, but only 10% of managers believed new graduates were prepared for the reality of clinical practice. Using the same instrument, the clinical instructors and the managers rated the study participants similarly, agreeing that they were prepared for professional practice.
At the end of their nursing program, all students rated themselves highly on clinical competence, critical thinking, and readiness for practice. The 50% group rated themselves statistically significantly higher than their peers, indicating the group with the most simulation experience had the most self-confidence. The simulation literature documents this as well (Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national multi-site, multi-method study). Additionally, the 50% group more often reported feeling “very well prepared” for practice, another indicator of self-confidence for those entering the nursing profession.
All the study findings indicate that students were able to adapt to the method with which they were taught. The 25% and 50% simulation groups had experiences in both the traditional clinical and simulation environments and rated both highly in meeting their learning needs. Students in the control group spent the majority of their time in the traditional clinical setting and rated it better for meeting their learning needs. Students from all groups were rated highly by their clinical instructors in weekly as well as end-of-program clinical competency assessments. All groups scored high on nursing knowledge assessments throughout the program and on the end-of-program comprehensive examination. NCLEX passing rates were comparable among the groups. Manager ratings of clinical competence, critical thinking, and overall readiness for practice were consistent with Part I findings that there were no differences in outcomes among the three groups.
Conclusion
This study provides substantial evidence that up to 50% simulation can be effectively substituted for traditional clinical experience in all prelicensure core nursing courses under conditions comparable to those described in the study. These conditions include faculty members who are formally trained in simulation pedagogy, an adequate number of faculty members to support the student learners, subject matter experts who conduct theory-based debriefing, and equipment and supplies to create a realistic environment. Nursing programs should assure BONs that they are committed to the simulation program and have enough dedicated staff members and resources to maintain it on an ongoing basis.
A most important way to ensure high-quality simulation is to incorporate best practices into a simulation program; these best practices include terminology, professional integrity of the participant, participant objectives, facilitation, the facilitator, the debriefing process, and participant assessment and evaluation.
Expanding on the current study to explore other aspects of simulation is needed. The ratio of traditional clinical hours to simulation hours should be studied further. The current study used a 1:1 ratio, but other proportions may be effective as well. Research that studies active simulation participation for longer periods of time is needed. For example, in this study, the student was often in an active nurse role only once a day for 15 to 30 minutes; the rest of the time, the student was an active observer. The effects of high-dose simulation that engages the student as an active participant throughout the clinical time period might indicate further uses for simulation and should be studied.
This study makes a substantial contribution to nursing and the scientific literature, which has lacked a large-scale, multisite study of simulation across the prelicensure nursing curriculum. This analysis provides valuable data for boards of nursing, which often receive requests from nursing programs to allow time and activities in a simulation lab to be substituted for clinical hours. The better regulators understand simulation and its impact on nursing education, the more effectively they can develop prelicensure education requirements, guide programs, and develop policy at the state level. In addition, this study provides important information for nurse educators in determining the best approaches to teaching students and shaping the future of nursing education.
The most significant finding of this study is the effectiveness of two types of educational methods: traditional clinical and simulation experiences. In both environments, when structure, an adequately prepared faculty with appropriate resources, dedication, foresight, and vision are incorporated into the prelicensure nursing program, excellent student outcomes are achieved.
Acknowledgments
The authors would like to thank the following for their contributions to the National Simulation Study:
The Deans and Directors at the 10 participating schools for their enthusiasm and commitment to the project and their shared vision of its importance.
The study team members for their dedication to this project on a daily basis.
Nursing program team leaders: Kristen Zulkosky, PhD, RN, CNE, Kay Buchanan, MSN, RN, Donna Enrico, MBA, BSN, RN, Pat Riede, MSN, MBA, RN, CNE, Henry Henao, MSN, BSN, BBA, Linda Fluharty, MSN, RN, Rochelle Quinn, MSN, RN, Joyce Vazzano, MS, RN, CRNP, Sandy Swoboda, MS, RN, FCCM, Maggie Neal, PhD, RN, Pam Anthony, MSN, RN, Erin McKinney, MN, RNC, Susan Poslusny, PhD, RN, Kathy Masters, DNS, RN, and Kevin Stevens, MSN, RN, for their leadership and study management.
Kim Leighton, PhD, MSN, Mary Tracy, PhD, RN, Martha Todd, MS, APRN, Julie Manz, MS, RN, Kim Hawkins, PhD, APRN, Maribeth Hercinger, PhD, RN, BC, and The Advisory Board Company— for use of their evaluation instruments.
Lou Fogg, BS, PhD, Brigid Lusk, PhD, RN, and Julie Zerwic, PhD, RN, FAHA, FAAN, for ongoing review of the data and safety determinations.
NCSBN staff: Nancy Spector, PhD, RN, FAAN, for assistance with the design of the study; Jo Silvestre, MSN, RN, for assistance planning and conducting pilot study work; and Laura Jarosz, Renee Sednew, Esther White, and Lindsey Gross for administrative assistance.
The hundreds of clinical instructors and preceptors who completed ratings on the students and most importantly, the students who participated in the study—without them, this study would not have been possible.
References
Alinier, G., Hunt, B., & Gordon, R. Determining the value of simulation in nurse education: Study design and initial results.
Designing and implementing models for the innovative use of simulation to teach nursing care of ill adults and children: A national multi-site, multi-method study.