Abstract

Introduction
The importance of clinical skills training in traditional Korean medicine education is increasingly emphasized. Because clinical skills tests are high-stakes examinations that determine success in the national licensing examination, it is essential to develop reliable, multifaceted methods for analyzing them in actual educational settings. In this study, we applied multifaceted validity evaluation methods to the results of a cardiopulmonary resuscitation (CPR) assessment module to confirm the applicability and effectiveness of these methods.
Methods
We analyzed the multifaceted validity of a CPR clinical skills test administered in a clinical education setting over the past three years, using internal consistency analysis, factor analysis, generalizability theory (G-study and D-study), ANOVA, Kendall's tau, and descriptive statistics.
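As an illustration of this analysis pipeline, the following is a minimal Python sketch of the non-G-theory analyses named above (Cronbach's alpha for internal consistency, exploratory factor analysis of the rubric items, one-way ANOVA across raters, and Kendall's tau between paired rater scores). All data here are simulated and all variable names (items, rater, totals) are hypothetical placeholders, not the study's actual data or design.

import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_examinees, n_items = 120, 10

# Hypothetical rubric scores: rows = examinees, columns = rubric items (0-2 scale).
items = pd.DataFrame(
    rng.integers(0, 3, size=(n_examinees, n_items)).astype(float),
    columns=[f"item{i + 1}" for i in range(n_items)],
)
rater = rng.integers(0, 4, size=n_examinees)  # hypothetical rater assignment

# Internal consistency: Cronbach's alpha from item variances and total-score variance.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var().sum() / items.sum(axis=1).var())

# Exploratory factor analysis of the rubric (a 2-factor solution is assumed here).
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
loadings = fa.components_.T  # items x factors loading matrix

# One-way ANOVA: do mean total scores differ across raters?
totals = items.sum(axis=1)
f_stat, p_val = stats.f_oneway(*[totals[rater == g] for g in np.unique(rater)])

# Kendall's tau between two raters scoring the same examinees
# (simulated as the same latent totals plus independent rater noise).
rater_a = totals + rng.normal(0, 2, n_examinees)
rater_b = totals + rng.normal(0, 2, n_examinees)
tau, tau_p = stats.kendalltau(rater_a, rater_b)

print(f"alpha={alpha:.3f}, ANOVA F={f_stat:.2f} (p={p_val:.3f}), tau={tau:.3f}")

Low alpha, unstable loadings, a significant rater effect in the ANOVA, or low tau between raters would each flag the kinds of problems reported in the Results below.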
Results
Factor analysis and internal consistency analysis showed that the evaluation rubric had an unstable factor structure and low internal concordance. The G-study showed that a large share of the measurement error in the clinical skills assessment was attributable to the rater facet and to residual error. The D-study showed that the rater error variance must be reduced substantially for the assessment to reach acceptable reliability. ANOVA and Kendall's tau confirmed that rater heterogeneity was a problem.
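For readers unfamiliar with generalizability theory, the quantities behind these findings can be written out for the standard single-facet person-by-rater (p x r) design; this is the textbook formulation, not necessarily the exact design of the study, which may have involved additional facets such as tasks or stations.

% G-study: decomposition of an observed score and its variance.
X_{pr} = \mu + \nu_p + \nu_r + \nu_{pr,e},
\qquad \sigma^2(X_{pr}) = \sigma^2_p + \sigma^2_r + \sigma^2_{pr,e}

% D-study: projected reliability of a score averaged over n_r raters.
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e}/n_r}
\quad \text{(generalizability coefficient, relative decisions)}

\Phi = \frac{\sigma^2_p}{\sigma^2_p + (\sigma^2_r + \sigma^2_{pr,e})/n_r}
\quad \text{(dependability coefficient, absolute decisions)}

Large rater variance \sigma^2_r and residual variance \sigma^2_{pr,e} relative to the person variance \sigma^2_p depress both coefficients; increasing n_r or reducing rater variance (for example, through rater training and rubric refinement) is the lever the D-study quantifies.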
Discussion and Conclusion
The validity of clinical skills tests should be evaluated and managed continuously in two stages: during test development and during actual administration. This study presented specific methods for analyzing the validity of clinical skills training and testing in actual educational settings, and it may contribute to a foundation for competency-based, evidence-based education in practical clinical training.
Acknowledgement
This work was supported by a 2-Year Research Grant of Pusan National University.