Study on the Academic Competency Assessment of Herbology Test using Rasch Model

Article information

J Korean Med. 2022;43(2):27-41
Publication date (electronic) : 2022 June 1
doi: https://doi.org/10.13048/jkm.22017
1School of Korean Medicine, Pusan National University
2Department of Psychology, Kyungsung University
3Department of Internal Medicine, College of Korean Medicine, Dongguk University
4Division of Police Administration, College of Law and Police, Dongguk University
Correspondence to: Hyungwoo Kim, School of Korean Medicine, Pusan National University, 49, Busandaehak-ro, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, 50610, South Korea, Tel: +82-51-510-8425, Email: kronos7@pusan.ac.kr

H Chae and SJ Lee contributed equally to this work

Received 2022 March 31; Accepted 2022 May 6.

Abstract

Objectives

An objective analysis of academic competency is needed to incorporate Computer-based Testing (CBT) into the education of traditional Korean medicine (TKM). However, Item Response Theory (IRT), which analyzes latent competency, has not been introduced into this field because of the difficulty of its calculation, interpretation, and utilization.

Methods

The current study analyzed the responses of 390 students over eight test years to a 14-item herbology test using the Rasch model, and the characteristics of the test and its items were evaluated with characteristic curves, information curves, difficulty parameters, academic competency, and test scores. The academic competency of the students across gender and test years was presented with scale characteristic curves, Kernel density maps, and a Wright map, and examined with t-tests and ANOVA.
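As a rough illustration of the Rasch estimation described above (a sketch only, not the authors' actual analysis pipeline), person abilities and item difficulties can be obtained from a students-by-items 0/1 response matrix by joint maximum likelihood. The function `fit_rasch`, the array `responses`, and the simulated toy data below are hypothetical stand-ins.

```python
import numpy as np

def fit_rasch(responses, n_iter=50):
    """Joint maximum likelihood estimation of a Rasch model (sketch)."""
    x = np.asarray(responses, dtype=float)
    # Persons with all-correct or all-wrong answers have no finite ML estimate;
    # this sketch simply drops them (dedicated IRT software handles them explicitly).
    keep = (x.sum(axis=1) > 0) & (x.sum(axis=1) < x.shape[1])
    x = x[keep]
    n_persons, n_items = x.shape
    theta = np.zeros(n_persons)   # person abilities
    beta = np.zeros(n_items)      # item difficulties
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))  # P(theta) per person/item
        info = p * (1.0 - p)
        # Newton-Raphson step for each person ability theta_i
        theta += (x - p).sum(axis=1) / info.sum(axis=1)
        theta = np.clip(theta, -6.0, 6.0)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
        info = p * (1.0 - p)
        # Newton-Raphson step for each item difficulty beta_j
        beta += (p - x).sum(axis=0) / info.sum(axis=0)
        beta -= beta.mean()       # anchor the scale: mean item difficulty = 0
    return theta, beta

# Toy usage with simulated data standing in for the 390 x 14 herbology responses.
rng = np.random.default_rng(0)
true_theta = rng.normal(1.5, 1.0, 390)
true_beta = rng.normal(-1.0, 1.0, 14)
prob = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_beta[None, :])))
responses = (rng.random((390, 14)) < prob).astype(int)
theta_hat, beta_hat = fit_rasch(responses)
```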

Results

The item, test, and ability parameters estimated with the Rasch model provided reliable information on academic competency, as well as organized insights into students, the test, and its items that are not available from a test score calculated by summing item scores. The test showed acceptable validity for analyzing academic competency, but the Wright map revealed some items whose difficulty parameters should be revised. The gender difference was not distinctive; however, the differences between test years were evident in the Kernel density map.

Conclusion

The current study analyzed responses to the herbology test with the Rasch model to measure academic competency in TKM education, and suggested a structured analysis for competency-based teaching in the e-learning era. It provides a foundation for the learning analytics essential for self-directed learning and competency-adaptive learning in TKM.

Fig. 1

Characteristic Curve and Information Curve for (A and B) Items and (C) Test.

The solid black line represents the Characteristic Curve and the dashed red line the Information Curve of the items and the test.
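For orientation, under the Rasch model the plotted curves have the standard closed forms, where θ is the latent competency and β_i the difficulty of item i:

$$P_i(\theta)=\frac{\exp(\theta-\beta_i)}{1+\exp(\theta-\beta_i)},\qquad I_i(\theta)=P_i(\theta)\,\bigl(1-P_i(\theta)\bigr),$$

and the test-level curves are obtained as sums over the 14 items:

$$\mathrm{TCC}(\theta)=\sum_{i=1}^{14}P_i(\theta),\qquad \mathrm{TIC}(\theta)=\sum_{i=1}^{14}I_i(\theta).$$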

Fig. 2

Scale Characteristic Curves for (A) Item 12, (B) Item 13, and (C) the Test in the Years 2012, 2016, and 2018 Compared to Those of 2011.

The solid black line represents the specific test year (2012, 2016, or 2018) and the dashed red line the reference year 2011.

Fig. 3

The Kernel Density Maps Representing the Prevalence of Students in Relation to Academic Competency in (A) All Students, (B) Male and Female Students, and (C) the Eight Test Years (2011–2018).

Fig. 4

The Wright Map Showing Person Density in Relation to Item Difficulty in (A) All, (B) Male, and (C) Female Students.

The red box on the left represents person density and the black marks on the right item difficulty.
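As an illustration only (not the code behind the published figure), a Wright map of this kind can be assembled from the hypothetical `theta_hat` and `beta_hat` estimates of the fitting sketch above, with person density on the left and item difficulties on the right of a shared logit scale.

```python
import matplotlib.pyplot as plt

fig, (ax_person, ax_item) = plt.subplots(1, 2, sharey=True, figsize=(6, 4))
# Left panel: distribution of person abilities on the logit scale.
ax_person.hist(theta_hat, bins=20, orientation='horizontal', color='red')
ax_person.invert_xaxis()
ax_person.set_title('Persons')
ax_person.set_ylabel('Logit scale (ability / difficulty)')
# Right panel: item difficulties marked on the same logit scale.
ax_item.scatter([0] * len(beta_hat), beta_hat, marker='_', s=400, color='black')
ax_item.set_xticks([])
ax_item.set_title('Items')
plt.tight_layout()
plt.show()
```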




Table 1

Description and Example of Item Response Theory Terms Used in Current Study

| Term | Description | Example in current study |
|---|---|---|
| Subject | The entity with a unique, latent ability or trait, to be tested with a clinical examination or academic evaluation. | 390 students; y-axis (density) of Figure 3; left side (density) of Figure 4 |
| Ability, Theta (θ), or Competency | The unique, latent ability (θ) of a subject, rated as a continuous variable ranging from −4 to +4. Academic competency is the major interest of Item Response Theory (IRT) in the current study and can be used for the absolute evaluation of academic achievement. | Table 2 and Figure 2; x-axis of Figures 1, 2, and 3 |
|   P(θ) and Characteristic Curve | P(θ) is a logistic probability function built from the item difficulty, discrimination, and guessing parameters for estimating latent ability. The Characteristic Curve is an illustrative figure for the intuitive understanding of P(θ) in IRT. | P(θ) used to draw the ICC of an item (Figure 1-A) and the TCC of the test (Figure 1-C) |
|   True Score or Expected Value | The score representing the latent ability of a subject, estimated using P(θ). | y-axis of Figure 1-C; y-axis of Figure 2 |
|   Test Score | The sum of item scores, as used for relative evaluation and Classical Test Theory (CTT). | Table 2 |
| Item | The basic unit, consisting of a question and answer(s), for measuring competency or computing the test score. | The 14 items used in the herbology test |
|   Item Characteristic Curve (ICC) | A curve representing the probability P(θ) of a correct answer to a specific item as a function of the subject's ability. | Solid black line of Figure 1-A |
|   Difficulty (β) | Describes how difficult an item is: the ability level at which the probability of a correct response reaches 0.5. The difficulty parameter of IRT is negatively correlated with that of CTT. | Black marks on the right side of Figure 4 |
|   Discrimination (α) | The slope of the ICC at the point of 0.5 probability, which measures the discriminating capability of an item. The Rasch model fixes the discrimination parameter at 1.0. | Fixed at 1 in the Rasch model |
|   Item Information Curve (IIC) | Shows the amount of information yielded by an item at a specific ability level; also used to gauge the reliability of the item. | Dashed red line of Figure 1-A |
| Test | The collected body of items for analyzing academic competency. | Herbology test consisting of 14 items |
|   Test Characteristic Curve (TCC) | A curve representing the estimated test score as a function of the subject's ability. | Solid black line of Figure 1-C |
|   Test Information Curve (TIC) | Shows the amount of information yielded by the test at a specific ability level; also used to gauge the reliability of the test. | Dashed red line of Figure 1-C |
| Visualization of test analysis | A figure presenting the results of an IRT analysis for intuitive understanding. | |
|   Scale Characteristic Curve | A figure showing the estimated score corresponding to a specific ability level; useful for comparing the estimated scores of several groups at a given ability. | Figure 2 |
|   Kernel Density Map | A figure showing the density of the population corresponding to a specific ability; useful for an intuitive understanding of the distribution of subjects across ability levels. | Figure 3 |
|   Wright Map or Item-Person Map | A figure with two parts illustrating whether the item difficulties are appropriate for the abilities of the subjects: the left box shows the distribution of subjects, and the right box the item difficulty parameters. | Figure 4 |

Table 2

Test Score and Academic Competency According to Sex and Test Year

| Group | Test Score (CTT) | Statistics | Academic competency (IRT) | Statistics |
|---|---|---|---|---|
| Total (n=390) | 10.72±0.11 | | 1.71±0.06 | |
| Sex | | | | |
|   Male (n=196) | 10.61±0.15 | T=0.909, p=0.341 | 1.64±0.08 | T=1.424, p=0.233 |
|   Female (n=194) | 10.82±0.16 | | 1.78±0.09 | |
| Year | | | | |
|   2011 (n=44) | 10.20±0.35 | F=5.029, p<0.001 (2018>2011, 2012, 2013, 2014, 2016; 2015>2012) | 1.46±0.18 | F=6.376, p<0.001 (2018>2011, 2012, 2013, 2014; 2015>2012) |
|   2012 (n=52) | 9.98±0.25 | | 1.25±0.11 | |
|   2013 (n=47) | 10.15±0.30 | | 1.36±0.14 | |
|   2014 (n=51) | 10.61±0.24 | | 1.57±0.12 | |
|   2015 (n=52) | 11.29±0.27 | | 1.97±0.15 | |
|   2016 (n=47) | 10.64±0.35 | | 1.73±0.19 | |
|   2017 (n=48) | 10.81±0.36 | | 1.81±0.19 | |
|   2018 (n=49) | 11.98±0.25 | | 2.49±0.17 | |
* Bold represents the significant difference found only in the Test Score (the post-hoc comparison 2018>2016).
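As a sketch of the group comparisons reported in Table 2 (not the authors' analysis script), the sex difference and year differences in academic competency can be tested with standard SciPy routines; the DataFrame `df` and its columns `competency`, `sex`, and `year` are assumed names.

```python
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame):
    # Independent-samples t-test: male vs. female academic competency.
    male = df.loc[df['sex'] == 'M', 'competency']
    female = df.loc[df['sex'] == 'F', 'competency']
    t_stat, t_p = stats.ttest_ind(male, female)

    # One-way ANOVA of academic competency across the eight test years.
    groups = [g['competency'].to_numpy() for _, g in df.groupby('year')]
    f_stat, f_p = stats.f_oneway(*groups)

    # Post-hoc pairwise comparisons between years (e.g. Tukey HSD from
    # statsmodels) would identify which specific years differ; the exact
    # post-hoc procedure used in the study is not specified here.
    return (t_stat, t_p), (f_stat, f_p)
```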