Distractor Suites: A Method for Developing Answer Choices in Automatically Generated Multiple-Choice Items

Authors

  • A. E. Kosh, Senior Research Scientist, Edmentum, 5600 W 83rd St #300, Bloomington, MN 55437

Keywords

Automatic Item Generation, Mathematics, Test Development

Abstract

In recent years, Automatic Item Generation (AIG) has increasingly shifted from theoretical research to operational implementation, a shift that has raised unforeseen practical challenges. In particular, generating high-quality answer choices is difficult: the choices must fit together plausibly for every possible item stem, and writing computer specifications for each answer choice is labor intensive. In this paper, using the context of K-12 mathematics items, I give an overview of the unique challenges associated with developing answer choices for automatically generated items and then illustrate a methodological innovation, the distractor suite, that can improve the quality of automatically generated answer choices while reducing the labor required to write and review answer choice specifications.
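
The paper's actual specification language is not reproduced on this page, but as a rough illustration of the general idea, the sketch below shows how a single reusable suite of error-based distractor rules might be applied to every stem an item model generates, rather than writing a separate specification for each answer choice. The Python code, the rule set, and the toy addition item model are all illustrative assumptions, not the method described in the article.

```python
# Hypothetical sketch of a "distractor suite": one reusable set of
# error-based distractor rules shared across all stems of an item model.
import random
from dataclasses import dataclass

@dataclass
class Item:
    stem: str
    key: int
    distractors: list

# Each rule models a plausible student error for a two-number addition stem.
DISTRACTOR_SUITE = [
    lambda a, b: a - b,           # subtracts instead of adds
    lambda a, b: a + b + 10,      # regrouping / place-value error
    lambda a, b: int(f"{a}{b}"),  # concatenates the two numbers
    lambda a, b: a * b,           # multiplies instead of adds
]

def generate_item(a: int, b: int, n_distractors: int = 3) -> Item:
    """Generate one addition item, drawing distractors from the shared suite."""
    key = a + b
    candidates = []
    for rule in DISTRACTOR_SUITE:
        value = rule(a, b)
        # Discard values that collide with the key or with each other,
        # so the answer choices remain distinct for any values of a and b.
        if value != key and value not in candidates and value >= 0:
            candidates.append(value)
    return Item(stem=f"What is {a} + {b}?", key=key,
                distractors=candidates[:n_distractors])

if __name__ == "__main__":
    a, b = random.randint(10, 49), random.randint(10, 49)
    print(generate_item(a, b))
```

In this sketch, the suite is written and reviewed once, and the collision check is one simple way to keep the generated choices plausible across all stems; the article develops the idea in the context of operational K-12 mathematics item generation.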

Published

2021-03-03

How to Cite

Kosh, A. E. (2021). Distractor Suites: A Method for Developing Answer Choices in Automatically Generated Multiple-Choice Items. Journal of Applied Testing Technology, 22(1), 12–22. Retrieved from http://jattjournal.net/index.php/atp/article/view/155880


