Automated Assessment Methodology for Games & Simulations


As computer-based instructional games and simulations become increasingly ubiquitous, and the technology that drives these environments grows ever more sophisticated, the pedagogical models for assessing performance within them must evolve accordingly. Based on an automated assessment methodology developed at UCLA/CRESST (Koenig, Iseli, Wainess, & Lee, 2013), this presentation focuses on the design and development of an automated assessment engine slated for use at the U.S. Navy's Surface Warfare Officers School (SWOS) in Newport, RI. The assessment engine interfaces with a pre-existing simulator (the Conning Officer Virtual Environment) as well as a computer-based intelligent tutoring system. Collectively, these tools are used to train naval officers in ship piloting and maneuvering techniques. Because the scenarios are emergent in nature (i.e., student responses to changing simulator variables such as current, wind, visibility, and the number of ships in the vicinity), there is no single pathway to success. And although discrete actions can be observed and scored by the system, it is often a student's higher-order, more latent skills that are of interest to instructors (for example, situational awareness or maneuvering proficiency in rough seas). To get at these measures, instructors typically rely on subjective interpretations of the discrete scores of observable actions. This presentation will show how, using Bayesian networks, the CRESST automated assessment engine is able to replicate and match the subjective judgments of SWOS expert instructors and make probabilistic predictions of a student's latent skill mastery.
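The core inference step described above can be illustrated with a minimal sketch: a two-layer Bayesian network (naive-Bayes structure) in which one latent skill node explains several discretely scored observable actions, and the posterior probability of skill mastery is updated by Bayes' rule as evidence arrives. The action names, prior, and conditional probabilities below are purely hypothetical placeholders, not the CRESST engine's actual model.

```python
# Illustrative sketch only: posterior over a latent skill given discrete
# observed actions, using a naive-Bayes network structure. All node names
# and probabilities are hypothetical, not taken from the CRESST engine.

PRIOR_MASTERY = 0.5  # P(skill mastered) before any evidence is observed

# P(action performed correctly | mastery state), one entry per observable
COND = {
    "maintains_safe_speed":  {"mastered": 0.90, "not_mastered": 0.40},
    "corrects_for_current":  {"mastered": 0.85, "not_mastered": 0.30},
    "radio_call_timeliness": {"mastered": 0.80, "not_mastered": 0.50},
}

def posterior_mastery(observations):
    """Return P(mastered | observed action outcomes) via Bayes' rule.

    observations: dict mapping action name -> bool (performed correctly).
    """
    like_m = PRIOR_MASTERY          # prior * likelihood, mastered branch
    like_nm = 1.0 - PRIOR_MASTERY   # prior * likelihood, not-mastered branch
    for action, correct in observations.items():
        p_m = COND[action]["mastered"]
        p_nm = COND[action]["not_mastered"]
        like_m *= p_m if correct else (1.0 - p_m)
        like_nm *= p_nm if correct else (1.0 - p_nm)
    # Normalize so the two branches sum to one
    return like_m / (like_m + like_nm)

# Example: a student succeeds on two observed actions and fails one
p = posterior_mastery({
    "maintains_safe_speed": True,
    "corrects_for_current": True,
    "radio_call_timeliness": False,
})
```

In a full network, the latent node would feed additional latent skills (e.g., an overall situational-awareness node) and the conditional probability tables would be elicited from expert instructors or fit to their scored judgments; the update rule, however, is the same product-and-normalize computation shown here.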

Koenig, A., Iseli, M., Wainess, R., & Lee, J. J. (2013). Assessment methodology for computer-based instructional simulations. Military Medicine.




Technical Level


  • Data Collection
  • Analytics and Data Analysis