As computers become more deeply integrated into learning at work, school and home, administrators are increasingly seeking assessment solutions that make full use of this technological potential. Paper-and-pencil testing is clearly insufficient to meet the expectations of today's and tomorrow's educators and human resources professionals. Computer-based assessment, or e-testing, delivers substantial benefits in flexibility, efficiency and effectiveness, particularly through time and cost savings in grading and instant feedback for test takers. Internet delivery removes barriers of location and time from the assessment process, automated marking brings improved objectivity and reliability, and digital administration enables advanced reporting and analysis capabilities.
New customization options in exam creation software provide higher levels of engagement than ever before through innovative methods of interaction. Better assessment tools help to shape institutional practice and widen participation and enthusiasm for involvement, while embracing technological advances enhances an institution's reputation as a leader in the field. Personalizing testing to individual requirements can mitigate the potentially distorting influence of external factors related to distance, disability, illness or other commitments.
Computer Adaptive Testing (CAT) is one such tailored assessment: it successively selects questions, or test items, based on the answers to previous questions in order to sharpen the precision of the exam. By adapting to the test taker's ability level, fewer items are generally required to arrive at equally accurate scores; an adaptive test can typically be half the length of a traditional test while achieving equal or greater precision. This saves time for both test maker and test taker, although a much larger pool of questions must be developed for these tests, so a large test-taking population is necessary for CAT to be financially viable.
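The selection loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration: the item bank, the closest-difficulty selection rule and the halving step size are simplifications chosen for clarity, whereas production CAT engines estimate ability with item response theory models and maximum-likelihood or Bayesian updates.

```python
def run_adaptive_test(item_bank, answer_fn, n_items=5):
    """Pick each item near the current ability estimate, then update
    the estimate from the response (toy binary-search-style update)."""
    ability = 0.0          # start at an assumed average ability
    step = 1.0             # update size, halved after each item
    asked = []
    for _ in range(n_items):
        # choose the unasked item whose difficulty is closest to the estimate
        remaining = [item for item in item_bank if item not in asked]
        item = min(remaining, key=lambda i: abs(i["difficulty"] - ability))
        asked.append(item)
        # move the estimate up on a correct answer, down on an incorrect one
        ability += step if answer_fn(item) else -step
        step /= 2          # narrow the adjustment as confidence grows
    return ability

# Simulated test taker with true ability 1.0: answers correctly
# whenever the item is no harder than that ability.
bank = [{"id": k, "difficulty": d}
        for k, d in enumerate([-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0])]
estimate = run_adaptive_test(bank, lambda item: item["difficulty"] <= 1.0)
```

Even with this crude update rule, the estimate homes in on the simulated ability after only five items, which is the intuition behind CAT's shorter tests: each question is chosen to be maximally informative about the current estimate.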
Automated Essay Scoring
A major challenge for computerized testing is more subjective material such as essays. The phenomenon of MOOCs (Massive Open Online Courses) has succeeded largely on the strength of technical courses with easily quantifiable material. As online learning becomes more normalized, social sciences, arts and language courses are increasingly prevalent, requiring innovative solutions for assessment. Some have experimented with a calibrated peer review system, in which students are trained on a scoring method using practice essays, multi-source feedback and external tools.
Automated Essay Scoring (AES) is an emerging practice that has implications for the types of papers that can be scored, the consistency of feedback given and the level of instructor intervention. Most AES applications build statistical models that predict scores based on algorithms developed from initial instructor input. After an instructor has graded 100 essays for a particular writing assignment, the software can predict scores for new submissions from textual features that correlate with the human ratings. This is an emerging field and results have so far been mixed, but they suggest that the vast potential of test generator software is only beginning to be realized.
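The train-then-predict idea behind AES can be sketched as a tiny regression. Everything here is a toy assumption for illustration: the two features (word count and average word length), the three "instructor-graded" essays and the plain gradient-descent fit stand in for the hundreds of graded samples and far richer linguistic features that real AES systems rely on.

```python
def features(essay):
    """Map an essay to a toy feature vector: bias, word count, avg word length."""
    words = essay.split()
    avg_len = sum(len(w) for w in words) / len(words)
    return [1.0, float(len(words)), avg_len]

def fit(essays, scores, lr=1e-4, epochs=5000):
    """Least-squares fit of feature weights by gradient descent (pure Python)."""
    samples = [features(e) for e in essays]
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, scores):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def predict(w, essay):
    """Score a new essay with the learned weights."""
    return sum(wi * xi for wi, xi in zip(w, features(essay)))

# Hypothetical instructor-graded training data (score out of 5).
graded = [
    ("the cat sat", 1),
    ("the quick brown fox jumps over the lazy dog", 3),
    ("assessment software can score essays by learning statistical "
     "patterns from instructor graded examples", 5),
]
essays, scores = zip(*graded)
w = fit(essays, scores)

long_pred = predict(w, "longer essays with more developed ideas "
                       "tend to earn higher marks here")
short_pred = predict(w, "too brief")
```

On this toy data the model simply learns that longer essays earned higher instructor scores, which also hints at a known weakness of feature-based AES: the model rewards whatever surface features happen to correlate with the human ratings it was calibrated on.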