Automatic Essay Marking
The absolute worst experience for any educational professional is to sit down on an evening or a weekend (it always seems to be an evening or a weekend, when you should be doing something more enjoyable) with 100 essays or exam scripts, all on the same topic, and slowly, resentfully plod your way through grading them. It’s hell. At times like that I would have been willing to cut off a finger if somebody could have shown me a way to have them all marked automatically.
Well, recently it seems the prayers of educators may have been answered. Several companies are working on software that automatically grades written assignments. This article covers the basics, but briefly: students upload their work to a web portal and get instant feedback on it. One particular company has produced a piece of software called ‘SAGrader’, which they claim uses artificial intelligence and NLP (Natural Language Processing) algorithms to effectively ‘read’ the essay, and thereby provide much more detailed and specific feedback on the content. Such a system should in theory be able to grade not only on simple things like spelling and grammar (word processors have been detecting and correcting these for years) but on the actual semantic content of a piece of work. If it works, this would be a massive help, and the numbers and testimonials on the SAGrader website do seem to suggest that it does. What you have to remember is that human graders are massively fallible, so to be useful a piece of software doesn’t have to be perfect – it just has to be better than human graders.
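SAGrader’s actual algorithms aren’t public, but the basic idea of grading on content (rather than just spelling and grammar) can be illustrated with a crude sketch: score an essay by how many concepts from a rubric it actually mentions. The rubric, essay text, and function name below are all hypothetical; a real NLP system would go far beyond keyword matching.

```python
import re

def content_score(essay, rubric_concepts):
    """Toy content grader: fraction of rubric concepts the essay mentions.
    A crude stand-in for real NLP -- systems like SAGrader presumably do
    far more than simple keyword matching."""
    # Tokenise into lowercase words, ignoring punctuation.
    words = set(re.findall(r"[a-z']+", essay.lower()))
    hits = [c for c in rubric_concepts if c in words]
    missing = [c for c in rubric_concepts if c not in words]
    return len(hits) / len(rubric_concepts), missing

# Hypothetical rubric for an essay question on photosynthesis.
rubric = ["chlorophyll", "sunlight", "glucose", "oxygen", "carbon"]
essay = "Plants use chlorophyll to capture sunlight and produce glucose."
score, missing = content_score(essay, rubric)
print(score)    # 0.6
print(missing)  # ['oxygen', 'carbon'] -- concepts to add in the next draft
```

Even this toy version hints at why draft-by-draft feedback is possible: the list of missing concepts tells the student exactly what to address before resubmitting.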
It seems to me that automatic grading of this kind has a number of significant advantages:
1. Firstly, time. Marking papers and exams is a massive, massive time-sink for academic staff. Plus it is boring, dispiriting, repetitive work most of the time.
2. Instant feedback. Students hate waiting for feedback, and it can sometimes take lecturers weeks to get around to marking essays. With automatic marking through a web portal the feedback could be instant. This is related to the next point.
3. Multiple drafting. A system could be devised that allowed students to submit multiple drafts of work, get feedback at each stage, and only submit a final draft when they were ready. This would be impractical with human markers. In fact, SAGrader reckon there is an 11% average increase in grades between a first and a final draft.
4. Accuracy, reliability and comparability. Humans are not very good at working to a shared set of standards, and inter-rater reliability between, say, 10 TAs grading the papers of a large class of 1000 students is often poor. Software grading should eliminate this problem, and also allow comparability between two assignments, two classes, or two colleges, since everything is marked against the same criteria.
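The inter-rater reliability problem in point 4 can be quantified. One standard measure is Cohen’s kappa, which corrects raw agreement between two graders for the agreement you would expect by chance. A minimal sketch, with two hypothetical TAs’ letter grades made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.
    1.0 = perfect agreement, 0.0 = no better than chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters graded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal grade frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[g] * counts_b[g] for g in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades from two TAs marking the same ten essays.
ta1 = ["A", "B", "B", "C", "A", "C", "B", "A", "C", "B"]
ta2 = ["A", "B", "C", "C", "A", "B", "B", "A", "C", "C"]
print(round(cohens_kappa(ta1, ta2), 2))  # 0.55 -- only moderate agreement
```

A kappa around 0.55, as in this made-up example, is the sort of moderate agreement that makes large multi-grader classes feel unfair; a single software grader applied to every script sidesteps the issue entirely (its errors may be systematic, but at least they are the same for everyone).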
The biggest problem with this approach is that people simply don’t believe the system works well enough. I can see that many academics, and probably students too, will be sceptical about its usefulness. However, the advantages seem large enough that I would seriously consider giving it a try.
P.S. Just came across this paper while looking for material for this post, which attempts to describe “The next development in essay processing technology”, i.e. automatic generation of essays, given a question. Software that could write a good essay would be seriously cool – and pose serious problems for educators. It’s probably only a matter of time before such things become widely available, though.