Tuned Models of Peer Assessment in MOOCs, Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng, Daphne Koller. In Proceedings of The 6th International Conference on Educational Data Mining (EDM 2013), Memphis, TN, USA, July 2013.


In massive open online courses (MOOCs), peer grading serves as a critical tool for scaling the grading of complex, open-ended assignments to courses with tens or hundreds of thousands of students. Despite promising initial trials, however, it does not always deliver results as accurate as those of human experts. In this paper, we develop algorithms for estimating and correcting for grader biases and reliabilities, showing significant improvement in peer grading accuracy on real data comprising 63,199 peer grades from Coursera's HCI course offerings, the largest peer grading networks analyzed to date. We relate grader biases and reliabilities to other student factors such as student engagement, performance, and commenting style. We also show that our model can lead to more intelligent assignment of graders to gradees.
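To make the idea of bias and reliability correction concrete, here is a minimal sketch of one plausible correction scheme, not the paper's actual probabilistic model: each observed peer grade is treated as the item's true score plus a per-grader bias plus noise, and item scores and grader biases are re-estimated alternately until they stabilize. All names below (`estimate`, the tuple format of `grades`) are illustrative assumptions.

```python
import statistics

def estimate(grades, n_iters=20):
    """Hypothetical bias-correction sketch (not the paper's model).

    grades: list of (grader, item, score) tuples.
    Returns (score, bias): debiased item scores and per-grader biases,
    under the simple model  observed = score[item] + bias[grader] + noise.
    """
    graders = {g for g, _, _ in grades}
    items = {i for _, i, _ in grades}
    bias = {g: 0.0 for g in graders}
    score = {i: 0.0 for i in items}
    for _ in range(n_iters):
        # Re-estimate each item's score after removing current bias estimates.
        for i in items:
            score[i] = statistics.mean(
                s - bias[g] for g, j, s in grades if j == i)
        # Re-estimate each grader's bias against the current item scores.
        for g in graders:
            bias[g] = statistics.mean(
                s - score[j] for h, j, s in grades if h == g)
    return score, bias
```

For example, if grader A always grades one point high and grader B one point low, a few iterations of this fixed-point scheme recover both biases and the underlying item scores (up to a shared additive constant, which is only pinned down when biases roughly cancel across graders).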


@inproceedings{piech2013tuned,
  author    = {Chris Piech and Jonathan Huang and Zhenghao Chen and Chuong Do and Andrew Ng and Daphne Koller},
  title     = {Tuned Models of Peer Assessment in {MOOC}s},
  booktitle = {Proceedings of The 6th International Conference on Educational Data Mining (EDM 2013)},
  address   = {Memphis, TN, USA},
  month     = {July},
  year      = {2013}
}