Preliminary Analysis of Contestant Performance for a Code Hunt Contest
Platforms of programming contests are increasingly adopted to incentivize students' interest in programming and to train their programming and problem-solving skills. Code Hunt (https://www.codehunt.com/) is one such popular platform from Microsoft Research, adopted in various contests worldwide. For a contest, Code Hunt hosts a sequence of programming puzzles provided by the contest organizers and gives interactive feedback to the contestants to assist them in solving the puzzles. Since its first release in April 2014, Code Hunt has been used by over 350,000 players as of August 2016. Analyzing platform-collected data for a Code Hunt contest can provide valuable insights into both the contestants and the puzzles, in order to improve the design of future contests and the training of contestants. In this paper, we present a preliminary analysis of contestant performance across all contestants, and compare the performance of contestants using mostly Java with that of contestants using mostly C over a 48-hour contest. The analysis is conducted on a publicly released Code Hunt data set that contains the programs written by students worldwide during the contest. In the contest, 259 contestants attempted to solve 24 puzzles, submitting about 13,000 programs in total. The analysis results reveal a number of interesting and useful observations for future research.