Questions demanding subjective (open) responses have long been considered the most desirable assessment format for gauging candidate learning. Such questions allow candidates to express themselves creatively and help evaluators understand a candidate's thought process. The evaluation of such subjective responses, however, has traditionally required human expertise and is challenging to automate. On the other hand, automated assessments provide scalability, standardization and efficiency. Given the recent shift towards online learning and the massive scale of operations, there is a need to develop systems which can combine the advantages of both expert assessors and automated systems. Drawing from attempts made by both the machine learning community and educational psychologists, we provide general principles on how any subjective evaluation problem can be cast in the framework of machine learning. These principles highlight the various choices and challenges one would need to consider while devising a machine learning based assessment system. We go on to demonstrate, as a case study, how a system to assess computer programs has been successfully designed using the principles described.