There are three sources of assessment for COMP34120 which count towards the grade: the Semester 1 project, the Semester 2 project, and the final exam. The problem sets do not contribute to the final mark; they have been included to aid learning and as preparation for the final exam. This document is about the assessment of the Semester 1 project.
The marks for the entire course are divided between the first-semester project, the second-semester project, and the exam. For the Semester 1 project, part of the mark is for approach and part is for performance. So a bot which does not win many games, but is based on an approach which is well informed by the literature and/or based on a very clever idea, could get a high approach mark even if it does not get a good performance mark. The performance mark for the Semester 1 project is broken down into win performance, which is how many games the bot wins, and speed performance, which is how fast it is. Win performance is weighted more heavily than speed performance, because it is easy to make a fast player simply by using a player which makes no calculations to choose its move.
There is one exam in May/June 2021 which covers both semesters. Half of the exam is on semester 1 material and half on semester 2 material.
The next feedback point is after the project work is submitted and marked. Written feedback will accompany the marks on Blackboard.
The approach mark is assessed by the lecturers based on the presentation given during the last week of the semester and on what is described in the group journal. We are interested in how the choices you made in designing and improving your bot were informed by the following.
We will follow the University late-submission policy, which deducts marks per hour late. If a bot is submitted on time but does not run or work without later intervention, it may be treated as late, even if it is quickly rectified by the students. In that case, the length of time it is late is the time between the students being informed that it does not work and the students rectifying it. Otherwise, it is marked as described below.
This will be based not on absolute run time, but on run-time rank: the fastest bot is ranked 1 and the slowest bot is ranked n, where n is the number of bots in the tournament.
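As an illustration of this ranking scheme, the following sketch assigns ranks from measured run times. The function name, the dictionary-based input, and the tie-breaking by name are illustrative assumptions, not part of the actual marking infrastructure.

```python
def speed_ranks(run_times):
    """Rank bots by run time: fastest gets rank 1, slowest rank n.

    run_times: dict mapping bot name -> total run time in seconds.
    Ties are broken alphabetically by bot name (an assumption; the
    marking scheme does not specify tie handling).
    """
    order = sorted(run_times, key=lambda bot: (run_times[bot], bot))
    return {bot: i + 1 for i, bot in enumerate(order)}

# Example: three bots with different total run times.
ranks = speed_ranks({"alpha": 2.0, "beta": 1.0, "gamma": 3.0})
# beta is fastest (rank 1), gamma is slowest (rank 3 = n).
```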
Sections 2.2, 2.3, 2.4, and 2.5 describe how marks are assigned to groups. There also needs to be a mechanism to assign marks to individuals. Obviously, the exam does that. During the presentation, the lecturers have the option of assigning different approach marks to different students. Typically we do not do this, because everyone in the group shares the same approach, unless there is a clear discrepancy between one student and the others in the group. The journal, however, is another matter: we need all of you to document your contribution to the project in your group's journal.
If, during the project work, it transpires that the group is not functioning as a team, we urge you to let us know as soon as possible so that we can try to fix it. This could be because one member of the group is not engaging, or because one member is not being given the opportunity to engage by the others. It is much better to fix the problem as soon as it emerges than after the project is over. A peer assessment mechanism is also provided for requesting differential marks for different members of your group; we sincerely hope that you do not need to use it.
This document was generated using the LaTeX2HTML translator Version 2008 (1.71)
The translation was initiated on 2020-10-05.