Assessment for semester 1 project

JLS



1 Marks breakdown

The breakdown of marks for the entire course is: $30\%$ for the first-semester project, $30\%$ for the second-semester project, and $40\%$ for the exam. For the semester 1 project, $15\%$ is for approach and $15\%$ is for performance. So, a bot which does not win many games but is based on an approach which is well informed by the literature and/or based on a very clever idea could get a high approach mark, even if it does not get a good performance mark. For the semester 1 project, the performance mark is broken down into win performance (how many games the bot wins) and speed performance (how fast it is). Win performance is worth $3$ times more than speed performance, because it is easy to make a fast player simply by using a player which makes no calculations to choose its move. So, of the $15\%$ performance mark, $11.25\%$ is for win performance and $3.75\%$ is for speed performance.
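That is, the split of the $15\%$ performance mark follows directly from the $3:1$ weighting:

$\displaystyle 15\% \times \tfrac{3}{4} = 11.25\% \mbox{ (win)}, \qquad 15\% \times \tfrac{1}{4} = 3.75\% \mbox{ (speed)}.$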

There is one exam in May/June 2020 which covers both semesters. Half of the exam is on semester 1 material and half on semester 2 material.


2 First semester assessment


2.1 Feedback

There are two points of feedback for the project work of semester 1. During the second week of the project work period, each group will fill out a proforma stating what method(s) they propose to use and how they are going to split up the work across the group. They bring this to the lab to discuss it with the lecturers. Written feedback will appear on Blackboard.

The next point is after the project work is submitted and marked. Written feedback will accompany the marks on Blackboard.


2.2 Approach mark

The approach mark is assessed by the lecturers based on the presentation given during the last week of the semester and on what is described in the group journal. We are interested in how the choices you made in designing and improving your bot were informed by the following.

Good understanding of the methods used:
We don't want you to recite the material we taught you back to us. We want to see evidence, in what you implement and in the decisions you make, that you understand the methods you are using.
Use of the literature:
Were the system you built and the choices you made informed by work in the established literature?
Use of evidence and experiments:
Did you carry out careful testing or comparative experiments to choose parameters, evaluate approaches, and so on? Did you show us the outcomes of these?
Effective implementation:
Did you optimise the representation or the tree search, use parallelisation, or apply some other optimisation?
It is very important that you document what you have done in your journal, both individually and as a group.


2.3 Marking of late and non-working bots

We will follow the University policy, which is $10$ marks per $24$ hours late. If a bot is submitted on time but does not run or work without later intervention, it may be treated as late, provided the students rectify it quickly. In that case, the length of time it is late is the time between the students being informed that it does not work and the students rectifying it. Otherwise, it is marked as below.

$0\%$:
No bot submitted, late or otherwise.
$32\%$:
Bot submitted on time, but will not compile or run.
$40\%$:
Bot submitted on time, but could not in principle win any games because it always fails to produce commands which satisfy the protocol, always times out, or fails to play any games at all.


2.4 Marking of win-performance of bots

$50\%$ - $90\%$:
Any bot which could in principle win games (i.e. it plays correctly) gets a mark in this range, in proportion to the fraction of games won (see the formula following this list). A bot which could win a game but loses every game gets a mark of $50\%$ (i.e. it plays correctly, but extremely poorly); a bot which wins half of its games gets $70\%$; a bot which wins all of its games gets $90\%$.

Extra credit:
Up to $10\%$ can be added for an exceptionally well-thought-out or novel approach. If a bot is faster than bots of similar win performance, or is better than many bots in both win performance and speed performance, it can gain marks here.
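Reading "in proportion" as a linear interpolation between these endpoints, the wins score would be:

$\displaystyle \mbox{wins score} = 50\% + 40\% \times \frac{\mbox{games won}}{\mbox{games played}},$

which reproduces the three examples above ($0$ wins gives $50\%$, half gives $70\%$, all gives $90\%$).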


2.5 Marking of speed-performance of bots

This will be based not on absolute run time, but on run-time rank, where the fastest bot is ranked $1$ and the slowest is ranked $N$, $N$ being the number of bots in the tournament.

$50\%$ - $90\%$:
Any bot which could in principle win games (i.e. it plays correctly) gets a mark in this range, in proportion to its speed rank (see the formula following this list). A bot with the median speed gets a mark of $70\%$; the fastest which can in principle win a game gets $90\%$; the slowest gets $50\%$.
Extra credit:
Up to $ 10\%$ can be added to the mark. Often the run-times range over two or three orders of magnitude, which is not captured in the rank. Extra credit can be given to bots which are much faster than those of similar rank or similar win performance.
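Reading the rank mapping the same way, a bot ranked $r$ out of $N$ (with $r=1$ the fastest) would receive:

$\displaystyle \mbox{speed score} = 90\% - 40\% \times \frac{r-1}{N-1},$

which gives $90\%$ for rank $1$, $70\%$ for the median rank, and $50\%$ for rank $N$.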
As mentioned previously, win performance is three times more important than speed. In other words,

$\displaystyle \mbox{Performance score} = \frac{3\times \mbox{wins score} + \mbox{speed score}}{4}.$
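For example, a hypothetical bot with a wins score of $80\%$ and a speed score of $60\%$ would receive a performance score of $(3\times 80\% + 60\%)/4 = 75\%$.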


2.6 Individual marks

Sections 2.2, 2.3, 2.4, and 2.5 describe how marks are assigned to groups. There also needs to be a mechanism to assign marks to individuals. Obviously, the exam does that. During the presentation, lecturers have the option of assigning different approach marks to different students. Typically we do not do this, because everyone in the group shares the same approach; we only do so when there is a clear discrepancy between one student and the rest of the group. The journal, however, is another matter. We need all of you to document your contribution to the project in your group's journal.

If, during the project work, it transpires that the group is not functioning as a team, we urge you to let us know as soon as possible so we can try to fix it. This could be because one member of the group is not engaged, or because one group member is not being given the opportunity to engage by the other members. It is much better to fix the problem as soon as it emerges than after the project is over. Failing that, a peer-assessment mechanism is provided to request differential marks for different members of your group. We sincerely hope that you don't need to use it.

Jonathan Shapiro 2018-09-20