TPP Case Study 3: Assessing Learning and Exchanging Feedback

Assessment of Work Produced Using AI and the Students’ Benefit from the Class

Contextual background

As an Associate Lecturer at the UAL Creative Computing Institute, I mostly teach classes related to creative coding. Over the past year, my students have clearly been using more and more AI tools to produce their final submissions, though it is often difficult to tell to what extent. The central challenge is evaluating the learning outcomes based on the submitted work while preparing students for industry roles that will undoubtedly involve AI, and at the same time maintaining academic standards and fair assessment.

Evaluation

Last year, after noticing the issue, my manager at the time, Dr Hunter Brueggemann, and I agreed on a set of guidelines for students on the use of AI, which Hunter formalised into slides (Brueggemann and Tańska, 2023). The main intention was to encourage students to use AI tools critically, to support their learning rather than to do the work for them, and to require them to disclose any use. We have been including these guidelines in the class briefing at the start of each term. I believe many students still use AI tools without disclosing it, as I receive submissions that are quite advanced and cannot readily be found online, yet are noticeably off-topic. We clearly struggle to get through to the students, and I don't feel they benefit much from the class this way, even though their work receives good and very good marks.

Moving forward

Since last year, several policies around the use of AI tools have been developed across UAL. It has also been clearly defined that using AI tools without citation constitutes plagiarism and is therefore treated as academic misconduct.

One of my other managers, Dr Louis McCallum (2024), shares submission templates with students that contain a section on the use of AI which students must complete. Given how the learning outcomes (LOs) are defined, code or essays with large sections generated by AI do not meet them well. Explaining this to students is partially helpful.

Going forward, I would like to open up more discussion about the use of AI tools in class, focusing on using AI to support self-directed learning and on re-evaluating the learning outcomes, aiming for constructive alignment (Biggs, 2003) between the desired LOs and assessment. As generative AI tools have become an industry standard and we aim to prepare students for employment, there may be a misalignment between the desired LOs and what we currently expect students to do. If the methods we require them to use gradually become outdated in industry, and if we want the LOs to represent practical skills connected to employability, the current model does not work.

Using AI comes with its own set of challenges. One is the encoded bias and the possible errors of AI tools, which students often do not consider. I believe a critical understanding of this issue, particularly in an intersectional context, is another candidate for inclusion in the revised learning outcomes.

References

Biggs, J. (2003). Aligning Teaching for Constructing Learning. Online: The Higher Education Academy. Available at: https://www.researchgate.net/publication/255583992_Aligning_Teaching_for_Constructing_Learning (Accessed: 25 July 2024).

Brueggemann, H. and Tańska, M. (2023). Responsible Use of Large Language Models (LLMs) in Coding One: Advanced Creative Coding (Modular). London: UAL Creative Computing Institute.

McCallum, L. (2024). An observation made while teaching on the Personalisation and Machine Learning module at the MSc Data Science and AI for Creative Industries programme 2023/24. London: UAL Creative Computing Institute.
