ChatGPT and Assessments: Finding the Balance

Updated: Jun 1


Navigating assessment in the age of generative AI


AI and ChatGPT present both opportunity and difficulty for teachers. Read on for more information about how you can navigate assessment.




Technology is always changing; that’s the nature of it. Generative AI in general, and ChatGPT specifically, are currently a major topic of conversation and speculation globally. They raise a lot of questions for educators.


Programmes like ChatGPT bring with them myriad possibilities for teachers. Like any tool, they can be used well or badly. In this particular case, they can also be used—or misused—by students to either supplement or skirt their learning.


How can educators make the most of this new tech while also preventing misuse? There are many areas of education touched by it—in this article, we’re focusing on assessment.



Assessment can still be effective in the age of AI

One of the primary questions that will be raised is this: how can you prevent students from using ChatGPT and other generative AI tools to write answers to their assessments for them?


There is no clear-cut path, but one principle holds: good assessment practice before AI remains good practice after it. To avoid being taken in by computer-generated answers that give an unclear picture of a student’s true understanding, you should be keeping abreast of their progress in a variety of ways. This has not changed!


To put it bluntly, teachers who are paying attention will notice when a student who has struggled with grammar throughout the year suddenly submits a perfectly written essay answer at the end of it. An answer (or answers) that goes beyond a student’s real understanding should be an outlier that is fairly easy to recognise. And to build this recognition and understanding, you should be designing ongoing assessments in a way that won’t allow students to simply copy and paste.


But what does that mean in practice?



Assessment with AI needs to capture the learning journey


Good assessment practice in the age of generative AI can include:


  • Creating learning portfolios, developed throughout the year to capture what and how students are doing on their learning journey. Any single assessment depends on how a student performs on that day, and on whether they find a way to “beat” the system. A portfolio shows the whole journey, up and down days alike. It can include feedback and reflections from both student and teacher, multimedia presentations, spoken and written assessments, and a progression of learning artefacts. A portfolio should be a valid record of the learning journey and where the gaps are.


  • Using instructional rubrics as a way to capture, grade, and quantify their learning portfolio. The difference between a rubric and a traditional marking schedule is that it is used as a tool and guide throughout the learning, not kept secret from the learner. Allowing them to take ownership of their learning outcomes encourages real progress, not just box-ticking.


  • Manual mode! As a fallback, you can’t go past the classic pen and paper for an AI-proof assessment option. However, this should be used a) sparingly, and b) in conjunction with digital assessment, because otherwise you’re ignoring the real world. There are also accessibility issues to consider; some students will struggle with handwriting and should be supported to succeed if this is the case.


  • Using tools such as proctoring software and screen monitoring to deter cheating during digital or online exams. AI detection tools are also being developed. However, these are simply helpful tools, not systemic solutions to what will be an ongoing challenge.


It’s also important to note that AI can be helpful for teachers when creating assessments—don’t miss the chance to take advantage of it! It is adept at creating quizzes and questions for topics that are reasonably fact-based and require closed answers. Human oversight and adjustment may still be needed, but it can reduce the precious time you spend on such tasks.


Christopher Lind goes into this in further depth in his article “Decoding Generative AI in Assessment”, describing how “Generative AI can be paired with video to create dynamic, adaptive, and comprehensive skill assessments.” And while he’s speaking from a business context, it is certainly relevant to the future of assessment in education. He notes that while AI is the tool, humans are the craftsmen—a promising thought for those ready to embrace the capabilities of AI in their work.


For more on how you can tackle the challenges presented by AI in education, take a look at this fantastic conversation shared by tech commentator Sinead Bovell. We love this perspective: “test for that knowledge in new and more challenging ways”.



 


If you are interested in topics like this one and would like to arrange further training or resources for your team, get in touch with Think e-Learning! We love equipping leaders to guide and nurture their teams as they ease into digital literacy and discover the full potential of online learning.




