How do we approach the impact of Generative AI in our institution? - Transcript

Mark Hancock:

Generative Artificial Intelligence, or GAI, has taken the world by storm over the last six months, and we have seen the rapid development of emerging technologies such as ChatGPT, which has released three consecutive enhancements within this time frame. It is estimated that over 100 million users accessed and produced text using ChatGPT in the first two months following its release in late 2022.

ChatGPT is a large language model (LLM) that allows people to interact with a computer in a natural, conversational way. The Generative Pre-trained Transformer, known as GPT, is a natural language model developed by OpenAI. It uses natural language processing to learn from data and provide users with AI-based, written answers to prompts or questions.

Several concerns have been raised about the reliability of Generative AI, given its rapid evolution. Other contentious issues include its ethical impact, spanning privacy, equality, diversity and inclusion (EDI), bias, and a lack of regulation, in addition to its impact on academic integrity within higher education institutions.

Given the rapid pace of development so far, and its anticipated continuation, there is a need for collective discussion and understanding of the impact, as well as for exploring broader debates around the ethics, skills, and literacy of Generative AI.

We have seen the sector respond quickly to the idea of detection, with large software providers such as Turnitin suggesting institutions adopt their software solutions in response to these emerging technologies. Our current position on detection is aligned with the Russell Group and with early research suggesting that detection cannot be relied upon; we have therefore not enabled this functionality.

At institution level, our initial focus has been on revising the Academic Integrity Code of Practice, reiterating the message that students must not pass off work as their own when it has been produced using software, including Generative AI.

A position statement on GAI in Education will be published shortly by the University of Birmingham, detailing revisions to the Academic Integrity Code of Practice and guidance for staff, compiled in collaboration with Russell Group institutions. This will provide consistent guidance and advice to support staff and the student learning experience.

These developments present several opportunities for us to learn and to think about ways in which this emerging technology can enhance the staff and student experience in Birmingham without compromising standards. We know there is more to come from Microsoft and Google in this space, and as a result we will be establishing, in the coming weeks, a community of best practice for staff in education and a steering group led by senior academic stakeholders and supported by HEFi.

HEFi will also be offering a number of resources to support teams in reviewing or redeveloping their assessments in response to these developments.

Look out for invitations to join a new Generative AI community of best practice, which will feed into and from a new university Generative AI steering group led by Michael Grove, DPVC Education Policy and Standards.