Mark Carrigan<p><strong>Why generative AI guidance for students needs to be embedded in departments</strong></p><p>I just read the <a href="https://www.russellgroup.ac.uk/sites/default/files/2025-01/Russell%20Group%20principles%20on%20generative%20AI%20in%20education.pdf" rel="nofollow noopener" target="_blank">Russell Group AI principles</a> for the first time since they were released and was struck by principle number 2: “<strong>Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience</strong>”. This is exactly what I’ve been <a href="https://markcarrigan.net/2025/08/08/are-uk-universities-ready-to-cope-with-generative-ai-in-the-25-26-academic-year/" rel="nofollow noopener" target="_blank">blogging about</a> <a href="https://markcarrigan.net/2025/08/10/the-gap-between-student-genai-use-and-the-support-students-are-offered/" rel="nofollow noopener" target="_blank">recently</a> as the point where the sector is struggling to adapt to the diffusion of LLMs, which has <em>already happened</em> within the student community. As the guidance itself acknowledges, what it means to use LLMs “effectively and appropriately in their learning experience” will vary between disciplines: </p><blockquote><p>The appropriate uses of generative AI tools are likely to differ between academic disciplines and will be informed by policies and guidance from subject associations, therefore universities will encourage academic departments to apply institution-wide policies within their own context. Universities will also be encouraged to consider how these tools might be applied appropriately for different student groups or those with specific learning needs.</p></blockquote><p>Unfortunately, this places a great burden on subject associations at a point when many of them are still grappling with the financial difficulties generated by the pandemic: declining membership rates, increasing costs and at least some event income having been knocked out temporarily. It also assumes that subject associations would have the <em>capacity</em> to do this, quite apart from the resources. It might be possible for associations with dynamic leadership and a strong base of academic members working on these issues, but even then it’s asking a lot, and most do not have this baseline level of resource. Where they do engage, it’s likely to be subject to institutional isomorphism: replicating the assumptions of other groups, because no one is yet clear what this all means and everyone is worried about being seen to misstep. </p><p>Subject associations were never going to be able to provide this guidance with sufficient depth and contextual sensitivity. This seems so obvious to me that it’s hard not to read the Russell Group principles as an (unconscious?) passing of responsibility for a difficult task to an external agent. Not least because the final statement under principle two illustrates what <em>is</em> needed in order to address this: </p><blockquote><p>Engagement and dialogue between academic staff and students will be important to establish a shared understanding of the appropriate use of generative AI tools. Ensuring this dialogue is regular and ongoing will be vital given the pace at which generative AI is evolving.</p></blockquote><p>I see no possible way around this. This dialogue has to take place, be embedded in existing processes and involve safe spaces in which staff and students feel able to talk frankly about their perceptions. 
It has to be informed by university policy but not subordinated to it. It has to continue for as long as the landscape of generative AI is changing. It has to be lightweight enough to get buy-in from a sufficient number of staff when workloads are spiralling amidst a general sense of crisis. It has to be robust enough to actually have some hope of generating norms and standards concerning what “effective and appropriate” use of LLMs means in their context. </p><p>The Russell Group principles <strong>describe the problem as if it’s the solution</strong>. This is not a straightforward undertaking, as suggested by how little evidence there is that it’s actually taking place across the sector. Saying ‘dialogue is important’ necessitates thinking about what the infrastructure for that dialogue can and should look like. In practice, there’s a range of questions such a dialogue needs to address:</p><p><strong>What’s actually happening on the ground?</strong></p><p>What are students in our discipline using AI for? Which specific tools, at what points in their work? How does this differ from what we imagine is happening?</p><p><strong>What makes our discipline what it is?</strong></p><p>Which capabilities and ways of thinking are foundational to what we do? What has to remain human for this to still be our field? Where might AI genuinely enhance rather than undermine these capabilities?</p><p><strong>When does support become substitution?</strong></p><p>At what point does AI use shift from supporting learning to bypassing it? How do we recognize genuine engagement versus its simulation? What’s the difference between scaffolding and outsourcing?</p><p><strong>How do we assess in an AI-saturated world?</strong></p><p>What forms of assessment still tell us something meaningful? How do we evaluate understanding when outputs can be generated? What new approaches might we need to develop?</p><p><strong>Who gets left behind?</strong></p><p>Which students have access to what tools? How does the wealth gap manifest in AI capability? What would meaningful support look like?</p><p><strong>What’s the disconnect with professional practice?</strong></p><p>How is AI actually used in our field outside universities? What happens when we prohibit tools that are standard in the workplace? How do we prepare students for reality?</p><p><strong>How do we build collective capacity?</strong></p><p>What do staff need to feel less anxious about this? What helps students use AI thoughtfully rather than desperately? 
How do we learn from what’s working and what isn’t?</p><p><a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/ai/" target="_blank">#AI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/ai-prinicples/" target="_blank">#AIPrinicples</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/artificial-intelligence/" target="_blank">#artificialIntelligence</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/chatgpt/" target="_blank">#ChatGPT</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/education/" target="_blank">#education</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/generative-ai/" target="_blank">#generativeAI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/large-language-models/" target="_blank">#largeLanguageModels</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/russell-group/" target="_blank">#russellGroup</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/students/" target="_blank">#students</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://markcarrigan.net/tag/technology/" target="_blank">#technology</a></p>