AI in further education – nearly three years down the road
Artificial Intelligence, and especially generative AI, has rapidly made its way into classrooms, lecture halls, and even assignment guidelines.
It is now nearly three years since ChatGPT launched in November 2022, and some uses of AI in education have matured.
And yet many of us are still grappling with what this technology means for our teaching models, assessment strategies, and student engagement.
This post takes stock of where we are now with AI in Further Education (FE): the policy landscape in the UK and Ireland, what students are doing with these tools, and how educators might integrate AI into their teaching practice.
Current state of AI in further education
AI in FE is no longer an abstract concept or distant future.
Many FE colleges in the UK and elsewhere are already experimenting with AI in both the classroom and back-office functions.
A recent research project showed that AI champions play a crucial role in creating a buzz around AI and supporting staff.
These AI champions are teachers with technology expertise who support their colleagues in demystifying AI and demonstrating its potential.
In the UK, the Department for Education now encourages teachers to use AI tools like ChatGPT for:
- lesson planning
- admin tasks
- marking support
provided that humans stay in control.
National bodies have issued principles to guide responsible AI use in colleges, stressing ethics, data protection, and fairness.
FE colleges are also starting to write their own AI policies, often focusing on how students may (or may not) use AI in assessments.
Ireland is taking a similar path.
The AI Advisory Council has recommended national guidelines and teacher training on AI literacy.
Institutions like the College of Further Education and Training have adopted practical policies that avoid blanket bans and focus on teaching students how to use AI critically and ethically.
How students are using AI
Students are already using generative AI, whether teachers endorse it or not.
Some students use it as a learning tool for summarising course materials, generating practice questions, or brainstorming ideas for assignments.
An interesting recent development is the introduction of “study mode” in ChatGPT. According to the developers, study mode was built to support real learning: rather than handing over answers, it aims to encourage students to think critically and engage actively with lesson materials.
At the same time, some students use genAI in ways that do not support their learning and instead substitute for their own thinking.
Entire assignments may be AI-generated, sometimes without understanding the content, let alone learning from it.
This creates concerns not only about academic integrity, but also about whether students are actually developing the skills their qualifications aim to certify.
Moreover, “catching” this unethical behaviour is difficult, as traditional similarity checks cannot reliably identify AI-generated content.
In my classroom, I have adapted to the current situation in two ways:
- Focus on fact-checking: Last semester, I required all my students to complete an online course on prompt engineering and one on trustworthy generative AI, to teach them how to fact-check AI-generated content. For written reports, I allowed the use of genAI but required a one-page explanation of the prompts used and how the student fact-checked the resulting text.
- Change delivery mode: For some projects, I have replaced the written report with an oral presentation this semester. I have also replaced my open-code, open-book, open-laptop exams with closed-book, pencil-and-paper exams that permit a single page of formulas. In my opinion, there is a time and place for using AI, and a time and place for banning it.
One important note: we are all guinea pigs in a big experiment at the moment.
We do not yet know what impact AI will have on educational outcomes, because the tools change rapidly and we do not have enough data to study the effect.
As educators, we should always keep this concern in mind.
Supporting teachers
FE staff already carry heavy workloads: from lesson planning to marking, from admin to pastoral care.
AI has real potential to ease this burden—if used wisely. Here are some ways AI is already helping:
- Lesson planning: When faced with the challenge of creating a new course, FE teachers can use ChatGPT to estimate the student workload implied by their lesson plans and to suggest topics or activities that overlap or are missing.
- Resource creation: Teachers can use AI tools like ChatGPT or Microsoft Copilot to generate worksheets, exercises or quizzes in minutes.
- Improving communications: GenAI tools can help draft clear emails or information bulletins for students. For example, genAI can be asked to take on the role of a student and identify any parts of the communication that are unclear.
The key is that AI assists FE teachers; it does not replace them.
Like any tool, it requires thoughtful use and human oversight.
And while it might be tempting to automate everything, the most effective FE teachers are still those who build relationships with learners, adapt to their needs, and foster curiosity; these tasks require human warmth and compassion for our students.
Conclusion
AI is transforming education, and Further Education is no exception.
While there are genuine concerns around plagiarism and academic integrity, there’s also a world of opportunity to enhance both teaching and learning.
Rather than resisting AI, FE colleges should embrace the inclusion of AI in teaching and learning, critically, ethically, and creatively.
The system as a whole needs to focus on what really matters: empowering learners with the skills they need for the future; AI included.