What Should We Know Before Using ChatGPT in Schools? An Interview with Lidija Kralj

Lidija Kralj is a Senior Analyst at European Schoolnet. Before joining EUN, Lidija worked at the Ministry of Science and Education in Croatia, where she led projects on digital education, learning analytics, data-based decision-making in education and comprehensive curricular reform. She is an expert in the European Commission's working groups on artificial intelligence and data in education and training, digital education and safer internet, as well as in UNESCO and Council of Europe working groups on AI in education.

"You recently spoke in the Teachers of Europe Podcast about Chat GPT. You suggest that some ethical models should be built into it from the start, such as disinformation and hate speech filters. How is Chat GPT currently handling bias and hate speech? How can this affect teachers and students?"

 

ChatGPT and other generative AI tools are new, and almost everybody is amazed by how fast they generate text, pictures or video. However, people rarely think about where those creations come from, what datasets the AI tools were trained on, and which bias, disinformation, hate speech or cyberbullying filters are already part of the tool. Unfortunately, history seems to be repeating itself: policymakers reacted to internet abuse and misuse when it was already too late, and the same might happen with AI. None of the fundamental safety building blocks seems to have been applied with any forethought.

If we want to use AI tools with students in primary and secondary school, we have to be aware of all the risks and advocate for "ethics by design", so that any AI tool already has built-in profanity filters, prevents misuse (for example, adults could use AI tools to groom children), and recognizes and blocks hate speech and fake news. Safeguarding children must be a priority when developing such tools. A rough sketch of what this means in practice follows below.
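
To make "ethics by design" more concrete, here is a toy Python sketch. The generate() function and the word list are placeholders, not any real tool's API, and real systems would rely on trained classifiers rather than a keyword list. The point is the placement of the check: safety filtering is part of the generation pipeline itself, applied to both the student's request and the model's response before anything is displayed, rather than bolted on afterwards.

    # Toy sketch of "ethics by design": the safety check is part of the
    # pipeline, not an optional add-on. All names here are illustrative.
    BLOCKED_TERMS = {"example-slur", "example-profanity"}

    def generate(prompt: str) -> str:
        # Stand-in for a call to a generative model.
        return f"Model output for: {prompt}"

    def is_blocked(text: str) -> bool:
        # Very naive keyword filter; real tools would use trained classifiers.
        return any(term in text.lower() for term in BLOCKED_TERMS)

    def safe_generate(prompt: str) -> str:
        # Filter the request first, then the response, before display.
        if is_blocked(prompt):
            return "Request blocked by the safety filter."
        output = generate(prompt)
        if is_blocked(output):
            return "Response blocked by the safety filter."
        return output

    print(safe_generate("Explain photosynthesis to a primary school class."))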

In a recent interview, the Australian eSafety Commissioner emphasized the risks: "We've already seen chatbots exhibiting sociopathic behaviour, child sexual abuse imagery being created by generative AI and precision language propagating spam, phishing and scams".

ChatGPT is not able to evaluate whether the information it provides is accurate. Could this be used as an opportunity to design new learning activities in the classroom?

ChatGPT works on probability models, which means it gives the answers that are most probable given the datasets it was trained on. It cannot say whether the information is correct or not, and there are several examples of users leading ChatGPT to give false information. Without further investigation, people cannot quickly tell whether an answer is true or false, so users should approach it with common sense and verify the results against independent sources. One example from February 2023 is a list of references on the topic of technology-enhanced learning design models: the references were created with the real names of academics and real article or journal titles, yet five out of six of them were non-existent.
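
To make the point about probability models concrete, here is a minimal toy sketch in Python. It illustrates weighted next-word sampling in general, not ChatGPT's actual architecture or data: the model chooses the next word by weighted chance, so a fluent but false continuation can appear whenever it carries some probability.

    import random

    # A language model assigns probabilities to possible next words given
    # the context, then samples one. Plausibility, not truth, drives the choice.
    next_word_probs = {
        "Paris": 0.85,     # the most probable continuation in the training data
        "Lyon": 0.10,
        "Atlantis": 0.05,  # false, but still carries non-zero probability
    }

    def sample_next_word(probs: dict) -> str:
        # Pick a word at random, weighted by its probability.
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    context = "The capital of France is"
    print(context, sample_next_word(next_word_probs))
    # Usually prints "Paris", but occasionally a false answer: the model
    # has no built-in notion of which continuation is actually true.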

We should also mention the problems with pictures or videos created from existing resources without respect for intellectual property rights or copyright, or, even worse, the use of those tools to intentionally create fake photos or videos of real people (Pope Francis in a puffer coat, Donald Trump being arrested).

Similar examples could be used in learning activities, with age-appropriate content of course, to discuss with students the ethical aspects of creating and using such AI works. Students could, for instance, generate essays and then complete a fact-checking assignment on them. Whenever primary or secondary school students use AI tools, it is very important that e-safety rules are followed, so that students' privacy and data are protected and students are not exposed to unwanted content.

You are a member of the European Commission's expert group on AI and data in education and training. Could you tell us a bit about the highlights from the ethical guidelines coming out of this group?

The Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators are designed to help educators understand the potential of AI and data use in education and to raise awareness of the possible risks, so that they are able to engage positively, critically and ethically with AI systems and to realise their full potential (see the animated presentation).

The guidelines cover common misconceptions about AI, examples of its use in education, ethical dimensions, key requirements for trustworthy AI, guidance for educators and school leaders, planning for the effective use of AI and data in schools, and raising awareness and community engagement.

The guiding questions in the guidelines can be used in different ways when reviewing an AI system prior to it being set up in a school or while it is being used. The questions can be asked of the educators themselves, of those making the decision at management level, or of the system providers.

The questions can also inform discussions with learners, parents and the wider school community. Below you can read some of the questions from each of the seven dimensions.

Additionally, you can find similar approaches and questions in the report Promises of AI in Education by SURF in the Netherlands and 7 key conditions for reliable AI by the Digisprong Knowledge Center in Belgium.

Human Agency and Oversight

  • Is the teacher's role clearly defined so as to ensure that there is a teacher in the loop while the AI system is being used? How does the AI system affect the didactical role of the teacher?
  • Are there monitoring systems in place to prevent overconfidence in or overreliance on the AI system?

Transparency

  • Are the instructions and information accessible and presented in a way that is clear both for teachers and learners?
  • Do teachers and school leaders understand how specific assessment or personalisation algorithms work within the AI system?

Diversity, Non-Discrimination and Fairness

  • Does the system provide appropriate interaction modes for learners with disabilities or special education needs? Is the AI system designed to treat learners respectfully, adapting to their individual needs?
  • Are there procedures in place to ensure that the use of AI will not lead to discrimination or unfair treatment of any user?

Societal and Environmental Wellbeing

  • How does the AI system affect the social and emotional wellbeing of learners and teachers?
  • Does use of the system create any harm or fear for individuals or for society?

Privacy and Data Governance

  • Are learners and teachers informed about what happens with their data, how it is used and for what purposes?
  • Is it possible to customise the privacy and data settings?

Technical Robustness and Safety

  • Is there a strategy to monitor and test if the AI system is meeting the goals, purposes and intended applications?

Accountability

  • How are the effectiveness and impact of the AI system evaluated, and how does this evaluation take into account the key values of education?