AI education will help people understand the risks, limitations and opportunities of the technology
Former judge Kay Firth-Butterfield began thinking early about how humans might live and work with artificial intelligence (AI). She’s a senior research fellow at the University of Texas, investigating the use and governance of AI. She became the world’s first AI ethics officer, at Lucid Holdings LLC, in 2014 and is a leading expert on responsible AI. For more than five years she headed AI and Machine Learning at the World Economic Forum, where she was charged with helping to steer nations and businesses towards responsible use of the new technologies. She also serves on an advisory council for the US administration and on an advisory board for UNESCO.
Today she’s chief executive of Good Tech Advisory, which works with governments, charities, businesses and academia to help implement responsible and productive use of AI while remaining legally compliant. Long recognised as a leading woman in AI governance, she received a TIME100 Impact Award in February 2024.
She spoke to BOLD about the challenges faced by schools and universities, students and teachers as they grapple with the advance of AI.
Helena Pozniak: Is the use of AI within education inevitable?
Kay Firth-Butterfield: Yes – you can’t step back from it now. Students are going to be using it for their homework, so we must focus on how to make it safe for them to use rather than banning it. Generative AI is making information on the internet more accessible; as it gets better, it’s effectively the brains sitting next to you. But getting it right is critical – our children need to be educated to work and live with artificial intelligence, and it’s humans who should be in charge. There is a great deal of bias in large language models, so it’s essential that all users are trained to understand what the machine can do for us – and its limitations.
HP: What are the dangers of AI for children?
KFB: One of the things I worry about is that children form their beliefs, values and attitudes before they’re seven years old, so we must think carefully about policies for the early years.
We really need to understand the impact of the educational toys we give them. Computers are arguably better at influencing, nudging and manipulating behaviour than humans are. We must also know where children’s data are, whether devices can be hacked, and whether children can be identified.
Most AI toys for young children – such as ‘smart dolls’ – are made in China. If you want a connected toy to have ‘conversations’ with your child, the toy will have to collect data from interactions. Where is that information stored and is it secure? We don’t know, so there are huge issues of data privacy.
We also need to have a conversation about the extent to which we are prepared to allow tech to ‘look after’ our children. What if this connected doll becomes a child’s best friend – but then ‘dies’? How will the child respond? Will such a ‘death’ be more difficult for the child than when a teddy bear falls to pieces? What if your best friend is a machine? Is interacting with these machines preparation for the future? We don’t know yet, but we’re testing this on the most vulnerable among us: children. This brings us back to the need for widespread education on AI so parents can make informed decisions about the toys and tools their children use.
HP: What about older children?
KFB: AI must be considered at all levels. We are educating children for the future and potentially for multiple careers. They must be equipped to get the best out of technology as it changes. And we must educate everyone about AI so we can really engage in the debate about what future we want our kids and grandkids to have.
HP: We already know about AI – why do we need AI education?
KFB: One of the greatest problems is that the capabilities of AI are outstripping almost everyone’s understanding of it. When AI is used in education, in hospitals, in our voting systems, people don’t necessarily understand what is happening. It’s terribly important that everyone – teachers especially – understands, and teachers urgently require training. We are also seeing increasing mistrust of AI. Education will help people know what they should be wary of and what they can safely use.
HP: What are the fundamental components of a responsible AI policy in schools?
KFB: One of the first actions would be to educate children to understand what interaction with a generative AI model means. Schools must also ask: is AI increasing your knowledge or just making you lazier? If children are going to learn anything about AI, they must learn to use it properly.
It’s fun for kids to interact with AI, but what does that mean in terms of privacy and data? Where and how is the information stored? Students need to be aware that some of these tools can be hacked, and schools need to install guardrails, particularly around data and privacy.
We’ve seen this backfire in the corporate world: in April 2023, Samsung engineers in South Korea uploaded sensitive code to ChatGPT, prompting the firm to ban the use of generative AI on its devices and internal networks, and some US banks have restricted its use as well. Any generative model that trains on internet data may end up using the data you upload.
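For readers wondering what such a guardrail might look like in practice, here is a minimal, purely illustrative Python sketch: a redaction layer that strips obvious personal data from a prompt before it leaves the school’s network. The patterns and the student-ID format are assumptions made for the example, not a production-grade detector of personal information.

```python
import re

# Illustrative guardrail: redact obvious personal data from a prompt
# before it is sent to an external generative AI service.
# These patterns are examples only, not an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "STUDENT_ID": re.compile(r"\bS\d{6}\b"),  # hypothetical ID format
}

def redact(prompt: str) -> str:
    """Replace each match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Please mark the essay by S123456; email jane.doe@school.org"))
# -> "Please mark the essay by [STUDENT_ID]; email [EMAIL]"
```

A real deployment would pair a filter like this with clear policies on which tools are approved and what data may never be submitted at all.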
Students must be aware they may receive misinformation from ‘hallucinations’ [when a large language model generates plausible-sounding but false output] and ‘cannibalism’ [when an AI ‘learns’ from AI-generated data, creating a potentially poor-quality feedback loop].
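The ‘cannibalism’ Firth-Butterfield describes is known in the research literature as model collapse. A toy Python simulation can show the feedback loop: here the ‘model’ is just a fitted normal distribution, and the two-sigma cut-off is an artificial stand-in for the way generative models under-represent rare data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: 'real' data from a standard normal distribution.
data = rng.normal(0.0, 1.0, size=10_000)

for generation in range(1, 6):
    # 'Train' the next model on the previous generation's output.
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(mu, sigma, size=10_000)
    # Crudely mimic how generative models under-represent rare cases
    # by dropping everything beyond two standard deviations.
    data = samples[np.abs(samples - mu) < 2 * sigma]
    print(f"generation {generation}: spread = {data.std():.3f}")
```

The spread of the data shrinks with every generation – a statistical caricature of the poor-quality feedback loop that arises when models learn mainly from other models’ output.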
Parents also need to understand how AI is being used. But the final decision on AI must rest with schools rather than parents, who mustn’t be allowed to dictate content. We’ve already seen some schools in the US ban certain books, and this leaves teachers feeling beleaguered.
Schools also need to know that AI can be used as a form of bullying – for example, using generative AI to create deepfake pornographic images of fellow students. Policing this places extra burdens on schools.
HP: What about the future? Are you optimistic?
KFB: I believe in the power of AI for good. I wouldn’t be working to get it right if I didn’t think it was worth doing. I’m very optimistic that if we install the right guardrails, and understand that it’s not a magic wand to make everything easier or better, there’s a huge potential to do great things for human beings. But we need some really novel thinking about education for tomorrow.
HP: What impact will AI have on teachers’ roles? What support do they need?
KFB: Teachers urgently need help, support and training.
Students are going to use AI if and when they can, and banning it isn’t the answer. We’ve been talking about flipped classrooms for years. Rather than delivering content, teachers can use class time to challenge their students and encourage critical and analytical thinking. Unless you test what students have told you in an essay – for which they probably got help from generative AI – you are no longer teaching them.
HP: Could AI ease teachers’ workloads and even help with the recruitment crisis?
KFB: One of my great hopes is that we can get AI right in education. There are already AI tools to help with marking, certainly for science and maths, and I could see generative AI being trained to help mark humanities work as well – although it’s worth noting that the EU AI Act classifies the use of AI for grading as high-risk. I also hope that AI can help with administrative tasks, the ‘drudgery’ of teaching, which would free up time to interact with pupils. Of course that’s important in the UK, for example, but it’s extraordinarily important if it can reach the Global South, where teachers may be responsible for classes of as many as 60 children of different ages and abilities.
We also need a complete change of thinking about how and what humans are going to need to learn. Children need to learn how to think critically about facts and how to analyse them. In the age of deepfakes and misinformation, I do not think we should reduce children’s education to remembering things; rather, we should concentrate on giving them the tools to question.
HP: Where are schools and universities, in your opinion, in their AI adoption?
KFB: Most universities have already introduced an AI policy. Students are using AI, and professors are learning how to teach in a world with AI. But it’s much easier for universities, which have more academic freedom than schools. As machines become more and more capable, it’s what makes us human that will be important, and that implies concentrating on the humanities (especially for the scientists creating the AI).
HP: Are we making too much fuss?
KFB: No – I think we need to make the fuss now, because we are making fundamental decisions: at what age will we allow our children to have smart toys? What is their role in socialisation? How will AI and humans work together in the future? Is this the future we want? Really understanding what humans want from AI is perhaps more relevant than worrying about AI becoming superintelligent. We need to think about how much AI will dominate our future, and how much we will own it. Education is the starting point for this conversation, but it is a conversation we need to have now.