Ofsted will train inspectors on artificial intelligence use and explore how the technology can help the watchdog to make “better decisions”.
The government asked regulators and agencies to set out their strategic approach to AI by the end of April.
In its response, published today, Ofsted said it already used AI, including in its risk assessment of ‘good’ education providers, to help decide whether to carry out full graded inspections or short ungraded visits.
But Ofsted is “also exploring how AI can help us to make better decisions based on the information we hold”, to work “more efficiently” and to “further improve” how it inspects and regulates.
The biggest benefits from AI could include assessing risk, working more efficiently through automation and making best use of the data it holds, particularly text.
It will also “develop inspectors’ knowledge” about the technology so they “have the knowledge and skills to consider AI and its different uses”.
Ofsted won’t inspect AI tool quality

Ofsted said it supported the use of AI by education providers where it improves the care and education of learners.
When inspecting, it will “consider a provider’s use” of AI “by the effect it has on the criteria set out” in its existing inspection frameworks.
But, “importantly”, it will not directly inspect the quality of AI tools.
“It is through their application that they affect areas of provision and outcomes such as safeguarding and the quality of education,” Ofsted said.
“Leaders, therefore, are responsible for ensuring that the use of AI does not have a detrimental effect on those outcomes, the quality of their provision or decisions they take.”
Ofsted warned that the effect of the new technology on children is still “poorly understood”, so it will try to better understand providers’ use of AI and the research on its impact.
“By better understanding the effect of AI in these settings, we can consider providers’ decisions more effectively as part of our inspection and regulatory activity.”
‘Modest number’ of AI malpractice cases

Exams regulator Ofqual said there had been “modest numbers” of AI malpractice cases in coursework, with some leading to sanctions against students.
In its evidence, also published today, the regulator said it would add AI-specific categories for exam boards to report malpractice.
It has also requested “detailed information” from boards on how they are managing AI-related malpractice risks.
The regulator has adopted a “precautionary” approach to AI use, but remains open to new, compliant innovations.
But Ofqual told exam boards last year that using AI as the sole marker of students’ work does not comply with its regulations, and that using the technology as the sole form of remote invigilation is also “unlikely” to be compliant.
It has launched an “innovation service” to help exam boards understand how their innovations meet regulatory requirements.