Artificial intelligence (AI) is rapidly becoming common in classrooms worldwide. Many educators are exploring technologies such as GPT-3, the family of large language models that underpins ChatGPT, to improve teaching and learning. Education technology firms, such as Noodle Factory, are actively developing tools that help educators tap into the power of conversational AI. While AI offers many exciting possibilities, educators need to be aware of the potential risks and ethical considerations that come with it.
One of the key features of this generation of AI that sets it apart from earlier technologies is its ability to generate human-like text and perform a wide range of language tasks. This makes it particularly useful for summarising course materials, generating personalised lesson plans, providing real-time feedback to students, and building tutoring flows that help explain complex concepts. However, AI models can perpetuate or amplify biases and incorrect information present in the data they are trained on, so it is critical to consider that training data carefully. For example, Noodle Factory focuses on allowing educators to repurpose content they already use to teach their courses. While this does not entirely eliminate the prospect of bad information making its way into a knowledge base, it significantly lowers the risk.
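To make the first of those use cases concrete, here is a minimal sketch of summarising educator-supplied material with a GPT-3 model via OpenAI's legacy Completions API. The model name, prompt wording, and course excerpt are illustrative assumptions rather than a description of Noodle Factory's implementation; the point is simply that the prompt restricts the model to content the educator already trusts.

```python
import os

import openai  # OpenAI Python SDK (v0.x, GPT-3-era Completions API)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Educator-supplied material; in practice this comes from an existing
# lesson or knowledge base rather than the open web.
course_excerpt = (
    "Photosynthesis converts light energy into chemical energy. Chlorophyll "
    "in the chloroplasts absorbs light, which drives the conversion of "
    "carbon dioxide and water into glucose and oxygen."
)

prompt = (
    "Summarise the following course material in three bullet points for a "
    "secondary-school student. Use only the material provided.\n\n"
    f"{course_excerpt}\n\nSummary:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 family model; swap in your own deployment
    prompt=prompt,
    max_tokens=200,
    temperature=0.2,  # a low temperature keeps the summary close to the source
)

print(response["choices"][0]["text"].strip())
```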
In addition to the potential benefits of AI, there are also risks that educators should be aware of. One of the main risks is the potential for misusing these tools, or for students to rely too heavily on them rather than developing their own critical thinking and problem-solving skills. This, along with the possibility of cheating, has prompted some schools and school districts, such as the New York City and Los Angeles school districts, to ban the use of ChatGPT on their networks. While I understand this type of response (many metaphors come to mind: the genie is out of the bottle, Pandora’s box has been opened, the ghost has exited the machine), it may be a better strategy to manage the use of these tools rather than ban them altogether. In fact, this is a conversation I had today with an investor while explaining our approach: using conversational AI, specifically GPT-3 powered tools, within the structured environment that Noodle Factory’s “Walter” platform provides, rather than leaving it to chance that students encounter the correct information and answers “in the wild”. I will focus more on this specific fear in Part IV of this series.
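As an illustration of what managing rather than banning these tools can look like, the sketch below constrains a GPT-3 model to educator-approved material and asks it to refuse questions that material does not cover. This is only a toy example of the structured-environment idea; the refusal phrasing, model name, and sample material are assumptions, and it is not how Walter itself is built.

```python
import os

import openai  # same GPT-3-era Completions API as the earlier sketch

openai.api_key = os.environ["OPENAI_API_KEY"]

REFUSAL = "I can't answer that from the course material."


def answer_from_course_material(question: str, approved_material: str) -> str:
    """Answer a student question using only educator-approved material.

    The instruction to refuse anything the material does not cover is the
    structured-environment idea in miniature: the model is steered back to
    vetted content instead of answering "in the wild".
    """
    prompt = (
        "Answer the student's question using only the course material below. "
        f"If the material does not cover the question, reply exactly: {REFUSAL}\n\n"
        f"Course material:\n{approved_material}\n\n"
        f"Question: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=150,
        temperature=0.0,  # deterministic, conservative answers
    )
    return response["choices"][0]["text"].strip()


material = "Newton's second law states that force equals mass times acceleration (F = ma)."
print(answer_from_course_material("What does Newton's second law say?", material))
print(answer_from_course_material("Who won the 2022 World Cup?", material))  # should refuse
```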
There is also a fear, which we sometimes hear expressed by clients, that increasing reliance on AI in education could have the knock-on effect of reducing the workforce needed to support education. In other words, the fear is that powerful AI tools could lead to the loss of jobs for educators, particularly in lower-skilled roles, and might replace the need for human teachers. Given the worldwide shortage of teachers and the current state of AI, this concern is not grounded in reality. Could AI replace teachers in the future? If we are strictly talking (hypothetically) about what might someday be possible, the answer is “yes”. However, if we are talking about the realistic near term, the answer is “no”, and we should all embrace ways in which AI can augment and assist educators, as well as support staff in relatively lower-skilled roles.
Privacy is another key concern when it comes to the use of AI in education. These tools often require access to substantial amounts of personal data, and it is crucial to ensure that appropriate safeguards are in place to protect student privacy. Educators should also be transparent about what data is collected and how it is used.
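One concrete safeguard, sketched below, is to strip obvious personal identifiers from text before it leaves the institution, for example before it is sent to an external AI API. The patterns and the student ID format are assumptions for illustration; a real deployment needs proper de-identification tooling and a data-protection review, not a handful of regular expressions.

```python
import re

# Illustrative patterns only; names and other free-text identifiers need
# dedicated de-identification tooling, not regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "student_id": re.compile(r"\bS\d{6,}\b"),  # assumed ID format, e.g. S1234567
}


def scrub_pii(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before the text
    is sent to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text


submission = (
    "Jane Doe (S2048113, jane.doe@school.example, +44 7700 900123) asked "
    "for feedback on her essay about the water cycle."
)
print(scrub_pii(submission))
```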
Finally, several ethical considerations must be addressed when using AI in education. These include the potential for these tools to be used to monitor or control student behaviour and the broader need to ensure that AI is used responsibly and ethically.
Overall, AI has the potential to revolutionise how we think about teaching and learning. However, it is essential for educators to be aware of the risks and to consider the ethical implications of this technology carefully. By doing so, we can ensure that AI is used to benefit students, educators, and society.