Cheat Codex
And then I did what an increasing number of us are doing: I turned to ChatGPT, OpenAI's much-hyped generative AI software, to help me out. After prompting it with some of my previous work, I asked about the use of AI in education.
AI is already doing big things in education. By crunching massive amounts of data on student performance, AI algorithms can tailor instruction to fit the needs of individual learners, which can mean big improvements in student outcomes. Chatbots and virtual assistants can provide students with on-the-spot assistance and feedback. Who needs a tutor when you have an intelligent assistant?
But here’s where things get really exciting: language models like ChatGPT can actually generate human-like text, which makes them perfect for a whole bunch of educational applications. These models can create interactive learning materials, like chatbots that answer students’ questions or create personalized quizzes. They can even generate summaries of complex texts, reports on data sets, or entire essays and research papers.
There are definitely some challenges as well. One of the biggest concerns is the risk of bias in the algorithms. We need to make sure these algorithms are designed and tested in a way that ensures all students get a fair shake. Another, of course, is the potential for cheating.
But the bottom line is that AI and language models like ChatGPT are going to change the way we think about education, and we need to make sure we’re using these tools in ways that are ethical, equitable, and effective.
So are the preceding four paragraphs, which were generated by ChatGPT and then lightly edited by me, ethical? If they were presented as my own work without an explicit disclosure (like this one), I would argue that the answer is no. And even with such a disclosure, we’re still in a bit of a gray area—there are all sorts of questions about everything from plagiarism to accuracy to the data these models were trained on.
The reality is that we are in entirely new territory when it comes to the use of AI in education, and it is far from clear what that will mean. The world has changed, and there's no going back.
As Will Douglas Heaven, our senior editor for AI, makes clear in this issue's cover story, technologies like ChatGPT will have all sorts of genuinely useful and transformative applications in the classroom. Yes, they will almost certainly also be used for cheating. But banishing these kinds of technologies from the classroom, rather than trying to harness them, is shortsighted. Rohan Mehta, a 17-year-old high school student in Pennsylvania, makes a similar argument, suggesting that the path forward starts with a show of faith: letting students experiment with the tool.