An undergraduate student writing for the Stanford Review in January 2023 confirmed the rise in cheating via "CheatGPT" at Stanford University:
"For those of you who don’t believe the effectiveness of Chat GPT for writing essays, perhaps you would be persuaded by the fact that it actually wrote this entire article from the prompt 'Write a contrarian article about Stanford students using Chat GPT to cheat.' Even this self-aware section was penned by a machine. Despite the style not fitting my usual way of writing, and the writing being a bit clunky, its readability should be a warning to professors and graders everywhere." ~Thomas Adamo, Stanford Review, Opinion
A qualitative instrumental case study published earlier this year demonstrated the ethical dilemma facing educators who cannot tell whether students are actually using chatbots to write their essays: "ChatGPT has proven that it can help students write essays and answer short-answer and multiple-choice exam questions, hence facilitating cheating. However, the most critical issue to pay attention to is that students can even get away with playing the system." The study ran a side-by-side comparison on an earlier model, GPT-2, scanning essays for human versus AI authorship. After just one word in an essay was changed, the detection tool reported only a 24% probability that the essay was AI-written, when in fact the entire essay was AI-authored.
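The fragility the study describes can be illustrated with a toy example. The snippet below is a hypothetical sketch, not the detector the study actually used: it scores a passage by its overlap with a made-up list of "AI-typical" words, purely to show how a single-word edit can noticeably shift a score computed over a short text.

```python
# Hypothetical illustration only: a toy "AI-likeness" score based on a
# made-up reference set of words. Real detectors use model statistics
# such as perplexity, but the fragility is analogous on short passages.

AI_REFERENCE = {"moreover", "furthermore", "utilize", "delve", "overall"}

def toy_ai_score(text: str) -> float:
    """Fraction of words that match the 'AI-typical' reference set."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(w.strip(".,") in AI_REFERENCE for w in words)
    return hits / len(words)

original = "Moreover the essay will delve into the topic overall"
edited   = "Moreover the essay will look into the topic overall"  # one word changed

print(toy_ai_score(original))  # 3 of 9 words flagged
print(toy_ai_score(edited))    # 2 of 9 words flagged
```

Because the score is averaged over so few tokens, swapping a single word moves it substantially, which mirrors how one-word changes can drag a detector's reported probability well below a decision threshold.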
OpenAI's general FAQs acknowledge that ChatGPT is prone to factual errors, since the chatbot is "not connected to the internet." The chatbot relies on the human user to "thumbs down" an answer if they find any inaccurately generated content.
Here is an example of a question I posed to ChatGPT that, as a librarian familiar with the local history documents, I knew the chatbot could not answer one way or the other.