A recent article by Fast Company highlights the prevalence of “hallucinations” in chatbot responses, with research indicating that up to one in five citations generated by GPT-4 are fictitious. Despite this unreliability, chatbots have fueled a concerning trend in the education sector.
AI essay services advertising on TikTok and Meta
Essay mills, notorious for producing academic content in exchange for a fee, now openly tout a fusion of AI and human labor that they claim yields undetectable academic work.
Shockingly, a recent analysis published on the open-access repository arXiv reveals that these essay mills actively solicit clients on social media platforms like TikTok and Meta’s Facebook and Instagram, despite the practice being illegal in several countries, including England, Wales, Australia, and New Zealand.
Michael Veale, an associate professor in technology law at University College London and one of the study’s authors, has raised the alarm about the criminal liability platforms take on by advertising such services. He notes:
Platforms like TikTok and Meta are committing a criminal offense by advertising these systems because most of these laws contain explicit criminal provisions about advertising.
Veale made the discovery while examining ad archives provided by TikTok and Meta in response to the EU’s Digital Services Act, which aims to enhance transparency in advertising on major tech platforms.
Why is this a problem?
The ads in question were purchased by companies behind 11 different AI-based essay-writing services. Some of these services claim to refine text to give it an academic tone, while others offer to compose entire essays, provide citations, and run plagiarism checks using AI.
Many of these tools seem to be built on top of existing Large Language Models (LLMs), employing tailored prompts to extract the desired responses from chatbots.
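As a rough illustration of that pattern, such a tool can be little more than a fixed prompt wrapped around a general-purpose model API. The sketch below is hypothetical, not code from any service named in the report; it assumes the OpenAI Python SDK, and the prompt and model name are illustrative.

```python
# Hypothetical sketch of a prompt-wrapper "tool"; not code from any service
# named in the report. Assumes the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# The "tailored prompt": a fixed instruction that steers a general-purpose
# chatbot toward the register the service sells, here an academic tone.
SYSTEM_PROMPT = (
    "Rewrite the user's text in a formal academic register. "
    "Preserve the meaning, but improve the structure and vocabulary."
)

def academic_tone(text: str, model: str = "gpt-4") -> str:
    """Send user text through the fixed prompt and return the rewrite."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The wrapper contributes little beyond the canned prompt; the underlying model does the actual writing.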
TikTok responded promptly by removing the highlighted videos and banning the accounts responsible for breaching its advertising policies.
A TikTok spokesperson clarified that while ads promoting AI applications like ChatGPT are allowed under certain circumstances, they do not tolerate misleading or dishonest advertisements.
Meta, by contrast, did not respond to inquiries, and some ads named in the report remain visible, albeit inactive, on its ad transparency platform.
The critical question is now how these social media platforms will handle potentially illegal advertisements. Veale underscores the vagueness of the laws, noting that platforms like Meta and TikTok must decide for themselves how far to enforce their provisions.
In theory, some broadly worded legislation could even outlaw general-purpose AI systems or assistive tools not dissimilar to autocorrect.
Possible implications of essay mills
The implications of this issue are significant, as both students and contract-cheating providers increasingly turn to AI.
Thomas Lancaster, an academic integrity specialist at Imperial College London, asserts that “AI is just standard technology,” emphasizing its growing ubiquity.
As AI continues to integrate into education, tackling the proliferation of AI-powered cheating services through advertisement takedowns may prove to be a futile endeavor.
The Fast Company article suggests that the use of AI for academic dishonesty is difficult to reliably detect and curb, raising concerns about the integrity of education systems worldwide.
In conclusion, the convergence of AI and academic cheating on social media platforms like TikTok and Meta underscores the need for a concerted effort to address this issue. As technology advances, ensuring academic integrity becomes an increasingly complex challenge, necessitating cooperation among educators, tech companies, and policymakers to maintain the credibility of educational institutions.
Fast Company’s investigation serves as a crucial wake-up call to confront this issue head-on and preserve the integrity of education in the digital age.
How do teachers deal with AI cheating?
AI cheating, where students use AI tools or services to complete assignments and exams dishonestly, is a growing concern for educators.
That’s why, for example, some schools and universities are investing in AI-powered plagiarism-detection tools that aim to identify papers or assignments generated with the help of AI. Some educators also pay closer attention to the quality and consistency of students’ work: a sudden improvement or a significant shift in writing style can be an indicator of AI use (a rough sketch of one such detection signal appears after this list).
Moreover, teachers are reevaluating their assignment designs to make them less susceptible to AI cheating. Assignments that require critical thinking, analysis, and personal reflection, rather than simple regurgitation of information, are harder for students to outsource to AI.
And finally, schools and universities are communicating their policies on academic integrity clearly to students, emphasizing the consequences of AI cheating and promoting a culture of academic honesty.
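Commercial detectors do not disclose their methods, but one widely discussed signal is statistical: text sampled from a language model tends to have lower perplexity under a similar model than human prose does. The sketch below, a naive heuristic using the Hugging Face transformers library and GPT-2, illustrates that idea only; it is nowhere near reliable enough to serve as evidence against a student.

```python
# Naive illustration of a perplexity heuristic; not a production detector.
# Assumes the Hugging Face transformers and torch packages.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; model-generated prose often scores lower."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model compute its own cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# A sharp perplexity drop between a student's earlier writing and a suspect
# passage is, at best, a prompt for a conversation, never proof of misconduct.
```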
However, there are teachers who fully embrace the use of LLMs, such as ChatGPT, in the classroom, as reported by NPR.
For example, Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School, believes that integrating ChatGPT into the classroom isn’t necessarily a bad idea, emphasizing the need for humans and AI to coexist and adapt to the changing educational landscape.
“The truth is, I probably couldn’t have stopped them even if I didn’t require it (…) After all, we taught people how to do math in a world with calculators,” Mollick said in an interview with NPR.
Opposing views persist in the debate over using ChatGPT in academic work. To maintain academic integrity, any use of ChatGPT or similar AI tools in academic assignments should be transparently disclosed, ensuring fairness and honesty in education. Undisclosed use is considered a breach of academic integrity and constitutes fraud.