ChatGPT creator OpenAI unveils tool to detect AI-written text

MMM DD YYYY
WORDS BY MARCUS LEE
It isn’t fully accurate (yet), but the newly released AI text classifier should at least help mitigate problems of academic dishonesty and misinformation.

With ChatGPT reaching over one million users within a week of its launch back in December, its proliferation has begun to disrupt the education sector at large. Reports of students using ChatGPT to instantly generate entire essays have become such a noticeable problem that New York City has blocked ChatGPT access for students and teachers at public schools.
 

Aside from obvious concerns about cheating, the nature of ChatGPT, a large language model built on a vast corpus of existing text, could mean that outputs are blatantly plagiarized or reproduce inaccuracies from its training data. Grassroots efforts like GPTZero, built by Princeton University student Edward Tian, have emerged to help educators combat the problem of AI plagiarism.

 

In response to these emerging issues, OpenAI has released its own classifier, capable of distinguishing human-written text from AI-generated text produced by a variety of AI providers.

 

However, the classifier does come with a few caveats.

 

  • OpenAI states the classifier is very unreliable for texts below 1,000 characters (see the length-check sketch after this list)
  • It is currently limited to English; it is unreliable on other languages and on code
  • Highly predictable text (like math equations or lists) cannot be reliably identified
  • AI-written text can be edited to evade the classifier
  • Text that falls outside its training data often results in a wrong prediction
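
As a concrete illustration of the first caveat, the rough Python sketch below shows a simple length pre-check before handing text to the classifier. The 1,000-character floor is taken from OpenAI's stated limitation; the submit_to_classifier function is a hypothetical placeholder, not a real OpenAI API call.

```python
# Minimal sketch, assuming only the 1,000-character caveat above:
# a pre-check on text length before sending anything to the classifier.
# submit_to_classifier is a hypothetical placeholder, not a real OpenAI API.

MIN_CHARS = 1000  # below this length, OpenAI says results are very unreliable


def ready_for_classification(text: str) -> bool:
    """Return True only if the text is long enough to classify meaningfully."""
    return len(text) >= MIN_CHARS


def submit_to_classifier(text: str) -> str:
    # Placeholder for whatever interface the classifier actually exposes.
    raise NotImplementedError("hypothetical submission step")


essay = "A short sample essay..."  # example input
if ready_for_classification(essay):
    verdict = submit_to_classifier(essay)
    print(verdict)
else:
    print(f"Text is only {len(essay)} characters; "
          f"the classifier is unreliable below {MIN_CHARS}.")
```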

 

OpenAI warns that the current iteration of the classifier should not be relied upon as the primary method of determining the source of a piece of text, but should instead serve as a complement to other evaluation methods.

 

In a set of benchmark assessments, the classifier correctly identified only 26% of AI-generated text as “likely AI-written”, while incorrectly labeling human-written text as AI-written 9% of the time.
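
To make those percentages concrete, the rough Python sketch below works through what they would imply for a hypothetical, evenly split batch of essays. The batch sizes are assumptions chosen for easy arithmetic, not figures from OpenAI.

```python
# Back-of-the-envelope illustration of the reported rates. The batch sizes
# below are assumptions chosen only to make the arithmetic concrete; the
# 26% and 9% figures come from OpenAI's own evaluation.

TRUE_POSITIVE_RATE = 0.26   # AI-written text correctly flagged as "likely AI-written"
FALSE_POSITIVE_RATE = 0.09  # human-written text wrongly flagged

ai_essays = 100     # hypothetical number of AI-generated submissions
human_essays = 100  # hypothetical number of human-written submissions

flagged_ai = ai_essays * TRUE_POSITIVE_RATE         # about 26 caught
flagged_human = human_essays * FALSE_POSITIVE_RATE  # about 9 wrongly accused

print(f"AI-written essays flagged: {flagged_ai:.0f} of {ai_essays}")
print(f"Human-written essays wrongly flagged: {flagged_human:.0f} of {human_essays}")
print(f"Share of flags that are genuinely AI-written: "
      f"{flagged_ai / (flagged_ai + flagged_human):.0%}")
```

Under those assumptions, roughly one flag in four would land on a human writer, which underlines why OpenAI cautions against treating the classifier's verdict as decisive on its own.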

 

So what's the big deal?

While its current iteration will not be able to detect all AI-generated text, well-trained classifiers should eventually play a key role in addressing the most pressing forms of abuse: AI-powered academic dishonesty, the spread of false information through misinformation campaigns, and attempts to pass off AI chatbots as actual humans.

 

To supplement these efforts, OpenAI has provided a preliminary resource to help educators learn more about the capabilities, limitations, and ethical considerations of using ChatGPT in the classroom.
 

OpenAI highlights how using both ChatGPT and its accompanying classifier can provide a more streamlined and personalized teaching experience. Drafting lesson plans, designing quiz questions, and tailoring materials to individual students are among the examples given.

 

If our future is inevitably linked with AI, OpenAI believes it will ultimately be necessary for members of academia to learn how to navigate a world where tools like ChatGPT are commonplace. This will include new skills such as using a language model effectively, as well as making proper disclosures and avoiding over-reliance.