Ensuring Authenticity: A Powerful Detector for AI-Generated Text in Chemistry Papers

Content by:

Atir Naeem Qurashi


The concern stems from the potential for AI text generators to introduce misinformation, compromising the integrity of the scientific record when they are used without due diligence. The development of effective methods to detect AI involvement in scientific papers is therefore crucial. Existing tools, however, have limitations; for example, they can misclassify text written by non-native English speakers.


Heather Desaire and her team from the University of Kansas in Lawrence, USA, have designed an AI detector tailored to articles from chemistry journals. The objective is to distinguish human writing from content generated by both ChatGPT’s GPT-3.5 and GPT-4 versions, including text produced with prompts strategically crafted to conceal the involvement of AI.

To develop their classification model, the researchers drew the training set from ten chemistry journals, extracting the introduction sections of ten articles per journal to amass a total of 100 instances of human writing. For each of these instances, two distinct “AI versions” were generated, one with a prompt derived from the title of the corresponding paper and one with a prompt derived from its abstract.
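The prompt-construction step can be sketched as follows. The exact wording the authors fed to ChatGPT is not reproduced in this summary, so the two templates below are illustrative assumptions, not the study's prompts:

```python
def make_prompts(title: str, abstract: str) -> dict:
    """Build two illustrative prompts per human-written introduction.

    The study generated one AI version from the paper's title and one
    from its abstract; the wording below is a guess, not the authors'.
    """
    return {
        "from_title": (
            "Write the introduction section of a chemistry paper "
            f"titled: {title}"
        ),
        "from_abstract": (
            "Write the introduction section of a chemistry paper "
            f"with the following abstract: {abstract}"
        ),
    }
```

Running the generated prompts through GPT-3.5 and GPT-4 would then yield the 200 AI-written counterparts to the 100 human samples.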

For every paragraph within the resulting writing samples, the team extracted 20 features covering text complexity, sentence length variability, punctuation usage, and the frequency of specific words characteristic of either human writers or ChatGPT. This data was then used to train an XGBoost model, a gradient-boosted decision-tree classifier, capable of classifying writing samples beyond the confines of the training set.
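The full list of 20 features is not given in this summary, so the function below is only a sketch of the same kinds of stylometric signals (sentence-length variability, punctuation use, marker-word frequency), built with the standard library; the specific features and the choice of "however" as a marker word are assumptions, not the paper's feature set:

```python
import re
import statistics

def paragraph_features(paragraph: str) -> dict:
    """Compute representative per-paragraph stylometric features.

    Illustrative only: the study used 20 features, which are not
    reproduced here; these examples are of the same general kinds.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    words = paragraph.split()
    n_words = max(len(words), 1)
    lengths = [len(s.split()) for s in sentences]
    return {
        "n_sentences": len(sentences),
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "semicolons_per_word": paragraph.count(";") / n_words,
        "parens_per_word": paragraph.count("(") / n_words,
        # Treating "however" as a ChatGPT marker word is an assumption.
        "however_per_word": sum(
            w.lower().strip(".,;:") == "however" for w in words
        ) / n_words,
    }
```

Feature vectors of this shape would then be fed to a gradient-boosted classifier such as `xgboost.XGBClassifier`, fit on the labeled human and AI paragraphs.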

The team’s model was then tested on articles from a different issue of the same journals used in the training set. It accurately classified 94% of human-written text, 98% of AI-generated text produced from abstracts, and 100% of AI-generated text produced from titles. Compared to other leading AI detectors, the new model demonstrated superior performance in identifying AI-generated texts.

The evaluation was then expanded to chemistry papers from other journals and publishers, with newspaper articles as a comparison. New prompts aimed at concealing AI usage were employed, instructing ChatGPT to write like a chemist or to use technical language. The model correctly classified between 92% and 98% of chemistry articles, even with the new prompts, while human-written newspaper articles were often misclassified. This shows the detector’s effectiveness for academic scientific writing, though adapting it to other text types would require re-engineering the feature set and retraining the model.

