ChatGPT Used in Thousands of Scientific Studies: Here’s the Proof
ChatGPT: A Revolution in Scientific Studies
ChatGPT appears to have been used in the writing of thousands of scientific studies, a long-standing suspicion that now looks much closer to a certainty thanks to a study conducted by Andrew Gray, a Scottish librarian at University College London.
Gray analyzed the abstracts of some 5 million scientific studies published in 2023, a painstaking piece of work on an unusual scale.
The Influence of ChatGPT in Scientific Language
A suspicious increase in the use of specific terms in scientific studies, such as “commendable,” “meticulously,” “intricate,” and “meticulous,” all words the chatbot is known to favor, hints at the systematic use of ChatGPT in these research papers.
According to Gray, the surge in these words, which in some cases exceeded 100% compared with the previous year, can only be explained by systematic reliance on ChatGPT.
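To make that comparison concrete, here is a minimal sketch, not Gray’s actual pipeline, of how a year-over-year change in word frequency across two corpora of abstracts could be computed in Python; the word list comes from the article, while the sample abstracts and function names are illustrative assumptions.

```python
from collections import Counter
import re

# Words reportedly favored by ChatGPT (as cited in the article).
TRACKED_WORDS = ["commendable", "meticulously", "intricate", "meticulous"]

def word_rates(abstracts):
    """Occurrences of each tracked word per million words of abstract text."""
    counts = Counter()
    total_words = 0
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        total_words += len(tokens)
        for word in TRACKED_WORDS:
            counts[word] += tokens.count(word)
    return {w: counts[w] / total_words * 1_000_000 for w in TRACKED_WORDS}

def percent_change(rates_prev, rates_curr):
    """Year-over-year percentage change in each word's usage rate."""
    return {w: (rates_curr[w] - rates_prev[w]) / rates_prev[w] * 100
            for w in TRACKED_WORDS if rates_prev[w] > 0}

# Illustrative usage with two tiny, made-up corpora of abstracts:
abstracts_2022 = ["We study an intricate network of interactions in cells.",
                  "Results were analysed with standard statistical methods."]
abstracts_2023 = ["We meticulously analyse an intricate and commendable dataset.",
                  "A meticulous evaluation of the intricate model is presented."]

print(percent_change(word_rates(abstracts_2022), word_rates(abstracts_2023)))
```

A real analysis of this kind would, of course, run over millions of abstracts and control for overall publication growth; the sketch only shows the basic rate comparison.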
Impact and Implications
The findings of Gray’s research, involving studies published in the world’s most prestigious scientific journals, raise concerns.
It is no coincidence that both Italy and the EU have recently begun to legislate on the complex topic of artificial intelligence.
As early as January 2023, shortly after the software’s launch, an article by Focus highlighted that ChatGPT was capable of writing scientific articles.
Gray’s research suggests that ChatGPT may have been involved in at least 60,000 scientific studies, over 1% of those analyzed in 2023.
While these studies were not entirely produced by the chatbot, authors often sought the AI’s assistance in translating texts and revising their writing.
Gray pointed out a gray area where some scientists rely heavily on ChatGPT’s assistance without verifying the text’s accuracy.
This lack of transparency on the actual usage of ChatGPT poses a significant challenge.
Language Influence and Future Concerns
Gray’s analysis revealed a sharp increase in the use of terms such as “meticulously” (up 137%), “intricate” (up 117%), “commendable” (up 83%), and “meticulous” (up 59%) compared with the previous year, a pattern consistent with AI-generated or AI-edited content.
The risk that broader society will be influenced by this artificially meticulous language is significant.
Studies have shown that AI language models tend to disproportionately use certain words with positive connotations, potentially leading to linguistic and even cognitive homogenization.
In conclusion, the issue extends beyond the use of AI in scientific studies to a potential homogenization of language and thought patterns, reminiscent of the Newspeak George Orwell imagined in Nineteen Eighty-Four.