ChatGPT and nutrition research: Academics stress the need to disclose use, journals to step up monitoring

By Tingmin Koe

AI tools such as ChatGPT have become the talk of the town since the chatbot's launch in November last year. ©Getty Images

A group of scientists has published a list of ‘best practices’ for using AI and related tools such as ChatGPT when writing research manuscripts, while the editor-in-chief of a scientific journal told us there are plans to develop tools to identify whether a manuscript has been written with the help of AI.

ChatGPT, an AI model developed by OpenAI based on its GPT-3.5 series of Generative Pre-trained Transformer models, has taken the research community by storm since it was launched last November.

To use it, one types in a question, a statement, or any other text, and the AI model generates a response. As of March, the service had been made available in 15 languages, including English, French, Indonesian, Mandarin, and Japanese.
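For readers who want to experiment beyond the web interface the article describes, the same family of models can also be queried programmatically. The sketch below is an illustration only, using OpenAI's Python client as it existed in early 2023 (the pre-1.0 ChatCompletion interface); the API key placeholder and the example prompt are assumptions for illustration, not part of the article.

```python
# A minimal sketch of querying ChatGPT's underlying model via OpenAI's Python
# client (pre-1.0 "ChatCompletion" interface, current as of early 2023).
# The API key and the prompt below are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the chat model OpenAI exposed alongside ChatGPT
    messages=[
        {"role": "user",
         "content": "Summarise the evidence linking vitamin D intake to immune function."},
    ],
)

# The reply arrives as a list of choices; print the text of the first one.
print(response.choices[0].message.content)
```

As the best-practice list below stresses, any output from such a call would still need to be verified, and its use disclosed, before it appears in a manuscript.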

On March 14, OpenAI released GPT-4, an upgraded model that can accept images as inputs and generate captions, classifications, and analyses, and that can handle over 25,000 words of text. Other similar AI tools have since entered the market, including Chinese search engine Baidu's Ernie Bot, which was launched on March 16.

In response to the new phenomenon, a group of scientific researchers has come up with a list of seven ‘best practices’ for using AI and ChatGPT when writing manuscripts.

Published in ACS Nano, the list was written by a total of 44 researchers, including high-profile figures such as Professor Ajay K Sood, principal scientific adviser to the government of India.

First on the list is to acknowledge the use of an AI bot such as ChatGPT when preparing the manuscript, clearly indicating the parts where AI was used, and providing the prompts and questions put to the AI in the Supporting Information section of the manuscript.

Second is to check the accuracy of the output. Third is to refrain from using text verbatim from ChatGPT, as the bot might have reused text from other sources, leading to inadvertent plagiarism.

Fourth, citations recommended by the bot need to be verified against the original literature. Fifth, the bot should not be listed as a co-author, as it “cannot generate new ideas or compose a discussion based on new results, as that is our domain as humans”.

Sixth, the bot cannot be held accountable for any statement or ethical breach; the responsibility ultimately lies with the researchers who wrote the paper.

Lastly, the authors emphasised that researchers' creativity should not be limited by the bot.

“It is important to state that even among the authors here, there is a diversity of thought and opinion, and this editorial reflects the middle ground consensus.

“In its current incarnation, ChatGPT is merely an efficient language bot that generates text by linguistic connections. It is, at present, ‘just a giant autocomplete machine.’

“AI tools are adequate for regurgitating conventional wisdom but not for identifying or generating unique outcomes,” said the authors.

They added that ChatGPT was built on information gathered before 2021, which restricts its utility when it comes to providing up-to-date information.

Major publisher in the midst of preparing policy

Speaking to NutraIngredients-Asia, Professor Raju Vaishya, a senior orthopaedic consultant, joint replacement surgeon, and the editor-in-chief of the Journal of Orthopaedics, said the journal’s publisher, Elsevier, was in the process of preparing a policy on the use of AI.

“It has not come to our knowledge as yet,” he said, when asked if he was aware of manuscripts submitted to his journal that were written with the help of AI.

“But our publisher Elsevier is in the process of preparing a policy for all their journals, so that we can make it mandatory for authors to submit this disclosure or acknowledgement.

“Secondly, they may develop some tools which can identify whether the script has been written with the help of AI-based tools.”

He said that the guidelines were expected to be published in the coming weeks.

He believes that AI could be useful for gathering knowledge about a specific topic, but that researchers should not rely on it wholesale, adding that there is a need to cross-check the information gathered.

“Most of the journals or editors are highly concerned about the obligation of such knowledge, where scientists are not actively involved, and the research may be inaccurate, or they don't have any personal input into the knowledge in the paper,” he said.

In the event that AI is used, he pointed to the need to cross-check and ask specific questions.

“They need to cross-check and ask very specific questions to ChatGPT, and moreover, since ChatGPT cannot be considered as an author, the authors must disclose in their manuscript if they've used the help of ChatGPT or similar applications.

“This is so that the readers, reviewers, and editors are well versed with the scientific nitty-gritty of the knowledge that has been submitted in the manuscript.

“Human intelligence is more than artificial intelligence because, you know, you can raise questions in your mind which you can answer but artificial intelligence can't do that,” he said.

Breakthrough findings on the decline?  

On the other hand, there are concerns that the use of AI bots could lead to cookie-cutter research and reduce the frequency of ground-breaking findings.

“They might be worse at assessing whether a unique outcome is spurious or ground-breaking.

“If this limitation is true for ChatGPT and other language chatbots under development, then it is possible that reliance upon AI for this purpose will reduce the frequency of future disruptive scientific breakthroughs,” said the authors of the best practices.

They said that this was especially concerning since a paper published on January 4 this year had already concluded that the frequency of disruptive scientific breakthroughs was on a negative trajectory.

The paper, published in Nature, pointed out that progress was slowing in several major fields, based on an analysis of 45 million papers and 3.9 million patents spanning six decades.

“We find that papers and patents are increasingly less likely to break with the past in ways that push science and technology in new directions.

“This pattern holds universally across fields and is robust across multiple different citation- and text-based metrics.

“Subsequently, we link this decline in disruptiveness to a narrowing in the use of previous knowledge, allowing us to reconcile the patterns we observe with the ‘shoulders of giants’ view,” said the paper.
