
Social media users are increasingly relying on AI chatbots for verification, which, as an AFP report points out, only accelerates the spread of falsehoods. This was especially visible during India’s recent four-day conflict with Pakistan, and it underlines the unreliability of AI chatbots as fact-checking tools.
Users are now depending on AI-powered chatbots, such as xAI’s Grok, OpenAI’s ChatGPT, and Google’s Gemini, in search of authentic information.
“Hey @Grok, is this true?” is a common query on Elon Musk’s platform X, as users seek instant, reliable answers, but the responses are often riddled with misinformation.
Grok is currently facing criticism for inserting the “white genocide” conspiracy theory into responses to unrelated queries. The AI chatbot also misidentified old video footage from Sudan’s Khartoum airport as a missile strike on Pakistan’s Nur Khan airbase during the recent conflict between India and Pakistan, and wrongly identified unrelated footage of a burning building in Nepal as Pakistan’s military response to India’s strikes.
“The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers,” McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.
“Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news,” she added.
According to NewsGuard’s recent research, 10 leading chatbots were susceptible to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.
Similarly, a recent study of eight AI search tools by the Tow Center for Digital Journalism at Columbia University found that chatbots rarely declined to answer questions they could not answer accurately, offering “incorrect or speculative answers” instead.
For instance, Grok recently assessed a purported video of a giant anaconda swimming in the Amazon River as “genuine,” and to support its wrong claim, the chatbot even cited credible-sounding scientific expeditions. In reality, the video was AI-generated, AFP points out.
Researchers have also questioned the effectiveness of X’s crowd-sourced “Community Notes” feature in tackling falsehoods.
The quality and accuracy of AI chatbots vary depending on how they are trained and programmed, raising concerns that their output may be subject to political influence or control.
“I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers,” Angie Holan, director of the International Fact-Checking Network, told AFP.
(With input from agencies)