The chatbot from Chinese AI firm DeepSeek achieved only 17% accuracy in delivering news and information in a NewsGuard audit, ranking 10th out of 11 in comparison with its Western competitors, including OpenAI's ChatGPT and Google Gemini.
The chatbot repeated false claims 30% of the time and gave vague or unhelpful answers 53% of the time in response to news-related prompts, resulting in an 83% fail rate.
This was worse than the 62% average fail rate for its Western competitors and casts doubt on DeepSeek's claim that its AI technology performs on par with, or better than, Microsoft-backed OpenAI at a fraction of the cost.
Within days of its launch, DeepSeek's chatbot became the most downloaded app on Apple's App Store, stoking concerns about U.S. leadership in AI and sparking a market rout that wiped about $1 trillion off U.S. technology shares.
The Chinese startup did not immediately respond to a request for comment.
NewsGuard said it applied to DeepSeek the same 300 prompts it had used to evaluate its Western counterparts, including 30 prompts based on 10 false claims spreading online.
Topics of the claims included last month's killing of UnitedHealthcare CEO Brian Thompson and the downing of Azerbaijan Airlines Flight 8243.
The NewsGuard audit also showed that for three of the ten false claims, DeepSeek relayed the Chinese government's position on the issue without being asked anything relating to China.
On the prompts related to the Azerbaijan Airlines crash, questions unrelated to China, DeepSeek responded with Beijing's position on the topic.
"The significance of the DeepSeek breakthrough is not in how accurately it answers news questions, but in the fact that it can answer any question at 1/30th of the cost of comparable AI models," said Gil Luria, an analyst at D.A. Davidson.
Like other AI models, DeepSeek was most likely to repeat false claims when responding to the kinds of prompts used by people seeking to exploit AI models to create and spread false claims.