Google and Microsoft Chatbots Erroneously Report Ceasefire in Israel-Hamas Conflict

Amid the ongoing Israel-Hamas conflict, chatbots from Google and Microsoft mistakenly announced a ceasefire, reigniting concerns about their reliability as sources of real-time news. In an experiment by Bloomberg's Shirin Ghaffary, questions posed to the AI chatbots about the conflict produced significant errors, underscoring the potential for these technologies to sow public confusion during complex and rapidly evolving events.

Google Bard incorrectly declared a ceasefire and even projected future death-toll figures, later conceding that it may have misread news updates and advising users to consult multiple sources for accurate information. Microsoft's Bing Chat similarly corrected its initial response, acknowledging that no ceasefire was in effect.

The incident highlights the limitations of AI chatbots, especially when they handle sensitive and rapidly changing subjects. Both Google and Microsoft have acknowledged the need for ongoing improvements to the accuracy and relevance of their AI-powered information services. Despite disclaimers and citations attached to AI-generated responses, the potential for inaccuracies remains, particularly when conflicting reports exist about a complex event.