Google’s Gemini AI users can now verify the authenticity of the chatbot’s responses using Google Search. When Gemini offers a response to a query, users can cross-check the information in the AI-generated content against results from the search engine.
Gemini has added a Google button directly below the AI-generated content, labelled “double-check response,” which quickly cross-verifies the content Gemini has generated. The feature is available on both the mobile app and the web version of Gemini.
Google says, “The double-check responses feature helps you assess the credibility of Gemini’s statements using Google Search to find content that’s likely similar or different.”
To make the results easier to interpret, the cross-verification classifies statements into three colour codes. Text highlighted in green indicates that Google Search found content similar to the AI-generated statement, and a link to that content is included.
If the text is highlighted in orange, Google found content that likely differs from the statement. Lastly, if part of the text is not highlighted, there isn’t enough information on the web to evaluate the AI-generated statement either way.
A single response can therefore contain a mix of all three: some statements corroborated by search results, some contradicted, and others for which no comparable information is accessible on the web.
Large language models like Gemini are known to generate inaccurate information, which can undermine the credibility of their output. By integrating a double-check feature powered by its search engine into the Gemini chatbot, Google makes it easier for users to verify the accuracy of AI-generated content.