Monday, May 15, 2023

AI Botches Think Tank Analysis

Here is more from the Washington Post, which asked Microsoft's Bing AI dozens of questions to evaluate the chatbot's answers and sources:

Another problem suggested by our results: When the AI chooses a source, is it adequately understanding what it has to say? In a different answer to that same question about immigrants, Bing cited the Brookings Institution. However, Bing’s AI wrote that Brookings said immigrants may “affect social cohesion or national identity” and push down wages for native-born workers — a claim Brookings never made.

“We are flattered that chatbots like Brookings content, but the response is not accurate,” said Darrell West, a senior fellow in the Center for Technology Innovation at Brookings. He said Bing not only failed to adequately summarize that one article, but it also missed the organization’s more recent writing on the topic.

Microsoft told us it couldn’t reproduce that result. “After consulting engineering, we believe you encountered a bug with this answer,” said Manfre, a Microsoft spokesperson.

The newspaper concluded that nearly 1 in 10 of Bing's answers and cited sources were inadequate or inaccurate.