Do you actually want to use AI search like Google’s Bard and Microsoft’s new Bing? [Poll]

This month has seen the debut of Microsoft’s new Bing and Google’s Bard, and it’s also seen both AI search products make some crucial mistakes almost immediately. It really raises the question: Do you actually want to use AI search?

Both Microsoft’s new Bing and Google Bard — the latter not yet available to the public or even finalized — use AI to craft conversational answers to questions. At a glance, what these products can do is quite impressive. A single simple sentence can lead to the AI spitting out tons of information about the subject, often tailored to the specifics of what you asked.

Quickly, though, it’s become clear that these products are far from perfect.

In Google’s first demo of Bard, for example, an erroneous statement about the James Webb Space Telescope left many questioning the accuracy of the product and also triggered a huge stock dip for Google’s parent company, Alphabet.

And as more and more folks have gotten access to Microsoft’s “new Bing,” which is powered by ChatGPT, the errors have been flowing in. One example that’s been shared around Twitter shows Bing’s chatbot insisting that Avatar: The Way of Water is still months away from its release, when the movie actually hit theaters late last year.

Mistakes are expected with these AI bots to some extent, but one of the bigger problems seems to be the confidence with which the bots reply. As in the example above, the tone of the conversation exudes certainty about completely wrong information. Google’s slip-up with Bard was likewise delivered with confident phrasing.

Problems like these may improve over time, as more people use the AI and build up the information it has to train on. But for now, it certainly raises alarms about how accurate AI-powered search can be.

What I really want to know is: who actually wants to use this? With slip-ups like these happening almost immediately, it seems… not worth it. The big reason AI search and this conversational UI seem so compelling is the promise of quick answers that aren’t plagued by a bunch of spammy SEO articles chasing your clicks. But if the information these bots deliver isn’t even accurate, what’s the point? One potential savior is the other AI tools that splinter off of the search products, such as Bing’s ability to look at a webpage and offer a summary. Even with that in mind, though, glaring errors don’t exactly build confidence in other areas. Our Max Weinbach noted last night that Bing’s chatbot isn’t even very accurate with math.

How do you feel? Do examples like this lessen your confidence in AI search? Do you actually want to use products like these? Vote in the poll below, and let’s discuss in the comments!

Ben Schoon

Ben is a writer and video producer for 9to5Google.

Find him on Twitter @NexusBen.