"In fact, the AI Overview of Google mostly uses text generation to come up with its response. Then, it finds sources to back up its claims. (ChatGPT Search, on the other hand, does the opposite)."
Probably not important, but that's an interesting nugget on how AI Overviews works. Do you have a cite or link?
I've often wondered why Google AI Overviews seems far worse than normal RAG search systems.
From a technical point of view, I don't think it matters whether it generates first and then looks for things to back it up (editing the text when it can't), since RAG variants include "generate and search" as well as the more common "search and generate" style, and both, if properly implemented, should be equivalent.
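To make the distinction concrete, here is a minimal sketch of the two orderings. The `search()` and `generate()` functions are hypothetical stand-ins, not any real API:

```python
def search(query):
    """Pretend web search: returns a list of source snippets (stubbed)."""
    return []

def generate(prompt, context=None):
    """Pretend LLM call: returns generated text (stubbed)."""
    return f"answer to: {prompt}"

def search_then_generate(query):
    # The common RAG style: retrieve first, then condition
    # the generation on the retrieved sources.
    sources = search(query)
    return generate(query, context=sources), sources

def generate_then_search(query):
    # The "generate and search" style: draft an answer first, then
    # look for corroboration and revise claims the sources don't back up.
    draft = generate(query)
    sources = search(draft)
    if sources:
        revised = generate(
            "Revise this draft to keep only claims the sources support:\n" + draft,
            context=sources,
        )
        return revised, sources
    return draft, sources  # nothing found: the claims go out unchecked
```

Done carefully, the second path ends in the same place as the first; the failure mode is shipping the draft when corroboration comes up empty.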
My guess is Google AI Overviews is bad more because it's designed to be fast and lightweight due to cost (the number of Google users is insane).
Hello Aaron! Thank you for commenting! I always appreciate our discussions! There are multiple sources that explore the fallibilities of Google's AI Overviews specifically, but the comment about it making things up and THEN looking at sources to corroborate them came from a conversation I had with Steve Hargadon as we looked at its capabilities and patterns.
There are multiple ways to see (or at least infer) that it does this:
1. If you ask it a nonsense question and it generates an overview, it will give you a complete answer WITHOUT LINKS.
2. Especially in the beginning, AI Overviews on REAL questions would be generated either:
   a. without sources;
   b. with completely hallucinated links;
   c. initially with no sources, then, when you refreshed the page, replaced by one of Google's "info cards"; or
   d. with "sources" where the information was reliable, but the sources did not CORRESPOND to the information attributed to them.
3. For a particularly damning example, look at the trend of asking Google what a made-up saying means: it will come up with a completely bullcrap answer. Many of these responses do not have links at all, but they are still generated.
That's just my reasoning and thought process, and I am sure there are more credible, non-primary-research sources I can find eventually. I will get back to you!
Fascinating. I really have no sense of what AI Overviews is doing. I don't think Google has ever described, even a bit technically, what is going on. My guess is it probably isn't simple RAG but some complicated hybrid system that may trigger differently for different scenarios.
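If it is a hybrid, one way to picture it (purely speculative, reusing the stub functions from the sketch above plus made-up predicates) is a dispatcher that routes queries to different pipelines, which would also account for the behaviors listed earlier:

```python
def looks_like_nonsense(query):
    """Hypothetical heuristic: does the query match anything real? (stubbed)"""
    return False

def cached_info_card(query):
    """Hypothetical lookup for one of Google's prebuilt info cards (stubbed)."""
    return None

def answer(query):
    # Purely speculative dispatcher -- not Google's actual design.
    card = cached_info_card(query)
    if card is not None:
        return card                       # the "info card" replacement behavior
    if looks_like_nonsense(query):
        return generate(query), []        # pure generation: no links ever appear
    return search_then_generate(query)    # the standard retrieve-first path
```

A router like this would explain why some answers arrive with no sources at all while others get swapped out on refresh: different scenarios never touch retrieval in the first place.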
"In fact, the AI Overview of Google mostly uses text generation to come up with its response. Then, it finds sources to back up its claims. (ChatGPT Search, on the other hand, does the opposite)."
Probably not important but that's a interesting nugget on how Ai overview works. Do you have a cite or link?
I've often wondered why Google Ai overviews seems far worse than normal RAG search systems.
From the technical point of view i think it does not matter if it generates first, looks for things to back it up and edit the text if it does not etc as RAG variants include "generate and search" as well as the more common "search and generate" style and both if properly implemented should be equalvant.
My guess is Google ai overviews is bad more because it's designed to be fast and light weight due to costs (number of Google users are insane)
Hello Aaron! Thank you for commenting! I always appreciate our discussions! There were multiple sources that explore the fallibilities of Google's AI Overviews specifically, but the comment about it making things up and THEN looking at sources to corroborate it was from a conversation I had with Steve Hargadon as we looked at its capabilities and patterns.
There are multiple ways to see that it does this (at least to infer.):
1. If you ask it a nonsense question and it generates an overview, it will give you a complete answer WITHOUT LINKS.
2. Especially in the beginning, AI Overviews on REAL questions would be generated either a. without sources, b. with completely hallucinated links, c. initially have no sources, but then when you refreshed the page, be replaced by one of Google's "info cards." d. There would be "sources," and the information would be reliable, but the sources would not CORRESPOND to the information attributed to them.
3. In a particularly damning example, look at the trend of "asking Google what a made-up saying means, and it will come up with a completely bullcrap answer." Many of these responses do not have links at all, but they are still generated.
That's just my reasoning and my thought processes, and I am sure there are more credible, non-primary-research sources I can find eventually. I will get back to you!
Fascinating. I really have no sense what AI overviews is doing. I don't think Google has ever described it even a bit technically what is going on. My guess is it probably isn't simple RAG but some complicated hybrid system that may trigger for different scenarios.