The emergence of generative artificial intelligence is redefining how people access knowledge online and deepening a concentration of information that began with traditional search engines (in South America, 93% of users rely on Google). By shifting the production and prioritization of knowledge to opaque systems controlled, once again, by a few big tech companies, it further undermines the sustainability of media outlets and journalism.
With the emergence of generative artificial intelligence systems such as ChatGPT, Claude, and Bard, that search model is being transformed. These systems not only let users search for information; they synthesize, explain, and deliver it as a direct answer. This marks the beginning of a new era: the shift from the “search engine” to the “answer engine.”
The change is not merely technical. It involves a reconfiguration of the logic of access to knowledge, of intermediation models, and of information hierarchies in the digital environment. Major technology companies are competing to integrate generative AI into their services: Google launched its Search Generative Experience; Microsoft incorporated ChatGPT into Bing; OpenAI enabled real-time searches within ChatGPT; and Apple is advancing its own assisted search system.
From an economic standpoint, this transition challenges the contextual-advertising business model that has sustained the internet’s information infrastructure. If people get answers directly from a chatbot, clicks and visits to the sites that depend on that visibility for revenue decrease. This affects both search engines and the sustainability of media outlets and content creators, reinforcing the concentration of attention on a few AI platforms. In fact, one study found that nearly 60% of Google searches in the United States and the European Union end without the user clicking on any link.
At a societal level, answer engines are presented as more user-friendly tools, capable of simplifying and speeding up the search for information. However, this experience of apparent efficiency masks new risks: answers can contain errors, biases, “hallucinations,” and unverified claims. Furthermore, the logic of a single, synthesized answer reduces exposure to diverse sources and perspectives, deepening a pre-existing concentration of informational intermediation that had already displaced alternative, community-based, and non-hegemonic media, as well as other decentralized knowledge spaces.
This shift consolidates the transition toward search environments mediated by opaque and difficult-to-audit artificial intelligence systems. What was once a list of links ordered by Google’s PageRank algorithm is now becoming a single version of knowledge generated by models trained with non-transparent criteria. The centralization of access to information has political and democratic implications: it limits plurality, reinforces dependence on private infrastructures, and hinders accountability for errors or distortions.
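To make the contrast concrete, the sketch below illustrates the power-iteration idea behind PageRank on a hypothetical toy link graph (the page names and links are invented for illustration; this is not Google’s production system). The point is the shape of the output: a transparent, ordered list of sources, each one click away, rather than a single synthesized answer whose provenance is hidden.

```python
# Illustrative sketch of the power-iteration idea behind PageRank.
# The link graph is a hypothetical toy example, not real web data.

DAMPING = 0.85      # damping factor, as in the original PageRank paper
ITERATIONS = 50     # enough for this tiny graph to converge

# Hypothetical graph: each page maps to the pages it links to.
links = {
    "news_site": ["blog", "wiki"],
    "blog": ["wiki"],
    "wiki": ["news_site"],
    "forum": ["wiki", "news_site"],
}

pages = list(links)
n = len(pages)
rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution

for _ in range(ITERATIONS):
    # Every page keeps a baseline (1 - d) / n, then receives a damped
    # share of the rank of each page that links to it.
    new_rank = {p: (1.0 - DAMPING) / n for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += DAMPING * share
    rank = new_rank

# The output is a ranked, inspectable list of sources.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

However simplified, a ranking like this remains auditable in a way a generated answer is not: the criteria (links, damping) are explicit, and every result points back to a source.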
Given this scenario, there is an urgent need for a public debate on the role of these new forms of intermediation in access to information and knowledge. Regulation, transparency in the design of these models, and the promotion of a pluralist, open digital ecosystem are necessary conditions to prevent technological evolution from further concentrating informational power.
RELATED LINKS:
US imposes antitrust measures on Google and Europe fines millions for abuse of dominant position
Differences between ChatGPT and Google
The Antitrust of Answer Engines
Google’s AI leaves media outlets without clicks: only 1% access original sources