Fact or Fiction: A New Paradigm of Trust


Published on January 29, 2024

In a world where digital content is increasingly generated by artificial intelligence (AI), separating fact from fiction is becoming an almost unsolvable puzzle.

Alarmingly, our most trusted information sources will probably let us down first: only a little more than a year after the public release of ChatGPT, Google is already struggling with the flood of artificially generated content and, as a result, is now surfacing an increasing number of unreliable forum answers instead.

Once lost, confidence in digital information won’t be easy to rebuild. However, a small startup from Europe is now working on a solution that could serve as a blueprint for the future.

The Fight for Interpretive Authority

Naturally, the demise of the old guard opens a window of opportunity for competitors to find their footing.

In this case, emerging large language models (LLMs) and their smart chatbots are trying to become the aggregators of truth for online content. Ironically, these LLMs won’t really solve the credibility issue either, since they rely on community input from Reddit and the like just as much as Google does.

Of course, one could argue that Reddit has its own quality measures in place and that referencing it as a source should be sufficient (e.g., Perplexity includes references in its results). But this misses the essence of the problem: garbage in, garbage out.

Think about it: what is the value of a quote or reference if the very thing being referenced is a community note, which itself might be inspired by the answer of an LLM that was trained on yet another forum comment?

At this point it should be obvious that such a practice would soon lead to an endless cycle of self-referencing in which no new information is ever created.
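To make that feedback loop concrete, here is a toy simulation; every number and the sampling scheme are illustrative assumptions, not a model of any real search engine or LLM. A "model" that can only remix its own training corpus loses distinct facts with every generation:

```python
# Toy simulation of the self-referencing loop: a "model" trained only on
# the existing corpus keeps re-publishing samples of it, and the
# re-published content feeds the next round of training.
import random

random.seed(42)

# 1,000 distinct human-written facts form the original corpus.
corpus = list(range(1000))

for generation in range(1, 11):
    # The "model" can only remix what it has seen: it samples from the
    # current corpus with replacement, never adding a genuinely new fact.
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(f"generation {generation}: {len(set(corpus))} distinct facts remain")
```

After a single round of sampling with replacement, roughly a third of the distinct facts are already gone, and the loss compounds with every generation.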

A New Paradigm of Trust

Even if the future of reliable online content looks grim at first, there is a solution emerging on the horizon: co-creation with credible experts.

It is clear that smart chatbots won’t go away anytime soon. It is also true that what comes out of an LLM cannot be controlled with certainty. Luckily, we can still control the quality and credibility of what goes in.
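The article doesn’t prescribe a mechanism, but one minimal sketch of "controlling the input" is a credibility gate in front of the fine-tuning pipeline. The field names, source labels, and 0.8 threshold below are all hypothetical:

```python
# Hypothetical input gate for a fine-tuning pipeline: only content from
# vetted sources above a credibility threshold reaches training.
# The source list, fields, and cutoff are illustrative assumptions.
VETTED_SOURCES = {"partner_agency", "verified_expert"}
MIN_CREDIBILITY = 0.8

def curate(examples: list[dict]) -> list[dict]:
    """Drop scraped or low-credibility content before it can train the model."""
    return [
        ex for ex in examples
        if ex["source"] in VETTED_SOURCES and ex["credibility"] >= MIN_CREDIBILITY
    ]

raw = [
    {"text": "Expert answer on ad bidding.", "source": "partner_agency", "credibility": 0.95},
    {"text": "Anonymous forum comment.", "source": "forum_scrape", "credibility": 0.30},
]
print(len(curate(raw)))  # -> 1: the forum scrape never reaches the model
```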

Of course, this is easier said than done, but it is far from impossible, as a small Swiss company demonstrates with its unique approach.

Kampadre, a tiny tech startup from Switzerland, is building a digital marketing expert that leverages the know-how of 160 advertising agencies as well as feedback from its users. With their help, it aims to fine-tune an existing LLM to the point where the majority of answers are provided by experts.

Certainly, this simple approach might not work in every case (e.g., generalizing across domains is tricky), but it effectively solves the problem of low-quality input. It also leaves plenty of room for attribution to the creators, which makes it more likely that professionals will contribute while adding a layer of trust.
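How that attribution might be plumbed through is not described here. As a purely hypothetical sketch, expert contributions could carry author metadata that any assembled answer surfaces alongside the text; all names and structures below are assumptions, not Kampadre’s actual system:

```python
# Hypothetical attribution plumbing: expert contributions keep their
# author metadata, and any answer assembled from them surfaces the credits.
from dataclasses import dataclass

@dataclass
class ExpertNote:
    author: str  # e.g., the contributing agency
    claim: str

def answer_with_credits(notes: list[ExpertNote]) -> str:
    """Assemble an answer from expert notes and append their attributions."""
    body = " ".join(note.claim for note in notes)
    credits = ", ".join(sorted({note.author for note in notes}))
    return f"{body}\n\nSources: {credits}"

notes = [
    ExpertNote("Agency A", "Search ads outperform display for direct response."),
    ExpertNote("Agency B", "Frequency capping curbs banner fatigue."),
]
print(answer_with_credits(notes))
```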

In summary, innovative examples like this show that there is still ample opportunity for search engines and intelligent chatbots alike to serve trustworthy content without compromising on ease of use or attribution.

Technology Reporter