“Don’t believe everything you read online” has been a mantra of the internet since the days of AOL, MySpace, Ask Jeeves, and Bebo. It is, of course, healthy to be skeptical about what you see on social media, particularly in an age of deep fakes and AI (more on that later). Yet it’s also clear that we increasingly put our faith in software and algorithms, and in the people behind them. For many of us, this is blind faith. Skepticism exists, sure, but there has largely been an outsourcing of authority that most of us are happy with. That arrangement, however, is increasingly being challenged.
Consider Google as an example. The company’s eponymous search engine and (to an extent) Chrome browser are arguably the most influential products in the history of the internet. Search, in particular, came to be viewed as a tool of authority. The brand name Google has even become a verb, synonymous with the idea of discovery. “Google is your friend” and “Just google it” are common retorts to people who have posted incorrect information.
But many of us have been guilty of conflating authority with discovery. Google is a discovery tool, not an authority. A Google search is supposed to provide links to websites where you might find the answer to your question, not the answer itself. And that’s the rub. Over time, it became clear that search engines were more preoccupied with advertising products than with providing answers. Moreover, businesses have learned to game the system through SEO. This is not a criticism; it’s simply the reality.
ChatGPT challenged Google
When ChatGPT started to gain traction in late 2022, it caused a lot of consternation at Google. Unlike Search, the AI chatbot would provide specific answers without trying to nudge you toward buying products. Many considered ChatGPT to be the “Google killer”. Google rushed out its competitor, Bard, but some of the damage had already been done. ChatGPT’s integration with Bing allowed Microsoft to steal a march on Google.
And yet, some of the limitations of AI are already showing. Regardless of what you are told, AI cannot reason. It depends on data, and it scrapes that data from the internet. ChatGPT might be able to write you a college-level essay on a subject, but it can only draw on the data it is allowed to see. It cannot make judgments. That is crucial. It is not an arbiter of truth, and that point is widely misunderstood. Yes, we will add the caveat that ChatGPT will improve, but the leap from presenting data to reasoning has not yet been made. In short, ChatGPT is a superior version of Google Search that presents information in a different way, i.e., in the form of a conversation.
Different ways to verify
In a sense, there are three types of truth arbiters on the internet: consensus proofs, third-party verification, and public verification. These are broad concepts, so examples help. A consensus proof is something like Community Notes on Twitter (now called X, of course), where “truth” is established by contributors adding context and reaching consensus. Elon Musk has taken a lot of flak since taking over Twitter, but he should be lauded for expanding the Community Notes program. Third-party verification is just that: an independent authority declaring that something is valid. This could be Apple’s approval of a gaming app listed on the App Store, or an audit by a trusted third party, such as a review of company accounts.
The third area is perhaps the most interesting, however. By public verification, we mean immutable public ledgers, which in practice means the blockchain. While most of us automatically think of cryptocurrency when we hear “blockchain”, currency is only one application. Indeed, while the future of cryptocurrency is uncertain, the future of blockchain cryptography is not. The public ledger offers irrefutable proofs, backed up by features like smart contracts. In the simplest terms, a ledger can record that “A” happened, a smart contract can verify “B” as a result, and that, in turn, can trigger “C”. It sounds simple, but eventually blockchains could get us to a trustless version of the internet.
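The core idea — an append-only record where each entry commits to everything before it, so tampering is detectable by anyone — can be sketched in a few lines of Python. This is a deliberate simplification (real blockchains add signatures, consensus, and distributed replication), and the `Ledger` class and event strings here are purely illustrative:

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy append-only ledger: each block commits to the block before it."""

    def __init__(self):
        self.chain = [{"index": 0, "event": "genesis", "prev": "0" * 64}]

    def append(self, event):
        prev = block_hash(self.chain[-1])
        self.chain.append({"index": len(self.chain), "event": event, "prev": prev})

    def verify(self):
        # Recompute every link; altering any past block breaks all later links.
        for prev_block, block in zip(self.chain, self.chain[1:]):
            if block["prev"] != block_hash(prev_block):
                return False
        return True

ledger = Ledger()
ledger.append("A happened")         # "A" is recorded on the ledger
if ledger.verify():                 # "B": anyone can check the record holds
    ledger.append("C triggered")    # "C": a follow-on action fires

print(ledger.verify())  # True
ledger.chain[1]["event"] = "A never happened"  # try to rewrite history
print(ledger.verify())  # False — the broken hash link exposes the tampering
```

The point of the sketch is the “A, then B, then C” mechanics: because each block embeds the hash of its predecessor, rewriting an old entry invalidates every subsequent link, which is what makes the record publicly verifiable without trusting any single party.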
World App wants to prove you are human
Recently, we saw the launch of the World App and Worldcoin projects, notably backed by the founder of OpenAI, the company behind ChatGPT. The World App is designed to verify that users are human in the age of AI and bots. It uses blockchain, of course, and the idea is to get ahead of a “near” future in which AI will be indistinguishable from humans online. Social media bots are already quite adept at passing as human, and it’s going to get worse.
Now, the point here is not to laud the World App project. To take part, you have to have your irises scanned for verification, and not everyone is comfortable handing over that biometric data. But the team behind the project is trying to solve a problem that the mainstream media has failed to anticipate. That matters in the quest for online truth, even if not everyone agrees with how the project goes about it.
These are just some examples of how trust in software has evolved. And to be clear: they can all be criticized. Blockchain is not the answer to everything; it has its own flaws and limitations. In the end, you are right to remain skeptical. The rapid pace of technological adoption has left us in a tailspin, mostly through information overload, a problem exacerbated by social media. Truth has almost become subjective. That very problem is now changing the internet, as technology rolls out to make it more objective.