Epigraph
Whoso makes a righteous intercession shall have a share thereof, and whoso makes an evil intercession, shall have a like portion thereof; and Allah is Powerful over everything. (Al Quran 4:85)

Presented by Zia H Shah MD
Introduction
We live in an era where the collective memory of humankind seems only as expansive as the internet. Every day, billions of questions are answered by search engines or AI assistants scouring online data. In this digital whirl, there’s a growing assumption that if something isn’t on the web, it might as well not exist. Even a former U.S. archivist observed that “if it’s not online, it doesn’t exist — or at least if you can’t find something online, you don’t know if it exists” (publicknowledge.org). This mindset, increasingly commonplace, hints at an unsettling truth: valuable knowledge in books, archives, oral traditions, and other offline sources risks being overlooked or forgotten in an AI-dominated age. The convenience of instant information, while empowering, may inadvertently narrow our view of the world to only what is digitally available. In this reflective essay, I explore how good information in domains like religion, science, history, and literature can fade into irrelevance if it remains offline – and what that means for society’s understanding and memory. I also examine how this trend especially impacts marginalized communities whose wisdom traditions are underrepresented online, and consider some hopeful paths to bridge the gap between offline wisdom and the AI-driven web of knowledge.
When Offline Wisdom Becomes Invisible
The dominance of digital search and AI language models means that knowledge not digitized is effectively hidden from the most-used tools of learning. Researchers note a “gravitation towards digitised sources and topics at the expense of the (still) ‘great undigitised’” (blog.royalhistsoc.org). In other words, if a historical document, a piece of literature, or a religious commentary hasn’t been scanned or posted online, scholars and AI systems alike are far less likely to find or use it. Whole swaths of information remain buried on library shelves or in people’s memories, absent from the internet’s “indispensable treasure trove” (uclpress.co.uk). This isn’t just a hypothetical problem; it’s observable in our behavior. Students writing research papers will gravitate to sources they can Google. AI models, trained on vast internet text, will confidently answer questions based only on what their web-based training data contains. The result is a subtle bias: knowledge that is not online gets minimal airtime, so to speak, and can slowly drift into obscurity.
Consider the domains of religion, science, history, and literature. In religion, for example, sacred teachings and local spiritual practices are often preserved in community storytelling or print-only manuscripts. If those teachings aren’t echoed online, an AI might summarize a religion using only the more ubiquitous doctrines found on major websites, ignoring rich local variations. In science, important research findings from decades past may live only in printed journals or paywalled databases; consequently, an internet-trained AI might overlook older evidence and reinforce a bias toward more easily accessible (often recent and English-language) studies. In history, the letters and diaries that provide texture to our past frequently reside in archives that are not digitized. As historians have pointed out, much of the world’s archival heritage – even in Europe – remains offline despite the surge in digitization (blog.royalhistsoc.org). And in literature, countless novels and poems – especially those from small presses or oral traditions – have never been scanned into an e-book or quoted on a blog. These works risk being forgotten by a generation that finds its reading recommendations via algorithms mining online texts.
The danger here is not willful malice by AI or search engines, but an accidental narrowing of collective knowledge. The algorithms simply can’t highlight what isn’t there. As one technology commentator noted, AI doesn’t need to overtly censor information to cause harm; it can “forget you exist … because it never included you to begin with” (undark.org). In an information economy where existence is often proven by search results, anything not indexed might as well be invisible. This reality should prompt us to ask: what wisdom are we missing out on, simply because it hasn’t been uploaded or digitized?
Read further in Microsoft Word file: