Guest Column - Don’t Kill Our Robot Overlords…Yet - Part 1
AI, The Great Translator of Human Knowledge
Hey there, Mad Mother Readers! You are in for a treat. My brilliant daughter, Lindsay, is a guest contributor today. She’s one of those rare minds that can devour a ton of information, dissect it, question it and then share it insightfully and clearly - with just the right amount of quirkiness thrown in. Thanks for reading!
This article is Part 1 of a (for now) two-part series exploring AI as a reflection of human knowledge—and what it means for how we create, remember, and relate to technology.
My mom writes her Substack with a rule: no AI unless there’s a human-written first draft or a solid outline. It’s like summoning an oracle who “yes, ands” like a seasoned improv comedian. This approach feels almost quaint now—like a callback to when we moved from hand-copied manuscripts to the Gutenberg press, cautiously and with some side-eye.
Full disclosure: This post was born from a healthy mix of human-first effort and digital collaboration. Don't let the topic (AI) fool you.
Drop Some Knowledge
Artificial intelligence has rocketed (its way? their way?) into every corner of human progress, on a scale we haven't seen since the Industrial Revolution.
Some of the things AI is doing right now:
- Helping diagnose rare diseases faster than doctors
- Making fake movie trailers that look real
- Writing shockingly poetic haikus about capybaras
AI is now baked into our progressive, elitist lives whether folks like it or not. We even have a government that thinks it’s fine to run itself on ChatGPT with no thought to consequences. But I digress.
Something important is happening here. And with any great shift, we get to collectively choose how we meet it. Do we embrace it head-on, this thing we did not know before? Or do we ignore it—leaving it to be wielded by a select few in their ceaseless grasping for power?
But before we can choose, we need to understand what we’re dealing with.
Educators, curricula, and society at large haven't quite landed on a definition of AI, let alone what it means for humanity. As someone who successfully rode the smartphone app wave—and a self-anointed anthropologist—I'd like to explore it with you.
We’ll start with two ideas:
Part 1 (this article): AI, the Great Translator of Human Knowledge
Part 2: Now You See It, Now You’re Screwed: Voyeurism and How We Choose to Look at AI (read: those critical thinking skills are more important than ever, donchyaknow!)
The Great Translator of Human Knowledge
Time for a quick anthropology lesson.
One of humanity’s greatest strengths has always been its collective knowledge. We’ve passed it down through generations, refining our methods of sharing. First oral tradition, then written lore and mythology. Eventually, we developed tools to translate language, unlocking global knowledge exchange. The printing press accelerated that, and the internet exploded it. Knowledge didn’t just grow—it compounded.
Enter AI.
It’s not an oracle, or a floating eye of God. It’s not the Great Zoltar. (Though it is fun to imagine ChatGPT in a crystal ball whispering your next big idea.) AI isn’t some unknowable force—it’s us. It was made by humans.
It’s a mirror of our collective output, trained on the internet—the greatest and worst archive of everything we’ve shared, built, ranted about, and meme’d into existence.
You’re thinking: Okay, but I asked it to write a poem about a capybara on a bicycle, and it came back with a hauntingly accurate haiku and it scared the poop out of me.
Hold the phone, Odysseus.
Even if it’s not the apex of civilization, that poem is still a product of us. AI only knows what it’s been taught, and what it’s been taught is everything we’ve uploaded or distributed from our own brains. So, in the short history of our meme-filled internet, it makes sense that AI can spit out a poem about a miscreant mammal with suspicious ease.
It’s important to understand that humans are built to offload knowledge. Ask my husband. He knows when the cats are due at the vet, which ingredients go into pizza dough and in what order, and the exact text that gets a response from the HVAC woman. That knowledge doesn’t live in my brain—it lives in his. And when that kind of offloading is disrupted—through divorce, death, or even your youngest kid heading off to college—it can feel like part of you is missing.
Because it is. Not metaphorically. Functionally.
You’ve been storing pieces of your mind—your knowledge, which is part of who you are—in someone else. That’s not just a poetic idea; it’s how human brains work. Scientists call this a transactive memory system¹: the shared web of knowledge we build inside relationships, households, and communities. And when we offload that knowledge onto tools—like notebooks, calendars, calculators, or search bars—it has another name: cognitive offloading.²

We do this all the time. It’s natural. It’s necessary. And when you scale it up, you get something bigger: the internet. A massive, tangled braid of distributed human knowledge.
And AI? It’s just the latest interface for that knowledge—one that feels new, but is built entirely from us.
And while a lot of that collective knowledge is stored and shared through language—words, ideas, syntax—it’s not just linguistic. We’ve also offloaded our visual culture: photographs, symbols, gestures, aesthetic preferences, memes. AI has access to that too, and it’s learning to speak image the way it learned to speak text.
Most of my examples are from written language, because it’s an easy on-ramp. But the same rules apply to images. AI images are also generated from a data slurry, and plenty are what we call AI slop—the image is sorta kinda real-ish in that you understand what it is, but something is just off. It’s like speaking a foreign language you sort of know. You get by, and people generally get the gist of what you’re saying. But you won’t always nail the pronunciation, and you might really butcher a word. Badly.

So what happens when AI starts speaking fluently?
We don’t get to choose whether or not it’s got a green card, by the way. They’re already here.
AI doesn’t just reflect what we’ve built—it also reflects how we see it. In Part 2, we’ll get into the weirdness of perception, the Kuleshov Effect, and how observing AI changes what we think it is (whether we mean to or not).
1. Transactive memory systems were first introduced by psychologist Daniel Wegner in 1985. They describe how groups collectively encode, store, and retrieve knowledge by distributing it across individuals in a social system. Read more here.
2. Cognitive offloading refers to the use of physical action or external tools to reduce the cognitive demands of a task. It’s how we shift memory tasks onto things like calendars, to-do lists, and now—AI. More on that here.
I think these are the same thing. Read up, prove me wrong, and leave a comment.