
Humans have trust issues. ChatGPT isn’t helping.
In fact, it’s accelerating the spread of wild theories, erroneous beliefs, and false information. Maybe the widespread damage shouldn’t be a surprise, given that OpenAI’s CEO casually compares his creation to the atom bomb.
Months ago, my main fear was that AI would replace me. Now, thanks to an extended hunt for ChatGPT’s supposedly legitimate source material, my concerns are far less personal.
Here’s what happened.
I tried to use ChatGPT for last week’s post about Terry Pratchett. All was going well until…
Todd: “Hey will you give me the sources for your work?”
ChatGPT: “Certainly! Here are some sources that provide examples of pop culture celebrities who have expressed their admiration for Terry Pratchett:”
Todd: “That link to the Stephen King article is outdated. Can you get me a better source?”
ChatGPT: “I apologize for the outdated source. Here's a more recent article that quotes Stephen King's praise for Terry Pratchett:”
Todd: “That link is also broken. Is it possible there's something wrong with the link?”
ChatGPT: “I apologize for the inconvenience. It seems that the link I provided is no longer available on The Guardian website. However, here's another article from The Guardian that quotes Stephen King's praise for Terry Pratchett:”
Todd: “Why do none of the links in your sources work?”
ChatGPT: “I apologize for any confusion. It's possible that some of the links I provided may no longer be working due to changes on the websites they were hosted on.”
I opened the links in every browser. I tapped the Wayback Machine. I chased down The Guardian, asking for the original article.
No record of existence. Not on my end. Not at The Guardian.
The errors piled up.
Rolling Stone link: broken.
BBC link: broken.
Telegraph link: broken.
Forbes link: broken.
Celebrity Net Worth link: broken.
At the time, I didn’t connect the dots. ChatGPT pulls from a fixed data set, so maybe it cited articles that all happened to be removed after the fact. What never occurred to me was that the program could have been… you know… MAKING UP FAKE URLs.
The Guardian, which ChatGPT loves to cite for some reason, recently called out this exact problem:
“This specific wrinkle - the invention of sources - is particularly troubling for trusted news organizations and journalists.”
But journalists won’t misuse it. Another group will. More on that in a second.
I started with ChatGPT because it’s the most well-known. But at least one lesser-known AI platform is just as bad. My friend Zulie rightfully dragged an AI tool called Jenni.
Jenni is a cash grab, driven by undisclosed affiliate deals and preying on desperate students. Its “testimonials” are free stock images. Jenni is even less capable than ChatGPT at matching voice and tone. However, it is perfectly capable of matching ChatGPT in making up facts.
Just… making up facts.
Facts that will find their way into the minds and papers of 20-something-year-old college students. Students who likely have underpaid professors with no time to verify each line.
Which leads me to a new epiphany. The third big one I’ve had in this era.
(Wait… is an epiphany always positive? It seems like it should be. So what should I call this instead? A de-piphany? Sure. Let’s go with that.)
The first de-piphany, 2016. I ghostwrote for a British entrepreneur. He said things like “9 out of 10 companies fail” and “75% of internal performance issues go away with monthly conversations.”
As a doe-eyed and dough-brained 26-year-old, I swallowed his words like a third grader eats a Pixy Stick. Head back. Eyes shut. Mouth open.
Then came this:
“Oh, Todd, we need to have a bibliography for this. We need sources.”
(Looking back, it’s suspicious that I was finding sources based on beliefs, and not the other way around.)
Digging up these alleged truths got hard, fast.
In a matter of hours, I spiraled into a tunnel of doubt. Were we telling the truth in this book? Or were we repeating things that merely sounded true? And how much had the actual truth been stretched over time?
The second de-piphany, 2020.
I was relatively famous on Medium (an open blogging site). My friends and I collectively made up several million of the site’s billion page views. We wrote mostly innocent self-improvement stuff and got paid well for it.
Then Coronavirus hit.
The demand for life hacks and the like went way down.
All our statistics plummeted.
Statistics track attention. Attention means money.
What’s an online writer craving a fat royalty check to do?
Ah yes: write about the coronavirus.
In an instant, writers with no health experience made national news for half-baked theories about how to quash the virus. (When asked about it, one of them literally said: “You should definitely not trust me.”)
Many of my friends felt pulled to write about the disease as well, even without a medical background.
Why?
Because when you are rewarded for attention, truth takes a backseat.
Like I said, good journalists won’t misuse ChatGPT. They have mechanisms to ensure truth-telling.
Creators will commit the worst crimes.
Creators, with a content schedule to hit and an audience to reach. Creators, who are desperate to escape their soul-sucking job. Creators, who have been taught that quantity beats quality and that the truth is relative. Creators, who see AI as a way to produce stuff faster, not better.
Creators, who are looking for SOURCES for their BELIEFS, and not the other way around.
For the record, I think ChatGPT will right the ship. Until then, let me beg you — if you’ve ever pressed publish on an article, an essay, a Facebook post, a LinkedIn Pulse article, a video, or a tweet — please validate your sources.
Doing so probably makes the world a calmer place, sure.
But it certainly builds your own credibility and trustworthiness, in a world where both are terribly rare and valuable.
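If you ever want to automate the sniff test I did by hand, here’s the gist of it. This is a rough sketch in Python (assuming the third-party requests library; the Guardian URL in the list is a made-up placeholder, not a real citation): ping each link, and if it’s dead, ask the Wayback Machine whether the page ever existed at all.

```python
# Rough sketch of an automated source check. Assumes `pip install requests`.
import requests

# Placeholder list; swap in the URLs your AI tool handed you.
urls = [
    "https://www.theguardian.com/books/some-article-that-may-not-exist",
]

def link_is_live(url, timeout=10):
    """Return True if the URL answers with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:  # some servers reject HEAD; fall back to GET
            resp = requests.get(url, stream=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def wayback_snapshot(url, timeout=10):
    """Ask the Wayback Machine's availability API for an archived copy."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=timeout,
    )
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

for url in urls:
    if link_is_live(url):
        print(f"OK        {url}")
    elif (archived := wayback_snapshot(url)):
        print(f"ARCHIVED  {url} -> {archived}")
    else:
        print(f"NO TRACE  {url}  <- treat this citation as suspect")
```

A dead link the Wayback Machine has never seen isn’t proof of fabrication, but it’s a strong hint that the “source” may never have existed.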
Much love as always <3
-Todd B from Tennessee
Thanks, Todd, for the warning. I haven't tried ChatGPT. Not sure I want to!