A while back, I took a discovery call with a prospective client for the agency. One of their questions caught me off guard: “How will you use AI in the fact-checking process?”
I was so taken aback that I feared my response (“we don’t”) inherently had “you idiot” tacked on the end.
I’ve never used AI — generative or not — to help with fact-checking.
Let me take you behind the curtain.
When nonfiction creators don’t give us a source for a fact, fact-checkers hunt down primary sources on their own to verify the claim before publication. Usually, that involves the age-old act of running a Google search.
But on Google, those primary sources are getting obscured.
The first thing you’ll notice after you query Google is an AI-generated response. Google’s AI feature is annoying, ubiquitous, and hard to get rid of, and you have to scroll for ages to find a good primary source. (Hot tip: if you add -ai at the end of a Google search query, you WILL get rid of the terrible, inaccurate AI-generated results.)
So, as ever, it really helps when writers provide sources. Without them, fact-checkers might turn up a different primary source from the one the writer used and suggest a change for accuracy, all because the two of us aren’t looking at the same thing.
ChatGPT, though, offers a different route to finding sources. Depending on how you query it, and whether you have the paid version, it can lead you to a primary source that corroborates a claim or offers a counterpoint. That’s awesome! But when GPT does that, make sure it isn’t hallucinating: check that the source it gave you is a) real, b) a good source, and c) actually answers the question you asked.
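If you want a quick mechanical first pass on point a), a few lines of Python can at least confirm that a cited link resolves before you spend time reading it. A minimal sketch of my own (the URL is a placeholder, and note this says nothing about points b or c):

```python
# Minimal sketch: confirm a chatbot-cited URL actually resolves.
# This only covers "is it real?" -- judging whether it's a *good*
# source, and whether it answers your question, is still on you.
import urllib.error
import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    # Some servers reject requests without a browser-like User-Agent.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Placeholder, not a real citation:
print(url_resolves("https://example.com/some-cited-source"))
```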
One note about ChatGPT: it tends to be a ‘yes man,’ meaning the model is apt to encourage and amplify whatever worldview your prompt carries. So instead of asking “Is TKTK true?” it’s a better use of computational resources to ask “What is the X of Y? Please provide links to primary sources.” Karen Hao, who just published EMPIRE OF AI (which was fact-checked; remember, I don’t recommend nonfiction books that were not fact-checked), discussed this on her book tour in Seattle last week.
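If you’re hitting the model through OpenAI’s Python API instead of the chat window, the same framing advice applies. A hypothetical sketch, where the model name and the stand-in question are my assumptions, not anything Hao prescribed:

```python
# Hypothetical sketch of the "ask neutrally, demand sources" pattern.
# Model name and the example question are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A leading yes/no framing ("Is TKTK true?") invites the yes-man problem.
# Neutral framing plus an explicit request for primary sources:
neutral_prompt = (
    "What is the geographic extent of the Bermuda Triangle? "
    "Please provide links to primary sources."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": neutral_prompt}],
)
print(response.choices[0].message.content)
```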
Technology is a tool, and I think it’s important to know how the tools at our disposal can help solve specific problems. Adding quotation marks around search terms and limiting Google searches with site: and -ai triggers memories from my college programming class that I thought I’d erased by now, but I guess that’s what it takes to unshittify the enshittification that’s ensued.
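For the curious, here’s a minimal sketch of those operators combined into a single query; the phrase and domain are placeholders I picked, not a recommendation:

```python
# Minimal sketch: compose a Google query using the operators above.
# Quotation marks force an exact phrase, site: restricts the domain,
# and -ai suppresses the AI-generated response.
from urllib.parse import quote_plus

phrase = '"Carroll Deering"'   # exact-phrase match
site_filter = "site:loc.gov"   # placeholder domain restriction
query = f"{phrase} {site_filter} -ai"

print(f"https://www.google.com/search?q={quote_plus(query)}")
```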
Ultimately, though, fact-checking requires critical thinking and true discernment around fact, fiction, and context (so much context), which I doubt machines can ever replicate. It might not surprise you, however, that some outfits are trying to train AI models to conduct fact-checking…
If you’re at the health care journalists conference this weekend in LA, come to my Saturday session on how best to use AI as a freelancer. (And no, throwing your entire draft into an AI to have it ‘fact-check’ for you is absolutely not worth doing.)
Thanks for the tip on using -ai to shut off the AI-generated response on Google searches. I agree they are almost comically bad. One example: I was researching the Carroll Deering, a ship that was mysteriously abandoned off the coast of North Carolina in 1921. I needed to check whether the Carroll Deering passed through the Bermuda Triangle. “No!” Google’s AI reported confidently. But in fact, the Carroll Deering did pass through the Bermuda Triangle (which is as large as India, BTW) on the way to NC. Just one more example of AI being confidently wrong.