Running to Catch Up with AI

The box is open; the genie is out, and there’s no putting it back. OK. What do we do next?

Our umbrella company, Storyboard EMP, includes several education and media assets besides Pursuit and PI Education. Not that you necessarily care, but I tell you this to set up my story: in one of our other publications that’s NOT private investigator-related, we got a submission recently that was 100% generated by ChatGPT.

The article’s author was a colleague and a friend, and also a tech entrepreneur who’s not only widely respected in his industry but also just a damn good guy. He wasn’t trying to slip one by us with the ChatGPT submission. He did it with full transparency and included a disclosure statement about how he created the post. Most of all, he “wrote” the piece this way to make a very specific point about the perils of knee-jerk skepticism whenever a breathtaking, industry-changing new technology comes along.

This was a conundrum: We saw the point he was trying to make. It’s an important and timely one. His profession (and also Hal’s first career), real estate appraisal, is at a crossroads. New tech has swiftly transformed appraisal from a clipboard-and-measuring-tape to a laser-and-mobile-app process, and incoming regulatory changes are about to render the old clipboard ways entirely obsolete. These changes feel existentially threatening to many folks, especially the mid- to late-career old guard. Meanwhile, there’s a debate raging about how much of an appraiser’s work could be automated and subbed out to algorithms.

So, it’s easy to understand why appraisers are scared, mad, and very vocal in their resistance. Their livelihood seems in jeopardy. But what’s perhaps even worse is the tacit message that their expertise doesn’t matter, that the knowledge and experience they’ve spent decades honing can be easily replaced by AI and algorithms.

Long story short, our colleague’s article was meant to reassure readers that transformative new technology doesn’t have to be scary and threatening, and to gently warn them that they ignore it at their peril. It was breezy and entertaining, a quick roundup of crux moments in history, e.g., printing presses, digital photography, light bulbs, and streaming services. The message: The turning points are clear in hindsight; the real trick is to spot them in the moment and react accordingly.

With all that in mind, I still had reservations about publishing the piece. I thought we needed to pause a moment to consider the precedent we were setting if we published a wholly AI-generated article. I was pretty sure it would open the floodgates for other contributors to send articles “prompted by a human” but composed by AI. I’m sure we already HAVE gotten some submissions like this, and it’s no longer easy to tell whether a breezy, bland, and vague submission is human- or AI-generated. I realized we needed a policy, and we needed to think very carefully about whether to publish this colleague’s experimental AI article.

My Own Knee-Jerk Skepticism Enters the Chat

Full disclosure: I felt ambivalent about my own resistance. I’ll confess my bias right here: As a longtime writer and journalist, I’m repulsed by the AI slop that’s now flooding all kinds of media, including social media and even, on occasion, legacy media (e.g., that outrageous story about Sports Illustrated and other big-media publications posting AI-generated articles by “writers” who do not exist).

Obviously, it’s unethical and embarrassing to post an AI-generated reading roundup of made-up books. But my reservations go beyond the very real possibility of publishing a bot’s hallucination and wrecking your publication’s credibility: I object to the very premise of using AI to produce more media faster, because the more-is-better, ad-and-click-driven media revenue model is spectacularly broken. Too often, it rewards outrage-inducing, deceptive headlines and hot takes that try to ride the coattails of a viral moment — tendencies that add fuel to our polarized, destructive societal discourse. And AI makes it easier than ever to flood the zone with too much of everything. PC Magazine recently reported that more than half of online articles are now generated by AI. I find that deeply discouraging. Because to me, more is not better.

As a writer, editor, and avid reader, I don’t want to swim in a slop sea every time I go online. I’d rather double down on real writing and journalism: Give me multi-year investigations that involve thousands of documents and scores of interviews. These longform pieces exist because an actual human has a genuine question about the world and chases the leads wherever they go. Sometimes, they may even be surprised by the answers and adapt their hypothesis accordingly.

How likely is a chatbot to do that? Not very. AI wants to please you. It’s more likely to tell you what you want to hear than challenge your thinking.

I’m sure some journalists out there (who are a lot smarter and more forward-thinking than I am) have already figured out ways to use AI as an assistant for organizing research, shaping ideas and structure, looking for logical fallacies, and reviewing troves of documents faster. I get that. But as far as I know, the chatbots cannot think for us just yet. They can’t get curious or go out into the world and talk to people. They can’t experience the exhilaration of discovery. That’s why, even though AI can assemble words into an order that is legible, I contend that it cannot truly write. Because writing is thinking. The value of an article or a book is in questioning, searching, finding, learning, changing your mind, and putting the puzzle pieces together. Skip that, and you’ve robbed yourself and readers of what’s truly essential: the process of discovery, both external and internal. And if someone can’t be bothered to take the time to research and write something, I don’t see why I should be bothered to read it.

A shorter way of saying all that is this: AI slop is making me feel that my writing skills have no place in our near future. Like appraisers, I feel devalued. So my resistance was, and is, deep and personal. And as I considered our colleague’s article, I wondered whether I could be objective about publishing it. I wasn’t sure what to do. So I got on the phone with Hal.

Human vs. Bot

Hal felt the same way I did, more or less. First, he dug into the piece and spent a few hours fact-checking the historical anecdotes. (DUH. That’s the first thing I should have done.) He then called back to let me know what he’d found: All those fun, simple, plausible stories that felt vaguely familiar and true were, at best, oversimplified and, at times, factually wrong.

In a way, this made the decision easier: We can’t publish because there are factual errors. There was our answer. But it didn’t address the larger question: What’s our policy on AI articles going to be moving forward?

I considered that while Hal got on the phone with our colleague and explained our position: We respect you, and we absolutely want your voice in our publication. But we want it to be YOUR voice. Then he broke the thornier news: We also want the facts to be accurate.

To his immense credit, this colleague was open to getting rid of the AI-generated historical anecdotes and rewriting the piece using his own personal experiences. The result was a marvelous piece that got his point across much more effectively — because it was true, engaging, and personal. We published that piece and got excellent feedback on it from other appraisers.

Then we turned our attention to our AI policy for Pursuit and other publications. We wanted to be realistic about the fact that people are going to use AI as a writing assistant, while making a plea for transparency and humanness. Here’s what we added to our submissions guidelines and FAQ page:

Do you accept AI-generated articles?

We want real stories written by real humans. We want our readers to enjoy your thoughts and ideas, your experiences and hard-won lessons, your arguments and positions. We want them to benefit from your thought process. So please don’t send us articles that are wholly AI-generated.

That said, some use of AI is acceptable. For example: Did you use AI to copy edit or smooth your style? Did it help you with research and organization? Did you prompt it to generate a first draft, which you then fact-checked and rewrote significantly? Those uses are, to varying degrees, valid. What we will reject is an article entirely written by AI.

Simply disclose to us how you wrote the piece, to what extent you used AI, and whether you double-checked facts and sources. Details matter. Dates matter. Factual information and resulting inferences or conclusions matter. Never trust that AI is telling you the truth.

If we run a story that has your byline, we expect you to have written that story. We will always consider whether any written piece has value for our audience and what kind of disclosure we might need to add.

Rethink. Revise. Repeat.

That’s where things stand now. I imagine that we will revise this policy many times. Meanwhile, a recent contributor disclosed that he’d used AI to polish and copy-edit a piece, and I was incredibly grateful for his candor — we gladly published his article. And as AI improves, I imagine that someone will eventually slide an AI-generated piece by us. Maybe they already have. But I’ll always wonder: Why do you want to?

Yes, writing is hard. I can’t do it both fast and well. Sometimes it makes me want to scream and pound the desk. But I get immense satisfaction from figuring out what I think about a thorny issue by spending several hours, days, weeks, or years wrestling with it.

The true value of writing isn’t about the result. Writing is thinking, and the finished work is a travelogue of your inquiry. I wasn’t 100% sure what I thought about AI’s role in media until I wrote this piece, and I’ll probably change my opinion a thousand times over the next year or two. But what I’ll never do is ask ChatGPT or Claude to think or write for me. Granted, we shouldn’t define ourselves by our work or professions. But I do define myself as a reader and writer who finds joy and purpose in wrestling with ideas. These are the things that make me human. I refuse to give them up.


About the author:

Kim Green is a writer, public radio producer, and occasional flight instructor. She’s produced stories for NPR and Marketplace and was the co-translator of Red Sky, Black Death and co-writer of Slow Noodles: A Cambodian Memoir of Love, Loss, and Family Recipes.