Editing For Robots
It's becoming increasingly difficult to tell real writers from artificial intelligence. That's bad news for the humans.
I’m not sure I underestimated, prior to Trails, the number of emails that editors get every day. But three years in, it still overwhelms me. I spend a significant part of my day—pretty much every day—reading them, responding to them, and organizing them. Essentially all of the stories we run in Trails come from the minds of our contributors (it’s relatively rare that we have a specific story idea and ask someone to write it for us), so it’s obviously an important part of my job. And we clearly get a lot of really great ideas. But for every story we actually assign, I probably read at least 30 emails that go nowhere.
Maybe 10% or less of our pitches go immediately from the pitch email to our pitch review meeting.
Something like 60% of the emails I get are obvious nos, and for those I have a generic “Thanks but no thanks” email that I’ll send in response.
And another 30% require some back-and-forth. “Interesting idea, but tell me a little bit more about this one part.” These are the ones that take real time. Maybe I just want to know how they’re going to organize it or who they’re going to talk to. Maybe I want to see more of the writer’s clips to gauge their experience. Maybe I liked some smaller part of their pitch and want to reframe it to make it a better fit for us. Back and forth, back and forth, working out the kinks until we feel good enough about the story and the writer to give them a contract to write it.
I was somewhere in this refinement stage, a few weeks ago, when I realized I was actually talking to ChatGPT.
In this case, it took a few emails to figure out. Their initial pitch:
The narrative follows a multi-day backpacking route through Arizona’s remote Sky Island ranges — the Atascosas, Pajaritos, and Santa Ritas — where jaguars have quietly returned…Part wilderness trek, part wildlife mystery, the piece explores how we connect — physically, emotionally, and ecologically — to wild landscapes we may never fully understand.
Honestly, I’m intrigued. Cool destination. Obvious backpacking connection. Interesting environmental angle. And unique framing. But having never worked with this writer before, I wanted to prod him a bit to get a fuller picture. From me:
What was the trip like? What were the “plot points” of your adventure that you’d use in this story? And have you done any reporting about the jaguars’ return yet? I’m curious if it’s more than just “they’re making their way back.” What makes all of this unique?
His response was decent as well. He told me more about the trip and what made it interesting, as well as the specific anecdotes from the adventure that he’d write about. He also gave me more information and context for the environmental aspect of the story. I was starting to feel like this could be a feature we might actually want. Until I got to the bottom of his email. Pasted below his signature was a very lengthy conversation this writer had been having with ChatGPT.
It got worse. Down toward the end of this errant paste, ChatGPT had written the exact email he had sent me above. He had gone back and forth with the robot a few times, asking it to make the reply shorter or to emphasize different angles. And when he got a response he liked, he copied it and pasted it into an email to me, without realizing he had also accidentally copied and pasted his entire conversation with AI.
After 10 back-and-forth emails with this writer (and who knows how much time spent thinking about this story, discussing it with my team, and talking with the writer about it), I finally passed on it.
I wish I could say this phenomenon is unique. This particular situation—where a presumably real writer was using AI to pitch a potentially real story (it should go without saying: if you can’t write a simple pitch email without plagiarizing AI, how can I expect you’ll be able to write the actual story without it?)—is just one genre of AI-influenced pitches we get. On the other end of the spectrum are pitches like this:
I’m a backpacking and endurance writer with credits in [Your 2–3 best/relevant publications, e.g., The Trek, Backpacker, Outside Online, or even strong personal blog if no big clips yet]. I’ve thru-hiked the [name a long trail you’ve done] and regularly push weird self-supported experiments in the backcountry.
Or:
I’d like to pitch a 1,200–1,500 word feature for Issue 13, chronicling my multi-day backpacking trek across [specific desert, e.g., the Mojave].
OK, one more:
I’ve been reading the latest issues of Trails and was especially struck by how you weave immersive storytelling with practical insight for backpackers who live the trail—not just visit it. Your piece on [mention a recent feature, e.g., “the San Juan Mountains thru-hike”] reinforced how your readers value honest journeys and the subtle ways nature shapes us. Denver Westword+1
Yeah, in that last one, the robots actually included links to the sources they used to learn about us.
Emails like these are admittedly pretty funny, and easy to just ignore. But if it weren’t for the obvious fill-in-the-blank bits (which I’m sure the content farm sending them would have preferred didn’t make it into the email), I might have spent a bit more time considering them. Generally, these are simpler pitches for Places stories or other front-of-book rubrics that don’t require quite as much nuance, and for which we’re more likely to work with writers we haven’t previously worked with. They’re easier pitches to write, even for AI, and at least in theory, it should be much more difficult for us to recognize that they weren’t written by a real person.
AI slop like this is showing up more and more in the media. Just last week, Bloomberg (which itself has had issues using AI) reported on how robot-written recipes are flooding the internet, making it difficult to tell which online recipes are actually recipes and which are some robot’s crowdsourced chemistry experiment. Similar to the fill-in-the-blank emails we get, these stories are relatively simple, two-dimensional, (theoretically) easier for a robot to write, and probably more difficult for an editor to catch.
It’s the feature pitches that scare me even more. Last week, RE:PUBLIC’s Chris Keyes shared his own story about interacting with a writer of dubious humanity. Her name is Victoria Goldiee. She pitched RE:PUBLIC’s editors a story, as Keyes wrote, about “the growing phenomenon of people on the fringes of poverty who are squatting on public lands, as well as the policy implications and challenges this issue presents for land managers and policymakers.”
That’s not a simple idea. Her pitch had nuance, was timely, and explored an element of public lands that not everyone knows exists. She gave the editors a reporting plan, somehow talked to them on the phone, and turned in a first draft that Keyes called “stellar.” But something indescribable still felt off: “Still, there was something of an uncanny valley quality to the draft. It was difficult for either of us to articulate, but mild doubt was beginning to take shape,” he wrote.
Another search for the writer’s name led RE:PUBLIC’s editors to this story, from The Local’s Nicholas Hune-Brown. Long story short, Hune-Brown was about to assign a story to the same Victoria Goldiee when he discovered the writer had fabricated quotes, lied about bylines, and was very likely using AI to write.
We’ve been getting pitches from Goldiee as well—for years. We actually printed one of them.
It was a simple backpacking meal recipe for “West African Sweet Potato and Peanut Stew” that we ran in Issue 3, back in August 2023, before anyone recognized the threat AI posed to magazines like ours. It was imperfect, but we chalked that up to inexperience writing recipes, and potentially a language barrier—in the story, she talked about eating a meal like this growing up in Nigeria, and I suspected English was not her first language. But it was nothing we couldn’t edit. Our photographer made the meal and suggested a couple of minor changes, and it turned out delicious.
The story didn’t quote anyone and was fact-checked as thoroughly as you can fact-check a recipe (I actually ate it myself before going to print). We had no reason to believe there was a problem with it when she pitched us and no reason to think it was flawed when we got a draft. If I’m being honest, I’m not sure we would have ever suspected an issue if it weren’t for the reporting from RE:PUBLIC and The Local—a scary thought. I still feel good about the factual accuracy of the recipe and its usefulness for readers.
To be fair, we still don’t have any way of knowing if Victoria Goldiee wrote this herself, got some help from a large language model (similar to the jaguar pitch I got, above), or if the entire recipe was written by ChatGPT. But regardless of the factual accuracy, we want every story in Trails to be solely the product of human creativity. So if you’re a Trails reader, I apologize that a dubious story like this ever made it to you.
Recently, we’ve gotten more elaborate pitches from Goldiee.
The one that got the furthest was an essay about how we always want our outdoor gear to be “indestructible,” using that cultural observation as a way to explore how we equate toughness with virtue, in both marketing and our own self-image:
The larger question I want to explore is whether this mindset — that things (and people) should be built to last forever — might actually work against us. Maybe it encourages waste in some cases (when products overpromise and fail) or emotional burnout in others (when people try to embody the same “indestructible” ideal).
In short, I’d like the piece to use gear marketing as a mirror for how we think about resilience more generally — and to question whether we should be celebrating a bit more impermanence or softness instead.
Yet again, I exchanged a few emails with Goldiee to flesh out the pitch. In retrospect, her pitch got better and became more interesting as I prodded her with questions and critiques and she was better able to frame the story through the lens I was subconsciously applying to it. In trying to understand her pitch, I was telling her what I was looking for, and her AI was feeding it right back to me—I was talking to myself.
We ultimately passed on the idea, but not for lack of trying. We discussed it as a team several times and chewed on it for longer than I’d like to admit. And, honestly, I can’t completely put my finger on why we turned it down. So many of the pitches we get either do or don’t become Trails stories because of some vague, personal “feeling” we have about them. This one just didn’t feel quite right for us—and there’s no better explanation than that. She didn’t accidentally paste her conversation with ChatGPT into the email—if she had, it would have been easy.
While it’s probably been going on longer, it genuinely feels like this influx of AI pitches has ramped up in only the last few months. In the past, we could send out a call for pitches and expect to get a healthy mix of both good and bad pitches back. A few weeks ago I made the mistake of posting a fairly specific call for pitches to my personal LinkedIn. Since then, we’ve been inundated with spammy, fake, useless emails. The ratio of bad pitches to good ones has skyrocketed.
We can keep on top of it. We’re still going to read every email and respond to all the ones that are clearly human. Chris Keyes’ solution is likely the simplest: fact check. While the number of publications paying someone to re-report a story and confirm the accuracy of every detail within it is declining, we still have that luxury. In fact, I want to invest in that stage of the editorial process even more at Trails.
The people I truly fear for during this age of AI writing are the actual writers. For starters, don’t expect to find our calls for pitches on LinkedIn or in freelancer newsletters anymore. Those spots are just too easy for the bots to pick up. But the bigger issue comes once you’re in my inbox. The influx of sketchy pitches had already created a chilling effect among my team and me, even before we found out about scammers like Victoria Goldiee. Frankly, we don’t know who to trust. I’m forced to read pitches—even ones I like—with a new level of skepticism. Instead of just convincing me of the story you want to write, you need to convince me it’s actually a person writing it. If you’re a new writer without a ton of clips, that’s going to be incredibly difficult. I undoubtedly will pass—and probably already have passed—on good pitches because I wasn’t completely sure the person pitching them was real. It’s suddenly far less likely that I take a chance on a writer I don’t know unless they can somehow prove to me that they’re human—a crazy thing most of us editors probably never thought we’d have to worry about.
Joking with my team earlier this week, I said I felt like I needed to get on the phone with everybody we were considering working with to confirm that they’re real (though Goldiee somehow circumvented that safeguard anyway). “Yeah, unless we meet for coffee, we’re not assigning anything to you,” commented our managing editor, Stasia. “FaceTime me and send me a copy of your driver’s license or no deal,” I said. In the moment, I was joking. But after thinking about it for a moment, I wondered if I wasn’t.
AI has come a long way in an incredibly short amount of time. It felt like just last week we were making fun of it for giving people extra fingers, and now we’re fighting to determine which emails came from humans and which came from robots, using vague gut feelings to tell the difference. AI is only going to get better. We haven’t found a long-term solution to this problem. If we as readers and editors only want to enjoy stories from real humans (as we should), we’re going to need to find new ways to guarantee the provenance of the words we’re reading. I’m not sure that means asking for an ID before signing a contract. But it’s not far off.

Once I had a quick Zoom with an editor where I had to show my driver’s license on camera. I had to hold it close enough that she could read it. This was after I submitted the draft; she wanted to make sure I was a real person before publishing. At the time I felt it was too invasive, but after reading your post I feel differently. I would do it again if need be, though I am sad we got to this point. But I understand now.
I imagine the recipe you printed from Goldiee was truly her own writing... I recall in that essay the editor commenting that her writing was more raw and human in the beginning. Likely, she hadn't discovered how to fully harness AI yet in early 2023.