Are There Scenarios Where AI Should Be Used to Create Content?

By Jacob Cohen Donnelly May 24, 2024

Any technological innovation that lasts has the ability to do good and evil, often at the same time. Social media has allowed us to connect with people worldwide (I’ve built friendships on social media) while also negatively affecting people’s emotional well-being. As a society, we are better because of social media, I think, but we are simultaneously worse off.

The advent of generative AI is precisely the same. It will likely destroy many media companies as more content is consumed in AI-generated answers rather than on a publisher’s website. At the same time, it introduces a lot of potential value to those same publishers from a content perspective.

But how far is too far when it comes to using AI in content?

Some will say you should create content from start to finish using AI. I’ve seen quite a few in the SEO community trumpeting their ability to rank for thousands of keywords thanks to this technology. It’s not a long-term strategy, I suspect, because Google won’t want that sort of content in its SERPs. But there are still some who believe it’s a viable strategy.

Then, there are those who are fundamentally opposed to using AI in creating content. In January 2023, Dotdash Meredith’s CEO, Neil Vogel, told Axios’ Sara Fischer:

We will never have an article written by a machine. We are not denialists. We actually think it’s an incredible opportunity for us.

On Thursday, Morning Brew’s Chief Content Officer, Devin Emery, tweeted:

Execs in media talking about AI like this betrays a lack of strategy. Morning Brew doesn’t and won’t use generative AI for any work core to producing our content. Integrating AI into your content creation outside of workflow & productivity tools is to commoditize your brand.

The “this” in Emery’s tweet was a reaction to the Washington Post’s internal strategy announcement, where the chief technology officer told staff that the paper needed to have “AI everywhere in our newsroom.”

Emery is a former colleague and an AMO Pro member, so I shot him a Slack to discuss this. I have warmed to the theory of using AI for content creation in very specific places, whereas he has obviously taken a clear stance against it. But there’s nuance.

It’s “not that these tools cannot be used,” he Slacked, “but once you take the leap into using them to create core elements of your content, you’re on the path to commoditization. If everyone takes this route, content will drive towards a singularity.” And in many respects, he’s not wrong.

And, frankly, I don’t think users will want AI-generated content either—at least not in its current iteration. There have been loads of comical examples, such as Google’s AI tool recommending a little bit of glue to keep cheese from sliding off a pizza and claiming that tobacco has positive health benefits.

But I think there is a middle ground between Emery’s stance and the free-for-all that could occur if AI was allowed to create content willy-nilly. And to be clear, I think that middle ground is still miles closer to Emery’s than to the other extreme.

To articulate it, imagine the creation of a piece of content. There are, in many respects, three major steps:

  1. Research and sourcing
  2. Writing the article
  3. Editing and rewriting

The quality of #1 dictates the entire outcome. If you have lousy sourcing or don’t have the time to research, you won’t deliver a good product. But in an era of decreased resources and increased expectations, we must publish faster than ever. This is why reporters tend to return to the same sources. It saves time.

Then comes the writing. You take all that research and sourcing and put it into a format readers will like. For many, that’s the long-form article. For Axios, that’s SmartBrevity. For me, it’s the rambling chaos that is an AMO newsletter. But whatever the case, you likely shouldn’t do step two without first doing #1.

I’ll pause here to ask a question: what does the reader care about when reading an article? Is it the:

  • Information?
  • Writing?

At least in the context of business news, I would argue that it is the information. So, if you can spend more of your time on #1 and reduce the time spent creating the actual article, the output should be better for the audience. However, AI is not perfect by any means. I took the Slack chat between Emery and me and put it into Perplexity to see how the article came out.

“One paragraph in. It’s not really right,” Emery said, having read the output. “Editor Jacob has his work cut out for him!”

And he was right. The article confused key points and assigned opposing positions to us. You can read the full exchange I had with Perplexity here. I likely did not give it a good prompt, a skill I have not yet developed. So, I scrapped the draft.

But that’s the point… no draft is good enough to publish without going through step 3, which is editing and rewriting. Again, we can take a theoretically okay draft and make sure it is factually correct and enjoyable to read. But instead of writing from scratch, you’re editing and fixing along the way. Editors might disagree, but I suspect that takes less time than the original creation of the article.

So, by focusing more energy on steps one and three—only possible if you cut time in step two—you wind up delivering a product that could be better for the audience. But nothing is black and white. In some cases, using AI might make sense.

Take Axios, for example. It believed—rightfully, some might say—that as long as it could get the best information as quickly as humanly possible, people would not only tolerate but perhaps appreciate the bulleted style of article writing. And so, couldn’t its team of great reporters spend more time sourcing, put that information into a SmartBrevityAI tool, and then edit the output? I suspect the audience would benefit immensely from that.

Compare that to The New Yorker, which doesn’t have a need for speed in the slightest. There, you’ve got people reading it for good information and because they love long-form writing. Because the brand is so tightly coupled with that style of writing, relying on AI might hurt the brand. Or, if it doesn’t hurt the brand directly, the editing might take so long that writing from scratch would have been smarter.

What I am describing here is an augmentation of people, not a replacement. You still need people to source and research. And you still want a person to ensure the output from the AI is up to standards. However, as one editor told me, a journalist has two primary jobs: reporting and writing. Some are very good at the former but awful writers. Others write beautifully, but the reporting is garbage. This could help those who are very good reporters, even if they’re not good writers.

I will end with this as a repeated statement: if you think you can replace people with AI to create content, you will race toward a world of commodity content. That is useless to the user. No piece of content I’ve ever asked AI to create for experimentation has ever been worthy of AMO’s audience. And as I close this 1,300-word article, I can tell you that I wrote it all starting with the word “Any.”


Thanks for reading today’s AMO. If you have thoughts, hit reply or join the AMO Slack to discuss this and other media topics. I hope you have a wonderful weekend.