You’ve either heard that existing providers of generative AI can’t deliver high-quality long-form content, or you’ve tried them yourself and you’re the one telling everybody.
Well, you’re wrong.
But you’re also right: some of the biggest providers of generative AI simply can’t produce the quality you need for your long-form content. It’s fair to say that many thousands of users have already found a use for the likes of Jasper and Copy.ai, but you haven’t.
McKinsey estimated in June 2023 that generative AI could add at least $2.6 trillion in value across 63 use cases spanning 16 business functions. Notably, around 75% of that value would fall under marketing and sales, customer operations, software engineering, and R&D.
Yet you can’t seem to find a provider who can generate a blog post at the quality you require. So what’s the issue?
No, It’s Not Prompt Engineering
The Harvard Business Review published a compelling piece last June with a simple premise: prompt engineering is not the future. Prompt engineering is merely the surface-level application of a subtler, higher-level strategic skill that most teams critically lack: problem formulation. On its own, the context you pack into your prompts is beside the point. It doesn’t matter whether you use a persona or provide few-shot examples if you don’t understand the actual problem.
Often, the people doing the prompting think the problem is “I need a blog post on this topic.” That’s unhelpfully sparse. To be fair, proponents of prompt engineering do get some things right when they recommend adding context, examples, and output patterns. What they lack is the strategic principle that should underpin those tactics.
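To make those tactics concrete, here’s a minimal, illustrative sketch in Python using the OpenAI SDK: a persona, a single few-shot example, and an explicit output pattern. The company, topic, and example copy are invented placeholders, and the sketch assumes an API key in your environment.

```python
# Illustrative only: the usual prompt-engineering tactics in one request.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    # Persona / context
    {"role": "system",
     "content": "You are a senior B2B content writer for a SaaS analytics company."},
    # Few-shot example: one sample input paired with the kind of output you expect
    {"role": "user",
     "content": "Topic: onboarding churn. Write a two-sentence hook."},
    {"role": "assistant",
     "content": "Most onboarding flows lose users before the first 'aha' moment. "
                "Here's how to find exactly where yours leaks."},
    # The actual request, with the output pattern spelled out
    {"role": "user",
     "content": "Topic: self-serve pricing pages. Write a two-sentence hook in the same style."},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

Useful tactics, but notice that nothing in that request defines what problem the finished piece is actually supposed to solve.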
Generative AI is like a potentially capable but often airheaded virtual assistant that makes things up sometimes. You need to thoroughly understand whether it can actually solve your problem; often, it can only solve parts of it. Likewise, you need to understand what it can do with the problems you do hand it. This is why problem formulation, which involves decomposing complexity, reframing it, and designing constraints, is part of the solution.
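As a rough illustration of what decomposition and constraint design can look like in practice, here’s a hedged Python sketch that breaks “write a blog post” into smaller, constrained sub-problems. The brief, file paths, section count, and word limits are hypothetical, and generate() is just a thin wrapper around a GPT-4 call.

```python
# A sketch of problem formulation: instead of one "write a blog post" prompt,
# the task is split into sub-problems the model can realistically handle,
# each with explicit constraints and an output a human can inspect.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Thin wrapper around the model call; GPT-4 used here as an example."""
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

# A case-specific brief built from your own material (hypothetical path).
brief = {
    "topic": "self-serve pricing pages",
    "audience": "SaaS founders",
    "facts": open("notes/pricing-interviews.md").read(),
}

# Sub-problem 1: outline only, under explicit constraints.
outline = generate(
    f"Using ONLY the facts below, propose a 5-section outline on {brief['topic']} "
    f"for {brief['audience']}. Do not invent statistics.\n\nFACTS:\n{brief['facts']}"
)

# Sub-problem 2: draft one section at a time, so each piece stays reviewable.
sections = [
    generate(f"Draft section {i} of this outline in at most 300 words:\n{outline}")
    for i in range(1, 6)
]

# Sub-problem 3: the model flags claims for a human to verify, rather than
# being trusted to fact-check itself.
claims_to_verify = generate(
    "List every factual claim in the draft below that needs a source:\n\n"
    + "\n\n".join(sections)
)
```

None of this is sophisticated engineering; it’s the problem-formulation work of deciding which parts of the job the model can be trusted with and which it can’t.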
This is also why many generative AI providers might be too generic for your purposes or desired quality. These are often just a step up from free generative AI systems like ChatGPT and Bard. If you feel like they can’t deliver the quality you want, that’s because you need solutions derived from well-defined, case-specific problems and processes.
In short: if you work on it enough, you WILL get high-quality output from today’s generation of generative AI.
Solving the Wrong Problems
Now, there are excellent, specialist writers who can write unique, high-quality, in-depth articles in a couple of hours. They’re often rightfully expensive to hire, especially compared to generative AI.
So if reaching their level of quality means spending a ton of time on AI to get there, won’t you be losing the competitive advantage you wanted in the first place? If this is a huge issue, you might be trying to use generative AI to solve the wrong problems.
A few telltale factors indicate whether generative AI can apply to your problem:
- You can solve it with repetitive, templated, or patterned content or content tasks.
- You have enough well-defined, resource-rich material to draw on.
- You possess the expertise to develop everything you need and to pull those resources together into custom generative AI prompting.
Let’s say you want every blog post to be entirely distinct from the others. In that case, patterns and templates are out of the question. Or suppose you want to publish about an emerging or obscure field with little information available about it. That also greatly reduces the usefulness of generative AI.
Even if you satisfy the first two bullet points, you still need to meet the third.
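To picture what the first two criteria look like together, here’s a speculative sketch of templated, patterned content generated from well-defined material you already own. The CSV file, its columns, the comparison-page use case, and the generate() helper (the same thin model wrapper sketched earlier) are all invented for illustration.

```python
# Patterned content from structured source material: the template and the data,
# not the model, carry most of the structure. Everything here is hypothetical.
import csv

TEMPLATE = """Write a 150-word comparison of {a} and {b} for small-business buyers.
Use ONLY these verified facts:
- {a}: {a_facts}
- {b}: {b_facts}
End with one sentence on which kind of buyer each tool suits."""

def generate(prompt: str) -> str:
    """Same thin LLM wrapper as in the earlier sketch (placeholder here)."""
    raise NotImplementedError

with open("data/tool_facts.csv", newline="") as f:   # your own well-defined material
    rows = list(csv.DictReader(f))                   # e.g. columns: name, facts

pages = []
for a, b in zip(rows, rows[1:]):
    prompt = TEMPLATE.format(
        a=a["name"], b=b["name"], a_facts=a["facts"], b_facts=b["facts"]
    )
    pages.append(generate(prompt))
```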
Human Expertise Remains Irreplaceable
Any form of automation needs a “human-in-the-loop,” either before the automated process begins or after it finishes. In generative AI, the former is prompt engineering and the latter is post-editing. Of course, other human-led processes, such as model training and data annotation, exist outside of that loop.
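In code terms, the loop can be as simple as a pipeline in which nothing is published without an editor’s sign-off. The sketch below is a bare-bones outline; draft_post(), request_review(), and publish() are hypothetical stand-ins for your own drafting, editorial, and CMS steps.

```python
# A minimal human-in-the-loop shape: prompt engineering happens before the
# automated step, post-editing happens after it, and a human can always veto.

def draft_post(brief: dict) -> str:
    """Automated step: build the prompt(s) from the brief and call the model."""
    raise NotImplementedError

def request_review(draft: str) -> str | None:
    """Human step: an editor rewrites, trims, or rejects the draft."""
    raise NotImplementedError

def publish(final: str) -> None:
    """Hand-off to whatever CMS or channel you use."""
    raise NotImplementedError

def produce(brief: dict) -> None:
    draft = draft_post(brief)        # prompt engineering sits in front of this call
    final = request_review(draft)    # post-editing sits behind it
    if final is not None:            # the editor's veto is part of the design
        publish(final)
```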
So, the bottom line is that expert writers aren’t out of a job. They’ve got new ones.
You need your experts to work as content strategists who can:
- Identify opportunities for generative AI to automate and improve content-related tasks and processes,
- Wrangle generative AI to guide and optimize its performance, and
- Help you connect generative AI-driven tasks to other parts of your organization.
Naturally, not every expert writer can pull this off; sometimes you may need a small team to successfully do all three.
It’s a much more involved process than simply buying a provider subscription and generating blog post after blog post, and it will probably stay that way for many businesses that are after genuine quality. The fact is, at this stage of generative AI, and for the foreseeable future, you probably need a consultative phase in which real expertise points you in the right direction before you jump in and implement. Generative AI is a transformative technology. It takes much more than just the right prompt.
Indeed, current generative AI capabilities (GPT-4 is the de facto standard at the time of writing) can already produce high-quality long-form content. But to harness them effectively, you need to fully understand the problem you want them to solve. You need to apply them to the right tasks and content. Last but not least, you need to implement sound human-in-the-loop policies in the right places.