Sarah Packowski
2 min read · Feb 25, 2025


No, this isn't training the model. Technically, this pattern is called in-context learning, and the model's weights are not changed when this sort of prompt is submitted.
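For anyone unfamiliar with the term, here's a minimal sketch of what in-context learning looks like in practice. The `call_llm` function is a placeholder, not a real library call; the point is that all the "learning" travels in the prompt.

```python
# A minimal sketch of in-context learning. call_llm() is a placeholder,
# not a real library call; swap in whatever model API you actually use.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your model provider's API call")

# The "learning" lives entirely in the prompt: we pass the new facts as
# context, and the model's weights are never touched.
context = "Feature X, released today, lets users export reports as CSV."
question = "How do I export a report as CSV?"

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
answer = call_llm(prompt)  # answers from the prompt, no retraining
```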

My team works on customer-support content for software, including product documentation. As our developers add new features to the software, the documentation is updated almost daily. There's no time to retrain the model on each update (and it would be prohibitively expensive anyway). So RAG or agentic solutions using that documentation as a knowledge base must always pull the latest information into their prompts.
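In pseudocode, that pattern looks something like the sketch below. Here `search_docs` and `call_llm` are hypothetical stubs standing in for a real document index and a real model API; what matters is that the knowledge base is queried fresh on every request, so yesterday's documentation update shows up in today's answer.

```python
# Hypothetical RAG flow. search_docs() and call_llm() are placeholder
# stubs, not real library APIs.
def search_docs(query: str, top_k: int = 3) -> list[str]:
    raise NotImplementedError("swap in your document index / vector store")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in your model provider's API call")

def answer_support_question(question: str) -> str:
    # 1. Retrieve the most relevant docs at request time, so the answer
    #    reflects whatever was published in the knowledge base today.
    passages = search_docs(question, top_k=3)

    # 2. Stuff the retrieved passages into the prompt (in-context learning).
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the documentation below.\n\n"
        f"Documentation:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. No retraining needed: the latest docs ride along in the prompt.
    return call_llm(prompt)
```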

Further, there's not a strong case for using generative AI to write the documentation that gets pulled in. Because information about new and updated features cannot have been in any data set used to pre-train an LLM, you cannot just prompt a model to write documentation for a new feature. Instead, you have to feed the LLM information about the feature by including it in the prompt. Basically, your prompt is like "Feature X does A, B, C. Write the docs for feature X." By the time you've put all the info into the prompt, you might as well have just written the documentation. Some generative AI use can streamline the writing process, but it's AI fixing human content, not the other way around.

Right now, people are building RAG solutions using content like Wikipedia as their knowledge base to avoid paying writers. But using that kind of source opens you up to quality problems in the content, or even malicious activity like someone intentionally planting false information in a Wikipedia article to make a RAG solution give a bad answer (known as data poisoning). I believe large organizations that need accurate RAG and agentic solutions will soon realize they must hire writers to produce knowledge-base content, and hire content strategists to optimize that content for RAG and agentic success.

On the other hand, crappy spam, pop-up ads, clickbait videos, drivel on low-quality news sites... that will all be created by generative AI, with a handful of sad writers doing the soul-destroying tasks you've written about. Another reality is that many organizations will fire writers because they've seen the hype about generative AI and think they can save money by firing people. Sure, they'll wind up having to hire those people back again, but those writers might have lost their houses in the meantime.

The hard truth is: It's up to us writers to learn about this stuff and support each other so we don't get swept away by terrible AI.
