A large language model (LLM) can be an effective devil’s advocate: listing arguments against a proposed plan as a way to explore the plan’s weaknesses and risks.
Large language models
Large language models (LLMs) are neural networks with billions of parameters. Trained on massive amounts of text (e.g., pages scraped from the internet), these models build up a vocabulary and internalize the statistical relationships between words.
From Wikipedia, playing the role of devil's advocate is:
A situation where someone takes a position they do not necessarily agree with (or simply an alternative position from the accepted norm) for the sake of debate or to explore the thought further
In the context of business, team members might feel they cannot argue against a proposed plan because they'll be seen as difficult, not team players, or even disloyal or lacking in dedication. Asking people to play devil's advocate gives them permission, indeed encourages them, to disagree. Empowering people to safely disagree can prevent groupthink and surface potential problems with a plan.
However, being the first person to speak up can be tough. So in this post, we’ll explore using a large language model to play devil’s advocate — to start the conversation off.
The following image demonstrates prompting an LLM to play the role of devil’s advocate and argue against a proposed plan.
*Black text is what I typed; highlighted text is output from the model.
The image above shows Prompt Lab, a new tool in watsonx.ai for experimenting with prompts for LLMs.
MURAL is an online tool that works like a virtual whiteboard: you can draw shapes, post sticky notes, and move things around. It's a fabulous tool for visually organizing ideas, designing solutions, and collaborating with teammates, in real time or asynchronously. I love using MURAL for team collaboration!
The team is considering some ideas to grow our business. We want to use the devil’s advocate approach to identify potential problems and risks with our ideas. We want to perform this activity collaboratively in MURAL. We want to seed our mural with sample arguments against our plans generated by a large language model.
Step 1: Create the sample notebook in watsonx.ai
In watsonx.ai, create a Python notebook from this URL:
Devil’s advocate notebook
See: Creating a notebook
Step 2: Section A of the notebook
Follow the instructions and run through the cells in Section A to discover the best prompt.
A prompt is the text and parameters you send to a large language model to cause it to generate the desired output.
The easiest way to figure out the most appropriate model and the most effective prompt text and parameters is to experiment in a tool like the Prompt Lab in watsonx.ai. After you have discovered the best combination, then you can move on to using the foundation models Python library to build your solution.
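As a sketch of where Section A ends up, the snippet below assembles a devil's advocate prompt and shows (in comments) how it might be sent through the foundation models Python library. The prompt template, model choice, and parameter values here are illustrative assumptions, not the notebook's exact settings; confirm the real names and values against the notebook and the library documentation.

```python
# A minimal sketch of building a devil's-advocate prompt.
# The template below is illustrative, not the one from the notebook.

def build_prompt(plan: str) -> str:
    """Assemble prompt text asking the model to argue against a plan."""
    return (
        "Play the role of devil's advocate. List three arguments "
        "against the following plan, one per line.\n\n"
        f"Plan: {plan}\n\nArguments:\n"
    )

prompt = build_prompt("Open a second retail location downtown")

# Sending the prompt (assumed API shape; requires watsonx.ai credentials):
#
# from ibm_watson_machine_learning.foundation_models import Model
# model = Model(
#     model_id="google/flan-ul2",           # assumed model choice
#     params={"decoding_method": "sample",  # sampling with some temperature
#             "temperature": 0.7,           # suits varied, creative arguments
#             "max_new_tokens": 200},
#     credentials=credentials,
#     project_id=project_id,
# )
# arguments = model.generate_text(prompt)

print(prompt)
```

Experimenting in Prompt Lab first means this code only needs to reproduce a prompt and parameters you already know work well.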
Step 3: Section B of the notebook
Follow the instructions and run through the cells in Section B to prepare a mural to work with.
Step 4: Section C of the notebook
Follow the instructions and run through the cells in Section C to generate arguments against each of the four proposed plans and then post the arguments on sticky notes in the appropriate quadrant of the mural.
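To make the sticky-note step concrete, here is a rough sketch of how generated arguments could be laid out in one quadrant of a 2×2 mural. The mural dimensions, note size, and quadrant geometry are assumptions chosen for illustration; the notebook's own helper functions and the MURAL REST API documentation are authoritative for the real payloads and endpoints.

```python
# Sketch: lay out generated arguments as sticky notes in one quadrant
# of a 2x2 mural. Dimensions below are illustrative assumptions.

MURAL_WIDTH, MURAL_HEIGHT = 1600, 1200
NOTE_SIZE, GAP = 140, 20

def quadrant_origin(index: int) -> tuple[int, int]:
    """Top-left corner of quadrant 0..3, numbered in row-major order."""
    row, col = divmod(index, 2)
    return (col * MURAL_WIDTH // 2, row * MURAL_HEIGHT // 2)

def sticky_notes(arguments: list[str], quadrant: int) -> list[dict]:
    """Build one sticky-note payload per argument, stacked vertically."""
    x0, y0 = quadrant_origin(quadrant)
    return [
        {"text": text,
         "x": x0 + GAP,
         "y": y0 + GAP + i * (NOTE_SIZE + GAP),
         "width": NOTE_SIZE, "height": NOTE_SIZE}
        for i, text in enumerate(arguments)
    ]

notes = sticky_notes(
    ["Rent downtown is high", "Staffing a second site is hard"],
    quadrant=1,
)
# Each payload could then be POSTed to the MURAL API's sticky-note
# endpoint (exact route and authentication per the notebook).
```

Stacking notes with a fixed gap keeps the seeded arguments legible, so teammates can immediately start dragging, grouping, and adding their own notes around them.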
What can you do with a tool that generates text as coherent as a human might write, but that doesn't understand what it's writing? One that doesn't know true from false, fact from fiction, or right from wrong? You could fine-tune the model, build guardrails to avoid egregious errors, and add fact-checking functionality to wrangle output. But the simplest use cases are ones that call for creative storytelling informed by common knowledge, where factual accuracy isn't required, and where indeterminate or wacky content is a feature, not a bug.