When to use prompt chaining for LLMs

Are you wondering what prompt chaining is in the context of LLMs? Or maybe you want to learn more about when you should employ prompt chaining for LLMs? Well either way, you are in the right place! In this article, we tell you everything you need to know to understand when to use prompt chaining for LLMs.

We start out by talking about what prompt chaining is and what type of data you need in order to use prompt chaining for LLMs. After that, we discuss some of the main advantages and disadvantages of prompt chaining. This provides useful context that will inform the later discussion of when to use prompt chaining. Finally, we provide specific examples of when you should and should not use prompt chaining.

What is prompt chaining for LLMs?

What is prompt chaining in the context of LLMs? Prompt chaining is a method that is intended to be used when your model is dealing with a large, complex problem that can be decomposed into smaller problems. LLMs generally provide the best answers, with the fewest hallucinations, when they face narrow, specific questions for which they have relevant context. When a model can answer directly from information it already has access to, it is less likely to get creative and produce responses that have no grounding in reality.

The way that prompt chaining works is by breaking down the problem that your model is going to address into smaller problems that have clear, factual answers, then feeding those problems to the model one at a time. In many cases, you also feed the model the answer it produced for an earlier problem. By breaking the problem into smaller pieces, the model is able to produce more accurate answers.
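The loop described above can be sketched in a few lines of Python. Here, `call_llm` is a hypothetical stand-in for whatever model client you actually use (OpenAI, Anthropic, a local model, and so on); the point is the structure: each sub-prompt receives the previous step's answer as context.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call your model's API.
    return f"<answer to: {prompt}>"

def chain_prompts(steps: list[str]) -> str:
    """Run each sub-prompt in order, feeding the previous answer forward."""
    previous_answer = ""
    for step in steps:
        # Each step sees the answer from the step before it as context.
        prompt = f"{step}\n\nContext from previous step:\n{previous_answer}"
        previous_answer = call_llm(prompt)
    return previous_answer

# Example: a support-ticket workflow decomposed into three narrow steps.
final = chain_prompts([
    "Extract the key facts from the support ticket.",
    "Classify the issue based on the extracted facts.",
    "Draft a reply that addresses the classified issue.",
])
```

Each step is a narrow question with a clear answer, which is exactly the situation where LLMs tend to hallucinate least.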

What data is needed for prompt chaining?

What data is needed for prompt chaining? When you are using techniques such as prompt chaining, you do not need to incorporate data into your model and prompt generation process in the same way that you do with methods like fine-tuning and RAG. What you mainly need is a clear understanding of how your problem decomposes into steps, plus some representative example prompts to test the chain against.

Advantages and disadvantages of prompt chaining

What are some of the main advantages and disadvantages of prompt chaining? In this section, we will walk through both, focusing in particular on how prompt chaining compares to other methods that can be used to enhance LLMs.

Advantages of prompt chaining

What are some of the main advantages of prompt chaining? Here are some of the main advantages of prompt chaining that you should keep in mind when deciding whether to use this technique.

  • Can improve predictive performance. By breaking a problem down into smaller problems that are more straightforward and easier for a model to address, you can often improve the predictive performance of your model. That means that you can generate responses that more appropriately address your users’ prompts.
  • Relatively low effort to implement. One of the main advantages of prompt chaining is that it is relatively low effort to implement. That means that it is something that is worth a shot, even if you are not certain that it is going to solve your problem. Some of the other methods that we mention are more labor intensive to implement, so they are only worth it if you are confident that they will serve you.
  • May be used with state-of-the-art models. Prompt chaining is something that you can implement fully on your side without having to make any modifications to the model itself. That means that it can be used with any model, including the latest state-of-the-art models. One caveat here is that if you are using a specific third-party tool to enable prompt chaining, there may be situations where that tool does not yet support a state-of-the-art model.
  • Does not introduce privacy concerns. Since you are generally not exposing much internal data to the model when you implement prompt chaining, there are few legal or privacy concerns to worry about related to exposing sensitive data. The model should not gain access to sensitive data without you realizing it.
  • Does not require large computational resources. Since you are not making any modifications to the model itself when you implement prompt chaining, you do not need to have access to large computational resources in order to implement prompt chaining.
  • Does not remove guardrails that are built into base foundation models. Since you are not making any modifications to the model itself when you implement prompt chaining, the model maintains any guardrails that were enforced on it to ensure that it acts in an appropriate manner. This is primarily a concern when working with models that are hosted by third party vendors.
  • Easier to troubleshoot. If you have a prompt that is inducing a bizarre response, using chained prompts often makes it easier to understand what is going on. Since the problem is broken down into multiple stages, each of which has its own output, you can inspect the output of each stage to see where the problem is being introduced.
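The troubleshooting advantage in the last bullet is easy to get in practice: record each stage's prompt and response as you run the chain. A minimal sketch, again using a hypothetical `call_llm` stand-in:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for your real model client.
    return f"<answer to: {prompt}>"

def chain_with_trace(steps: list[str]) -> tuple[str, list[tuple[str, str]]]:
    """Run the chain and keep a (prompt, response) record for every stage."""
    trace = []
    answer = ""
    for step in steps:
        prompt = f"{step}\nPrevious answer: {answer}"
        answer = call_llm(prompt)
        trace.append((prompt, answer))
    return answer, trace

answer, trace = chain_with_trace([
    "Summarize the report.",
    "List action items from the summary.",
])
# When the final answer looks wrong, print the trace to find the bad stage.
for i, (prompt, response) in enumerate(trace, start=1):
    print(f"--- stage {i} ---\n{response}")
```

With a single monolithic prompt you only ever see the final output; the per-stage trace is what lets you pinpoint where a bad response first appears.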

Disadvantages of prompt chaining

What are some of the main disadvantages of prompt chaining? In this section, we will lay out some of the main disadvantages and pitfalls of prompt chaining.

  • Can introduce additional latency. When you introduce prompt chaining into your system, you are introducing more steps that you need to go through before you can respond to a user. These additional steps are bound to introduce some additional latency into your system.
  • Adds additional prompts and tokens. When you are using prompt chaining, you generally feed more text into the model than you would have if you had only used a single prompt. That means that there may be more costs associated with generating responses to your prompts.
  • Cannot address situations where the model simply does not know the answer. While prompt chaining can improve the responses that a model provides in situations where the model has the right context to be able to solve the problem at hand, it cannot help in situations where the model does not have the right context to solve the problem at all. In these situations, you need to look into solutions that allow you to provide additional context to a model.
  • All problems proposed need to have a relatively similar structure. In order to implement prompt chaining, you need to have an understanding of how your complex problem will be broken down into smaller problems that are easier to solve. You need to do this ahead of time before the model is put into production. That means that you need to be able to assume that all of your prompts or problems that you put the model up to will have a similar structure. If they do not, there is no way to know how the prompts should be broken down into smaller pieces ahead of time.
  • An incorrect response to an early prompt can cascade and wreak havoc in later prompts. Since you are chaining multiple prompts together and feeding information from previous prompts into future prompts, an incorrect answer from one of the initial prompts will feed wrong information to later prompts. If an early prompt returns a wacky response, it can cause later prompts to return truly bizarre responses. This is something to be on the lookout for as you monitor model performance.
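One common way to limit the cascading-error problem in the last bullet is to validate each intermediate answer before passing it on. The sketch below uses a hypothetical `call_llm` stand-in and a deliberately simple `is_valid` check; real validators might check output format or length, or even ask the model to verify its own output.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for your real model client.
    return f"<answer to: {prompt}>"

def is_valid(answer: str) -> bool:
    # Hypothetical check; swap in whatever validation fits your problem.
    return answer.strip() != ""

def chain_with_validation(steps: list[str]) -> str:
    """Stop the chain early instead of feeding a bad answer forward."""
    answer = ""
    for i, step in enumerate(steps):
        answer = call_llm(f"{step}\nContext: {answer}")
        if not is_valid(answer):
            raise ValueError(f"Step {i} produced an invalid answer; stopping early")
    return answer
```

Failing fast at the bad stage is usually cheaper than letting a wrong answer contaminate every later step.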

When to use prompt chaining

Are you wondering when you should use prompt chaining to improve the performance of your LLMs? Here are some examples of situations where it makes sense to use prompt chaining.

  • When your prompts contain complex problems that can be broken down into simpler steps. Prompt chaining is designed for situations where a model is struggling with addressing a large, ambiguous problem that can be broken down into smaller parts. If the problem can be broken down into smaller parts, these smaller parts may be easier for the model to address. This can improve accuracy and reduce the chances of hallucinations.
  • When you want to check a previous response for accuracy. Prompt chaining is also used in workflows where you want a model to create a response, then go back and recheck that response for accuracy. Sometimes the model will spit out some bogus information, but then recognize that the information is incorrect when reviewing it.
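The second use case above is a simple two-step chain: one prompt drafts an answer and a second prompt asks the model to check it. A minimal sketch, with `call_llm` again standing in for your real model client:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for your real model client.
    return f"<answer to: {prompt}>"

def answer_with_self_check(question: str) -> str:
    """Draft an answer, then ask the model to review and correct it."""
    draft = call_llm(question)
    review_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Check the draft for factual errors and return a corrected answer."
    )
    return call_llm(review_prompt)
```

The review step costs an extra model call, but it gives the model a narrow, specific task (checking a concrete draft) rather than an open-ended one, which is where LLMs tend to perform best.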

When not to use prompt chaining

When should you avoid using prompt chaining to improve the performance of your LLMs? Here are examples of situations where you should look into other avenues.

  • When there is not a unified structure for how prompts should be broken down into steps. When you are using prompt chaining, you need to determine how to break the problem into smaller steps ahead of time. That means that there needs to be a clear structure that all of the problems your model faces share. If this is not the case, then you will not be able to set the prompt chains up appropriately ahead of time.
  • When a model hallucinates because it does not have context on topics it is asked about. If you find that your model is hallucinating specifically in situations where it does not have context about the topic that it was asked about, then implementing prompt chaining might not help your case. In these situations, you may be better off looking into a method like retrieval augmented generation or fine tuning that has the ability to introduce new context to the model.
  • When you cannot afford to add any additional latency. If speed is an absolute must have for you and you cannot afford to add any additional latency into your system, then prompt chaining might not be right for you. Prompt chaining introduces more steps and more calls to your model into your system, which can slow it down to some extent. In these cases, fine tuning a model is often a good way to go.
