Are you wondering when you should use basic prompt engineering techniques to improve the performance of your large language models (LLMs)? Or what advantages these techniques have over other approaches to improving LLM performance? Then you are in the right place! In this article, we tell you everything you need to know to decide when to use basic prompt engineering for LLMs.
We will start by describing what basic prompt engineering is and how it is applied. After that, we will discuss whether you need any special data to apply these techniques. Next, we will cover the main advantages and disadvantages of basic prompt engineering. Finally, we will look at situations where it is, and is not, a good idea to invest in it.
What is prompt engineering?
What is basic prompt engineering? For the purposes of this article, we classify as basic prompt engineering any technique that works by modifying the prompts that are fed into a model, without adding extra steps or routines to the system. We exclude techniques like prompt chaining and retrieval-augmented generation (RAG) from this definition because they introduce additional calls to models and databases into your system.
Here are some examples of techniques or changes that would fit into this category.
- Providing examples of what a good response looks like in a prompt
- Providing instructions for how to handle edge cases in a prompt
- Providing context on what point of view a model should respond from
- Manually entering data or context in the prompt
- Providing context on the expected response format
- Specifying steps the model should take to craft a response (without actually chaining prompts)
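To make these techniques concrete, here is a minimal sketch of a single prompt that combines several of them. The triage task, the category labels, and the example tickets are illustrative assumptions, not part of any specific product or API:

```python
# A minimal sketch of one prompt combining the techniques listed above.
# The triage task, category labels, and example tickets are illustrative
# assumptions only.

def build_prompt(ticket_text: str) -> str:
    # Point of view the model should respond from
    persona = "You are a support-ticket triage assistant for a software company."
    # Steps the model should take (without chaining separate prompts)
    steps = (
        "Steps:\n"
        "1. Read the ticket and identify the customer's core issue.\n"
        "2. Match that issue to exactly one category: BUG, BILLING, "
        "FEATURE_REQUEST, or OTHER."
    )
    # Instructions for handling edge cases
    edge_cases = (
        "Edge cases:\n"
        "- If a ticket mentions both a bug and billing, choose BILLING.\n"
        "- If a ticket is empty or unintelligible, choose OTHER."
    )
    # Examples of what a good response looks like (few-shot examples)
    examples = (
        "Examples:\n"
        'Ticket: "I was charged twice this month."\nCategory: BILLING\n'
        'Ticket: "The export button crashes the app."\nCategory: BUG'
    )
    # Context on the expected response format
    response_format = (
        "Respond with the category name in upper case and nothing else."
    )
    return "\n\n".join([
        persona, steps, edge_cases, examples, response_format,
        f'Ticket: "{ticket_text}"\nCategory:',
    ])

print(build_prompt("Please add dark mode."))
```

Every piece of this prompt is plain text assembled before a single model call, which is exactly what keeps these techniques in the "basic" category.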
What data is needed for prompt engineering?
What data do you need in order to implement basic prompt engineering techniques? One of the biggest advantages of basic prompt engineering is that you do not need to curate any special datasets to provide context to the model. This matters because curating datasets can be both time- and labor-intensive.
Advantages and disadvantages of prompt engineering
What are some of the main advantages and disadvantages of prompt engineering? In this section, we will focus on how basic prompt engineering compares to other techniques that are commonly used to improve LLM efficiency and performance.
Advantages of prompt engineering
What are the main advantages of basic prompt engineering compared to other techniques that can be used to enhance LLMs? Here are some of the main advantages of basic prompt engineering.
- Easy to implement and iterate on. One of the main advantages of basic prompt engineering is that it has a very low barrier to entry. There is no need to curate complex datasets or build out databases to support your model, and iterating on a prompt is even easier than writing the first version. This is especially true if you have a good model evaluation and performance-measurement system set up.
- Lower latency. Another advantage of basic prompt engineering is that it does not introduce much latency into your system. You are not introducing new steps that need to be taken or function calls that need to be made, which is a large benefit if latency is a concern.
- Can improve predictive performance for specific tasks. There have been many situations where basic prompt engineering has been shown to improve predictive performance and accuracy for LLMs. This is a great first step to take if you are trying to improve the model that you are using for a specific use case.
- Can be applied to any model. Another advantage of basic prompt engineering is that it can be applied to any model that you could dream of. As long as you are able to send basic text prompts to a model, you will be able to apply these techniques. This is not true of all techniques that can be applied to improve LLM performance.
- Does not introduce privacy concerns. Another advantage of basic prompt engineering is that, as long as you are not pasting sensitive internal data into your prompts, it does not introduce new privacy or legal risks. You will not be in a situation where you are at risk of exposing sensitive information or personally identifiable information (PII) to your users.
- Does not require large computational resources. Similarly, since you are not training a model on large datasets or retrieving context from them, there is no need for large computational resources.
- Does not interfere with model guardrails. Since these methods do not require you to train or fine-tune a model on your own data, they do not interfere with any guardrails that are baked into models to ensure that they behave in a reasonable manner. This is primarily a concern if you are using third-party vendors that specifically enforce such guardrails.
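The "easy to iterate on" advantage above is easiest to see with a small evaluation harness that scores prompt variants against a handful of labeled cases. Here is a minimal sketch; `call_model` is a hypothetical stand-in that a real system would replace with an actual LLM API call (it is stubbed with a keyword rule so the example runs offline):

```python
# A minimal sketch of a prompt-iteration loop. `call_model` is a stub
# standing in for a real LLM API call; the test cases and prompt variants
# are illustrative assumptions.

def call_model(prompt: str) -> str:
    # Stub: pretend the model answers "positive" when the word "love"
    # appears anywhere in the prompt, and "negative" otherwise.
    return "positive" if "love" in prompt.lower() else "negative"

# Small labeled set of (input text, expected answer) pairs
test_cases = [
    ("I love this product", "positive"),
    ("This broke on day one", "negative"),
]

# Two prompt variants to compare
prompt_variants = {
    "terse": "Classify the sentiment of: {text}",
    "detailed": (
        "You are a sentiment classifier. Respond with exactly one word, "
        "'positive' or 'negative'.\nReview: {text}\nSentiment:"
    ),
}

def accuracy(template: str) -> float:
    """Fraction of test cases the model answers correctly with this template."""
    hits = sum(
        call_model(template.format(text=text)) == expected
        for text, expected in test_cases
    )
    return hits / len(test_cases)

scores = {name: accuracy(tpl) for name, tpl in prompt_variants.items()}
print(scores)  # compare variants, keep the best one, tweak it, repeat
```

Because the only thing that changes between iterations is a string, each loop through this cycle takes minutes rather than the hours or days a retraining run would.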
Disadvantages of prompt engineering
What are the main disadvantages of prompt engineering? Here are some of the main disadvantages of basic prompt engineering.
- Cannot provide domain-specific context. One of the main disadvantages of basic prompt engineering techniques is that they fall short if your model is lacking domain-specific context that it needs in order to complete a task. Unless the needed context is small enough to include in every prompt in plain text, these techniques will not provide a pathway to feed context to your model.
- More tokens needed for prompting. Another disadvantage of these techniques is that they increase the number of tokens you feed in each time you prompt the model. This can be costly if you use a third-party vendor that charges based on the number of input tokens.
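To get a feel for that cost, here is a back-of-the-envelope sketch. The per-1K-token price, prompt sizes, and request volume below are made-up numbers for illustration only; real pricing varies by vendor and model:

```python
# A rough illustration of the token-cost tradeoff. The price is a
# hypothetical placeholder, not any vendor's actual rate.

def monthly_input_cost(prompt_tokens: int,
                       requests_per_month: int,
                       price_per_1k_tokens: float = 0.0005) -> float:
    """Estimated monthly spend (USD) on input tokens alone."""
    return prompt_tokens / 1000 * price_per_1k_tokens * requests_per_month

terse = monthly_input_cost(200, 100_000)         # short prompt
engineered = monthly_input_cost(1_200, 100_000)  # prompt padded with examples
print(f"terse: ${terse:.2f}/mo  engineered: ${engineered:.2f}/mo  "
      f"delta: ${engineered - terse:.2f}/mo")
```

At these assumed numbers a sixfold-longer prompt means a sixfold-larger input-token bill, so it is worth weighing each added example or instruction against the accuracy it buys.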
- More of an art than a science. Another disadvantage of this approach is that it is more of an art than a science, with many different techniques that can be applied and changes that can be made. Sometimes just rephrasing a question slightly can have a surprisingly large impact on model performance. This can make it difficult to feel confident that you have done the right amount of prompt engineering.
When to use prompt engineering
What are some of the main situations where it makes sense to invest in basic prompt engineering? Here are some of the main situations where it makes sense to invest in basic prompt engineering.
- When you are getting started with a new model. If you are just getting started with using a model for a new application, then basic prompt engineering is one of the first tools you should reach for to improve model performance. It is arguably the lowest-effort change you can make, and it does not introduce any engineering work that must be completed before your changes can be pushed to production, such as building out a production database.
When not to use prompt engineering
What are some examples of situations where basic prompt engineering may not be appropriate or sufficient? Here are some examples of situations where basic prompt engineering may not be your best option.
- When the model is lacking crucial context. If you are in a situation where your model is not performing well because it is lacking crucial context or domain knowledge that it was not exposed to during training, basic prompt engineering may not get you far. In these situations, you should look into methods that allow you to enrich the model with additional context like retrieval augmented generation or fine tuning.
Related articles
- How to improve LLM performance
- When to fine tune an LLM
- When to use retrieval augmented generation for LLMs
- When to use prompt chaining for LLMs