Are you wondering whether you should use large language models (LLMs) to solve a business problem you have? Or maybe you want to learn more about the types of problems that large language models perform well on? Either way, you are in the right place! In this article, we discuss the types of problems that LLMs should and should not be used to solve.
We start by discussing the high-level characteristics of problems that LLMs perform well on, then provide similar context on the situations where LLMs do not perform well. Finally, we walk through specific, concrete examples of tasks that LLMs are commonly used to perform.
Characteristics of problems LLMs perform well on
How do you determine which types of business problems could be solved using LLMs? In this section, we provide high-level characteristics of the problems that LLMs perform well on, which should help you recognize the situations where an LLM is, and is not, a good fit.
- All data is text data. First and foremost, think about the type of data you want to run through your model when determining whether an LLM is appropriate. LLMs are designed for situations where both the input and the output are text. That text can be freeform prose or more structured text packaged into a format like a JSON document, but both the input and the output should be some form of text.
- Grounded in existing information. In general, LLMs do well when they perform tasks grounded in real information they have already been exposed to, either during model training or later on in the prompt. For example, LLMs do a good job of ingesting large chunks of text and restating that information in a different tone or format. They also do a good job of extracting specific details from large chunks of text.
- There is not one specific right answer. In general, LLMs are a good solution when there is a range of appropriate outputs that could be returned to the user. If your task has one very specific output that should be returned and no other output would make sense, you may be better off looking into another type of model. These situations are often better served by supervised classification models that choose one option from a short list.
- Model outputs do not block users from performing tasks. LLMs are known to hallucinate, and you should anticipate that they will do so once exposed to users. That means they should not be used in ways where the end user of your product is blocked from moving forward if the model does not return an appropriate response. It generally makes the most sense to implement LLMs in situations where users have an alternative escape path that does not rely on the LLM; a minimal sketch of this pattern follows this list.
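Below is a minimal sketch of that escape-path pattern. It assumes a hypothetical call_llm() helper standing in for whatever LLM provider you use, and the answer_support_question() wrapper is purely illustrative; the point is that the user always has a path forward even when the model errors out or returns nothing useful.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical helper: replace this stub with a call to your LLM provider of choice.
    raise NotImplementedError

def answer_support_question(question: str) -> str:
    try:
        draft = call_llm(f"Answer this customer support question:\n\n{question}")
    except Exception:
        draft = ""  # Treat provider errors the same as an unusable answer.

    if draft.strip():
        # Show the model's answer, but keep the non-LLM path visible at all times.
        return draft + "\n\nNot helpful? Reply HUMAN to reach a support agent."

    # Escape path: the user is never blocked on the model returning a good response.
    return "We couldn't generate an answer automatically. Reply HUMAN to reach a support agent."
```

The exact wording and routing will differ by product; the design choice that matters is that the non-LLM path is always available, not hidden behind a failed model response.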
Characteristics of problems LLMs do not perform well on
And what types of problems are LLMs not well suited to solve? In this section, we provide characteristics of problems where LLMs tend to fall short.
- Some data is numeric data. LLMs are specifically designed to mimic human language. They are not designed to process numeric information in the way that more traditional supervised models are. That means they are rarely appropriate for problems where either the input data or the output data is primarily numeric.
- Complex reasoning is required. LLMs are designed to mimic human speech, so they do best when they are using specific information they have been fed to produce fluent text. That means they do not do well on tasks that require complex reasoning and decision making. They may produce an answer that sounds like well-structured human speech, but the content may be nonsensical.
Examples of tasks that LLMs can perform
Are you looking for examples of tasks that LLMs are commonly used to perform? In this section, we provide some of the most popular examples. This is not meant to be an exhaustive list of everything an LLM can do; instead, it is a curated list of the most common use cases.
- Summarizing documents. Document summarization is one of the most popular use cases for LLMs. The basic idea is that you can feed documents you do not have time to read into an LLM, then ask the LLM for a concise summary of their content. LLMs generally do a good job with summarization because it involves ingesting some information and outputting that information in a different format (i.e., a shorter, more concise one). If the summary captures the most important information, it can save you and your team a lot of reading time. A sketch of both this task and the extraction task below appears after this list.
- Extracting relevant information from a document. In other cases, you may want to feed a model a large document and have it return one or two specific pieces of information. For example, you might feed an LLM a large excerpt from documentation intended to help users debug a piece of software, then ask it how to solve a specific problem you are seeing. This is another place where LLMs do well because it involves feeding the model information that it can return back to the user. If a model performs this type of task well, it can save you a lot of toil by surfacing the most pertinent information for you.
- Responding to messages. LLMs also tend to do a good job of responding to messages in a reasonable way that mimics a human response. For example, you can set up an LLM-backed system that automatically drafts a follow-up to an incoming email; a sketch of this (and of the tone-rewriting task below) appears after this list. This can be an easy way to ensure that customers receive some sort of follow-up in a timely manner, even if that follow-up just sets expectations around when someone will provide a more thorough response.
- Translation. Translation is another task that some LLMs are proficient at. This is yet another example of feeding a model information in one form and asking it to return that information in another form, in this case another language. Depending on how high-fidelity your translations need to be, this may reduce the need for a human translator or make human translators more efficient.
- Rewriting content in another tone or format. Another place where LLMs shine is rewriting a piece of content in a slightly different tone or format. You may want to rewrite the same content a few different ways with different audiences in mind. For example, you might write one version targeted at marketing teams that highlights the details relevant to them, and another targeted at IT teams that uses more technical terminology. This can help you scale the production of personalized messages for a variety of audiences.
- Content categorization. LLMs can sometimes help with tasks like content categorization. For example, if you want to look at all of the support tickets that have come into your organization and apply labels based on the type of issue mentioned, an LLM may be able to help; the final sketch after this list shows one way to constrain the model to a fixed label set. Note that depending on its complexity, categorization can require higher-level reasoning, which makes it a task where LLMs are prone to hallucinate. This is especially true when the right label is genuinely ambiguous and human reviewers sometimes disagree with each other. If you need high accuracy, you may be better off using a dedicated model trained for your specific categorization task.
- Analyzing the sentiment of a response. LLMs can also be used to analyze the sentiment of a sentence or paragraph, that is, to help determine whether a piece of text has a positive or negative tone. For example, imagine you had a database of reviews from users of one of your products and wanted to automatically flag users who left a negative review so you could follow up with them by phone. LLMs provide an easy way to take a piece of text and identify whether its sentiment is negative. On average, LLMs do an okay job with this task. That said, it involves some ambiguity and goes beyond restating previously read information, so you can expect a fair amount of hallucination. If you need high accuracy, you may be better off using a dedicated model trained for your specific sentiment analysis task.
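To make the summarization and extraction items above more concrete, here is a minimal sketch of the corresponding prompts. It assumes the same kind of hypothetical call_llm() helper as the earlier sketch, and the summarize() and extract_answer() functions are illustrative names only; the key detail is that the source text is placed directly in the prompt, so the model is restating information it was actually given.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical helper: replace this stub with a call to your LLM provider of choice.
    raise NotImplementedError

def summarize(document: str, max_sentences: int = 3) -> str:
    prompt = (
        f"Summarize the following document in at most {max_sentences} sentences. "
        "Only use information that appears in the document.\n\n"
        f"Document:\n{document}"
    )
    return call_llm(prompt)

def extract_answer(document: str, question: str) -> str:
    prompt = (
        "Using only the document below, answer the question. "
        "If the document does not contain the answer, say so.\n\n"
        f"Document:\n{document}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```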
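Similarly, here is a minimal sketch of the reply-drafting and tone-rewriting items. Again, call_llm() is a hypothetical stand-in for your LLM provider, and draft_acknowledgement() and rewrite_for_audience() are illustrative names; the desired audience and tone are simply additional instructions in the prompt.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical helper: replace this stub with a call to your LLM provider of choice.
    raise NotImplementedError

def draft_acknowledgement(incoming_email: str) -> str:
    prompt = (
        "Draft a short, polite reply to the email below. Acknowledge the request, "
        "do not promise a resolution, and note that a teammate will follow up "
        "within two business days.\n\n"
        f"Email:\n{incoming_email}"
    )
    return call_llm(prompt)

def rewrite_for_audience(content: str, audience: str) -> str:
    prompt = (
        f"Rewrite the announcement below for a {audience} audience, keeping the "
        "facts unchanged and emphasizing the details that audience cares about.\n\n"
        f"Announcement:\n{content}"
    )
    return call_llm(prompt)
```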
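Finally, here is a minimal sketch of the categorization and sentiment items. As before, call_llm() is a hypothetical stand-in and the label sets are made up for illustration; constraining the model to a fixed label set and validating its output makes off-list or hallucinated answers easy to catch and route to a person.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical helper: replace this stub with a call to your LLM provider of choice.
    raise NotImplementedError

# Hypothetical label set for support tickets.
TICKET_LABELS = ["billing", "bug report", "feature request", "other"]

def classify(text: str, labels: list[str]) -> str:
    prompt = (
        f"Classify the text below into exactly one of these labels: {', '.join(labels)}. "
        "Respond with the label only.\n\n"
        f"Text:\n{text}"
    )
    label = call_llm(prompt).strip().lower()
    # Validate the output: anything outside the label set is routed to a person
    # rather than trusted.
    return label if label in labels else "needs human review"

def review_sentiment(review: str) -> str:
    return classify(review, ["positive", "negative", "neutral"])
```

If accuracy really matters, the advice above still stands: a dedicated classifier trained on labeled examples is often the better tool. This sketch only shows how a quick LLM-based version tends to be wired up.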
Related articles
- How to improve large language model performance
- When to fine tune an LLM
- When to use retrieval augmented generation for LLMs
- When to use prompt chaining for LLMs
- When to use function calling for LLMs
- When to use basic prompt engineering for LLMs