How to Get Better Results From an LLM

When it comes to using large language models, there are effective approaches — and less effective ones. Here are some tips to get the answers you need from an LLM.

Written by Omer Rosenbaum
Published on May 17, 2024

Large language models are only as powerful as the data they’re trained on. 

While LLMs excel in natural language processing and generation, they struggle with context-specific knowledge. And developers, regardless of whether they’re from a small startup or a large enterprise, require knowledge that is specific to their organization.

Take this example. Ross Lazerowitz attempted to use generative AI for social media content, but the results didn’t match his writing style. In an effort to remedy this, he uploaded four years’ worth of his Slack messages (around 140,000 messages) so the LLM could mimic his style of writing.

The outcome was amusingly off-target: When he requested a 500-word blog post on prompt engineering, the response was simply, “Sure. I will work on that in the morning.”

Tips for Improving LLM Output

  • Use retrieval augmented generation (RAG).
  • When seeking a result, only feed the LLM relevant information.
  • Be strict when evaluating the LLM’s output to ensure it meets your project’s standards and specific needs.

Use RAG

Retrieval-augmented generation, also known as RAG, is a technique that pairs a retriever with a generator to enhance an LLM’s output: the retriever fetches relevant documents, and the generator conditions its response on them.

RAG improves LLM outputs by accessing additional, context-specific knowledge that extends beyond the initial training data of the model. This data is the difference between a generic response and a relevant one.
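The retrieve-then-generate loop can be sketched in a few lines. This is a minimal illustration, not a production setup: the `call`-ready prompt builder is generic, and the word-overlap scorer is a toy stand-in for a real embedding-based vector search.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared words with the query (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with the retrieved, context-specific snippets."""
    context = "\n".join(retrieve(query, documents))
    return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

# Organization-specific knowledge the base model was never trained on.
docs = [
    "The billing service retries failed charges three times.",
    "Our deploy pipeline runs integration tests on every merge.",
    "Holiday schedules are posted on the intranet wiki.",
]
prompt = build_rag_prompt("How many times does billing retry a failed charge?", docs)
```

The prompt that reaches the model now carries the billing-service fact alongside the question, which is exactly the difference between a generic response and a relevant one.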

Give Your LLM Just the Right Amount of Context

You might be thinking, “I’ll just give the LLM all the context it could possibly need.” That sounds logical, but it can backfire quickly. LLMs’ context windows are limited, so you simply can’t send all of your information if it doesn’t fit within the window.

But even with large context windows or cases where all of the relevant context can fit within the window, sending the LLM all the data at your disposal can create issues. 

Let’s compare this to a human processing information. If you overwhelm someone with information on a topic and then ask them a niche question about it, they might struggle to pinpoint the right response. In contrast, if you give someone just the right amount of relevant information before asking them a question about it, they’ll be better able to answer calmly and accurately.

Flooding LLMs with too much content degrades answer quality and invites hallucinations. It also sharply increases computational cost, since attention in transformer models scales quadratically with context length.
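One practical way to give the model “just the right amount” is to rank candidate snippets by relevance and keep only what fits a token budget. The sketch below uses a crude whitespace word count as a token estimate; a real system would use the model’s own tokenizer and a proper relevance ranker.

```python
def fit_to_budget(snippets: list[str], max_tokens: int) -> list[str]:
    """Keep snippets in (assumed) relevance order until the budget is spent."""
    kept, used = [], 0
    for snippet in snippets:
        cost = len(snippet.split())  # crude stand-in for a real token count
        if used + cost > max_tokens:
            break
        kept.append(snippet)
        used += cost
    return kept

# Snippets already ranked most-relevant first (hypothetical examples).
ranked = [
    "Endpoint /v2/charge retries three times with backoff.",
    "The retry interval doubles after each failure.",
    "Unrelated: office coffee machine manual, 40 pages...",
]
context = fit_to_budget(ranked, max_tokens=15)
```

The low-relevance filler never makes it into the prompt, which keeps the model focused and the bill down.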

Keep an Informed Human in Charge

LLMs are knowledgeable, but when it comes to your organization, you know more.

For example, AI tools equipped with a general LLM might provide accurate code completions for generic functions that sort a list, but falter with code that relies on internal APIs. Similarly, bug detection tools might miss errors that are unique to the specific architecture or logic employed in a custom codebase.

In the context of enterprise development, there’s no universal solution for effectively using LLMs. Misapplication or lack of understanding can lead to organizations inefficiently using time and resources.

Developers need to critically evaluate LLM suggestions, making sure they meet the specific standards and needs of their project.
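That evaluation can be partly automated before a human ever looks at the suggestion. The sketch below gates LLM output behind a basic parse check; the suggestion strings are made-up stand-ins for model output, and a real pipeline would also run the project’s test suite and linters before a developer reviews what remains.

```python
def passes_basic_checks(suggestion: str) -> bool:
    """Reject suggestions that don't even parse as valid Python."""
    try:
        compile(suggestion, "<llm-suggestion>", "exec")
    except SyntaxError:
        return False
    return True

# Hypothetical model outputs: one valid, one with broken syntax.
good = "def sort_users(users):\n    return sorted(users, key=lambda u: u['name'])"
bad = "def sort_users(users)\n    return sorted(users"

# Only parseable code moves on to tests and human review.
candidates = [s for s in (good, bad) if passes_basic_checks(s)]
```

Cheap automated filters like this free the informed human to spend review time on the questions only they can answer: does the code fit the internal APIs, architecture and standards of this codebase?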

Stay on Top of Your LLM Results

LLMs have changed the way developers work. To truly capitalize on their potential, however, we must tailor these models to individual codebases. 

This approach enhances tool efficacy and propels us beyond the current local maximum — a point where existing solutions provide optimal results within known constraints but are far from achieving their full potential.

With customization and continuous innovation, we can break through this local maximum, ushering in a new era of more sophisticated, precise and personalized development tools.
