AI in Data Science: How Large Language Models Help Companies Grow

AI continues to be at the center of the tech conversation — especially for these six companies.

Written by Mia Goulart
Published on Feb. 29, 2024

AI continues to be at the center of the tech conversation, and language models and large language models are an important and quickly developing facet of work for many data scientists. 

While they may seem similar, AI acronyms like LM and LLM can't be used interchangeably. To get an accurate picture of how these systems are transforming work for many teams, it’s important to first understand what they are and how they differ. 

 

AI DEFINED: LM VS. LLM

According to data marketing provider TechTarget, LMs are commonly used in natural language processing applications where a user inputs a query in natural language to generate a result. An LLM is the evolution of the language model concept in AI that dramatically expands the data used for training and inference. While there isn't a universally accepted figure for how large a training data set needs to be, an LLM has at least 1 billion parameters.

 

Built In recently sat down with six companies to learn more about LLMs — including how their data scientists are using them in their work and their stance on what’s on the horizon. 

 

 

Charlie Natoli
Senior Data Scientist • Klaviyo

Klaviyo is a marketing automation and email platform designed to help grow businesses.

Can you briefly describe how your team uses LLMs in your work?

Klaviyo uses LLMs to enable our customers to get things done using natural language. So far, we’ve enabled customers to create marketing content, set up customer segments by various attributes and answer technical support questions.

 

Are there any projects using LLMs that you find particularly compelling in your own work? How did you train these models?

My team just released EmailAI, which I am very excited about. The premise is that if a user is creating an email in Klaviyo, they can describe what their email is about, and the AI will design it for them using Klaviyo’s email template editor. 

This has been valuable for users who have an idea of what to write but aren’t sure how or don’t have enough time. EmailAI helps by pulling together different ways to do layout, copy and formatting that a user can then run with. 

To train these models, I made extensive use of few-shot prompting, steering the tool toward greater creativity by showing it a range of creative designs. I also took inspiration from chain-of-thought prompting and asked the tool to generate supporting information that makes the final product more accurate and creative before designing a handful of post-processing steps for things like accessible color schemes and good design practices. 
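The combination Natoli describes — showing the model a range of creative examples, then eliciting supporting reasoning before the final output — can be sketched as prompt construction. This is a minimal illustration, not Klaviyo’s implementation; the example briefs, layouts and helper names are invented.

```python
# Sketch: assembling a few-shot prompt with a chain-of-thought style step
# for email design. All example content here is hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "brief": "Announce a 20% off spring sale for a shoe brand.",
        "reasoning": "Seasonal sale, so use a bold hero image with the discount above the fold.",
        "layout": "hero-image | headline | discount-block | product-grid | footer",
    },
    {
        "brief": "Welcome email for a new coffee subscription customer.",
        "reasoning": "Warm onboarding, so open with a greeting and a what-to-expect list.",
        "layout": "logo | greeting | steps-list | cta-button | footer",
    },
]

def build_prompt(user_brief: str) -> str:
    """Show creative examples, then ask the model to reason before designing."""
    parts = ["You design marketing emails. Study the examples, then think step by step."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Brief: {ex['brief']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Layout: {ex['layout']}")
    parts.append(f"Brief: {user_brief}")
    parts.append("Reasoning:")  # elicit supporting reasoning before the layout
    return "\n".join(parts)

prompt = build_prompt("Promote a limited-run vinyl release to newsletter subscribers.")
print(prompt.splitlines()[-2:])
```

The key design choice is ending the prompt at "Reasoning:" so the model generates its supporting analysis first — the chain-of-thought step — before any layout; post-processing for accessibility and design rules would then run on the model’s output.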

 

As changes in AI continue to develop rapidly, what do you see as potential opportunities for LLMs in the coming months? 

AI agents and multimodality are two big potential opportunities for 2024. AI agents like OpenAI’s GPTs are systems where the LLM can decide to use outside tools or data in addition to conversing with the user. This unlocks opportunities for LLMs to perform increasingly open-ended and domain-specific tasks for users.
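The agent pattern described here — an LLM that decides between answering directly and invoking an outside tool — can be sketched with a stubbed model. The decision logic and tool below are invented stand-ins; a real system would call an actual LLM and real APIs.

```python
# Sketch of the agent loop: the model's output either answers directly or
# names a tool to invoke. The "model" is a stub for illustration only.

def stub_model(query: str) -> str:
    """Stand-in for an LLM deciding whether a tool is needed."""
    if "weather" in query:
        return "TOOL:get_weather:Boston"
    return "ANSWER:I can help with that directly."

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",  # stand-in for a real API
}

def run_agent(query: str) -> str:
    decision = stub_model(query)
    if decision.startswith("TOOL:"):
        _, tool_name, arg = decision.split(":", 2)
        observation = TOOLS[tool_name](arg)
        return f"Tool said: {observation}"
    return decision.removeprefix("ANSWER:")

print(run_agent("What's the weather today?"))  # → Tool said: Sunny in Boston
```

Separating the decision (which tool, which arguments) from the execution is what lets an agent tackle open-ended, domain-specific tasks: new capabilities are added by registering tools, not retraining the model.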

Multimodality is the ability of models to handle multiple modes of input, such as images, which could unlock several opportunities for creators that use Klaviyo. Imagine if an LLM could critique an email or give you ideas for how to lay out some of your product photography. 

Even with these major advances, a clear focus on customers is critical. We need to know we are solving a real need and that our tools are accurate and useful. Otherwise, we risk building chatbots that aren’t useful or reliable. Klaviyo strives to put the customer first and brings this mindset into every AI tool we build.

 

“Even with these major advances, a clear focus on customers is critical.”

 

 

 

Delaram Behnami
Senior ML Applied Scientist • Lily AI

Lily AI is a product attributes platform that injects the language of the customer across your existing retail stack, connecting shoppers with the relevant products they’re looking to buy. 

Can you briefly describe how your team uses LLMs in your work?

Lily AI primarily uses LLMs in product copy generation and label quality assurance, like verifying labels used for training product attribution models and triaging poor labels to be reviewed by domain experts.
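The label-verification workflow Behnami describes — scoring labels and triaging poor ones for expert review — can be sketched as a simple confidence threshold. The scoring function below is a stub; in a real pipeline an LLM would be prompted to judge each label, and the example products are invented.

```python
# Sketch: triaging training labels with an LLM-style verifier.
# The scorer is a stub; a real pipeline would prompt an LLM per label.

def stub_verifier_score(product_text: str, label: str) -> float:
    """Pretend confidence (0.0-1.0) that `label` fits `product_text`."""
    return 0.9 if label.lower() in product_text.lower() else 0.2

def triage_labels(examples, threshold=0.5):
    """Split labeled examples into accepted vs. needs-expert-review."""
    accepted, needs_review = [], []
    for text, label in examples:
        score = stub_verifier_score(text, label)
        (accepted if score >= threshold else needs_review).append((text, label, score))
    return accepted, needs_review

examples = [
    ("Red floral summer dress with v-neck", "floral"),
    ("Men's waterproof hiking boots", "sandals"),  # likely mislabeled
]
accepted, needs_review = triage_labels(examples)
print(len(accepted), len(needs_review))  # → 1 1
```

Routing only the low-confidence labels to domain experts keeps human review focused where the model is unsure, which is the economic point of the triage step.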

 

Are there any projects using LLMs that you find particularly compelling in your own work? How did you train these models?

Recently, we launched a generative LLM-powered content solution that creates tailored product and marketing copy highlighting each item’s notable benefits and functions. It combines our industry-leading computer vision models, which specialize in deriving both subjective, consumer-centric terms and objective attributes, with LLMs enhanced through few-shot learning from examples created by our domain experts. The solution accounts for SEO, brand voice and audience, empowering merchants and marketers to launch product assortments quickly and reliably.

In addition to textual generative tasks like content generation, we’ve seen promise in using LLMs for visual tasks conventionally performed by discriminative models, such as object detection. We are engaged in rigorous testing to determine the degree to which the predictive qualities of such LLM-based approaches are on par with the discriminative models. As a pioneer in retail AI, we are committed to substantiating our enthusiasm for LLMs via a thorough analysis of our findings to consistently deliver value to our clients and their customers.

 

As changes in AI continue to develop rapidly, what do you see as potential opportunities for LLMs in the coming months? 

Factors that might reshape the LLM market include the shift from proprietary to open-source LLMs, driven by players such as Meta, and the launch of Gemini, backed by Google’s tensor processing unit infrastructure. These developments will drive down costs and lower barriers to market entry. We encourage healthy competition in the AI industry while focusing on delivering value to retailers and their end consumers. 

We expect an increased focus on multimodal generative AI with LLMs, with advancements in explainability, interpretability and transparency of LLM decision-making processes. We anticipate a push for LLMs to source and verify information and stay grounded in facts.

 

“We expect an increased focus on multimodal generative AI with LLMs, with advancements in explainability, interpretability and transparency of LLM decision-making processes.”

 

With increased media coverage and consumer literacy, the ethical implications of LLMs, especially regarding data privacy and protection, are expected to face growing scrutiny from the community to ensure accountable and responsible development, governance and stewardship.

We suspect applied AI groups that focus on domain expertise and customer experience will be at the forefront of their industry, and we expect to see the development of specialized LLMs focused on specific domains.

 

 

Chris Tanner
Head of R&D • Kensho Technologies

Kensho leverages S&P Global’s world-class data to research, develop and implement leading AI and machine learning capabilities that drive fact-based, objective decision making. 

 

LLMS USED BY KENSHO

Tanner’s team conducts fundamental research that centers around LLMs for finance and business, including:

  • Tokenization 
  • Long-form document question answering
  • Numeric reasoning
  • Domain knowledge
  • Quantity extraction
  • Program synthesis 
  • Alignment/factuality 

 

Are there any projects using LLMs that you find particularly compelling in your own work? How did you train these models?

All of them — that's why we are trying to push the envelope with state-of-the-art results. 

Despite its huge role in LLMs, tokenization is relatively under-researched within the NLP community. It's the first stage of any NLP model, as it determines how all words are represented and processed. This project has been particularly compelling, and we've created our own tokenizer.

We have also put forth significant effort over the past year in creating a suite of challenging tasks to evaluate LLMs’ abilities to understand and work with finance and business. We call this suite BizBench, and it encompasses a wide range of problems, like those previously mentioned. We believe it to be incredibly valuable to the research community, as it provides a fair assessment to anyone working in the field of NLP for finance and business.

 

As changes in AI continue to develop rapidly, what do you see as potential opportunities for LLMs in the coming months? 

The rate of innovation is accelerating at an unprecedented pace. It's impossible to predict the future, but we anticipate the field will continue to make progress on highly performant LLMs: smaller, domain-specific models, information-retrieval-based approaches and principled ways of leveraging application programming interfaces, among others. 

Another golden question is whether transformers will remain the core model architecture for LLMs. Since their inception in 2017, they have yielded unfathomable capabilities, and while there have been many relatively small improvements over the last seven years, it's astonishing that no other model architecture has been developed to rival their performance. There is growing interest in state-space models like Mamba, but it's unclear how powerful such models can become.

 

 

 

Chenguang Zhu
AI Science Manager • Zoom Video Communications

 Zoom is the leader in modern enterprise video communications, with a reliable cloud platform for video and audio collaboration across mobile devices, desktops, telephones and room systems. 

Can you briefly describe how your team uses LLMs in your work?

We use LLMs to facilitate various Zoom AI Companion capabilities, such as summarizing meetings, answering questions about meetings, and drafting chat and email responses.

 

Are there any projects using LLMs that you find particularly compelling in your own work? How did you train these models?

Meeting questions are a compelling and important use case: they help participants get answers during a meeting, including catching up on what they missed if they joined late. While the answers are based on what’s been discussed in the meeting, there’s no limit to what a user can ask, so it’s both challenging and very useful.

We adopted a federated AI approach to leverage multiple LLMs for Zoom AI Companion tasks. In doing so, we help lower the cost while still offering high-quality results to users.
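One way a federated, multi-model setup can lower cost is by routing each task to the cheapest model capable of handling it. The sketch below illustrates that routing idea with invented model names, prices and task types — it is not Zoom’s actual system.

```python
# Sketch: routing tasks across multiple LLMs to balance cost and capability.
# Models, prices and task names are invented stand-ins.

MODELS = {
    "small": {"cost_per_call": 0.001,
              "handles": {"chat_reply", "email_draft"}},
    "large": {"cost_per_call": 0.02,
              "handles": {"chat_reply", "email_draft",
                          "meeting_summary", "meeting_qa"}},
}

def route(task: str) -> str:
    """Pick the cheapest model that can handle the task."""
    capable = [name for name, m in MODELS.items() if task in m["handles"]]
    if not capable:
        raise ValueError(f"No model handles task: {task}")
    return min(capable, key=lambda name: MODELS[name]["cost_per_call"])

print(route("chat_reply"))       # cheap model suffices
print(route("meeting_summary"))  # falls through to the larger model
```

Routine drafting tasks go to the inexpensive model, while harder jobs like meeting summarization fall through to the larger one — keeping quality high where it matters while holding average cost down.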

 

“We adopted a federated AI approach to leverage multiple LLMs for Zoom AI Companion tasks.”

 

As changes in AI continue to develop rapidly, what do you see as potential opportunities for LLMs in the coming months? 

I foresee that open-source LLMs will soon catch up with proprietary LLMs such as GPT-4 in terms of quality and general capability; multimodal models will be commonly used in business scenarios; and AI-based next-generation operating systems and personal computers/phones will start to appear.

 

 

 

Brian Bunker
Staff Data Scientist • IMO Health

Intelligent Medical Objects is a healthcare data enablement company embracing AI to transform many areas of healthcare.

Can you briefly describe how your team uses LLMs in your work?

At IMO, we’re in the medical language business, so LLMs are far more than a tool for us. They are an essential technology in the new products we are developing. LLMs have enabled new capabilities for working with language, but they also raise the bar. We’re putting a lot of effort into using them in the most effective and creative ways to increase the data quality and precision that set our products apart.

 

Are there any projects using LLMs that you find particularly compelling in your own work? How did you train these models?

IMO built its business model on providing the most precise and reliable medical terminology available, and our products are incorporated throughout the medical system wherever language needs to be exact. Our products are based on the accumulated effort of our medical experts over the past three decades. 

Previous technologies have been useful in storing and curating this content. LLMs finally enable reasoning about medical language in roughly the same way as humans do. This opens up many ways for our engineers to align their technology with the medical experts’ thinking. 

 

“LLMs finally enable reasoning about medical language in roughly the same way as humans do.”

 

But it’s not enough to ask GPT free-form questions; that’s far too imprecise. Some of our most compelling work involves using LLMs to probe medical data and literature and reason about the results. So-called agent-based methods, in which LLMs use external tools to query knowledge sources, are particularly useful for this.
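The agent-based pattern Bunker contrasts with free-form questioning — an LLM issuing structured queries against a knowledge source and reasoning over the results — can be sketched as follows. The terminology table, codes and "planner" step are illustrative stubs, not IMO’s products or data.

```python
# Sketch: an LLM agent querying a structured knowledge source rather than
# answering free-form. The terminology entries and planner are stubs.

TERMINOLOGY = {
    "myocardial infarction": {"code": "I21", "system": "ICD-10"},
    "type 2 diabetes": {"code": "E11", "system": "ICD-10"},
}

def stub_planner(question: str) -> str:
    """Stand-in for the LLM step that extracts a lookup term from the question."""
    for term in TERMINOLOGY:
        if term in question.lower():
            return term
    return ""

def answer(question: str) -> str:
    term = stub_planner(question)
    if not term:
        return "No matching concept found."
    entry = TERMINOLOGY[term]
    return f"{term} maps to {entry['system']} code {entry['code']}"

print(answer("What is the code for myocardial infarction?"))
```

Grounding answers in a curated terminology store, instead of trusting the model’s free-form recall, is what delivers the precision medical applications require: the LLM plans the query, but the authoritative content comes from the knowledge source.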

 

As changes in AI continue to develop rapidly, what do you see as the potential opportunities for LLMs in the coming months? 

Considering how fast new ideas and paradigm shifts arrived in 2023, there’s no doubt we’ll be adapting to even more disruptive developments in 2024 and beyond. These are hard to predict, but we can certainly assume there will be larger models and more infrastructure to build advanced applications. 

We are also going to see new products emerge, potentially open-sourced, that turn previously hard problems into commodity off-the-shelf solutions. Companies have to be aggressive about using the technology in the most effective way to create unique products that solve specialized problems and make clear business sense to customers.

 

 

Responses have been edited for length and clarity. Images provided by Shutterstock and listed companies.