by Meeta Dash | Posted in Technical | October 22, 2024

In recent years, machine learning operations (MLOps) has become the standard practice for developing, deploying, and managing machine learning models. MLOps standardizes processes and workflows for faster, scalable, and lower-risk model deployment: it centralizes model management, automates CI/CD for deployment, provides continuous monitoring, and enforces governance and release best practices. However, the rapid rise of large language models (LLMs) has introduced new challenges around compute cost, infrastructure needs, prompt engineering and other optimization techniques, governance, and more.
This requires an evolution of MLOps into what we now call "large language model operations" (LLMOps). Let's explore some key areas where LLMOps introduces novel processes and workflows compared to traditional MLOps.

Expanding the Builder Persona: Traditional ML applications largely involve data scientists building models, with ML engineers focusing on pipelines and operations.
With LLMs, this paradigm has shifted. Data scientists are no longer the only ones involved; business teams, product managers, and engineers play a more active role, particularly because LLMs lower the barrier to entry for AI-driven applications. The rise of both open-source models (e.g., Llama, Mistral) and proprietary services (e.g., OpenAI) has removed much of the heavy lifting around model building and training.
This democratization is a double-edged sword. While LLMs can be easily integrated into products, new challenges around compute cost, infrastructure needs, governance, and quality must be addressed.

Low-Code/No-Code as a Core Feature: In MLOps, tools were primarily designed for data scientists, focusing on APIs and integrations with Python or R.
With LLMOps, low-code/no-code tooling has become essential to serve a broader set of users and make LLMs accessible across teams. A key trend is that LLMOps platforms now emphasize user-friendly interfaces, enabling non-technical stakeholders to build, experiment with, and deploy LLMs with minimal coding knowledge.

More Focus on Model Optimization: When using LLMs, teams often start from general-purpose models and fine-tune them for specific business needs using proprietary data.
Model optimization techniques are therefore becoming central to LLMOps. Techniques such as quantization, pruning, and prompt engineering are critical to refining LLMs for targeted use cases. Optimization not only improves performance but is essential for managing the cost and scalability of LLM applications.
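To make this concrete, here is a minimal sketch of one such optimization, post-training quantization, using the Hugging Face transformers and bitsandbytes libraries (both assumed installed; the model ID is illustrative). Loading weights in 4-bit precision cuts weight memory roughly fourfold relative to fp16:

```python
# Minimal sketch: loading an open-weight LLM with 4-bit quantization via
# Hugging Face transformers + bitsandbytes (assumed installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # illustrative open-weight model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NF4 data type for quantized weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16 for stability
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quant_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```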
Prompt Engineering: A completely new discipline introduced by LLMOps is prompt engineering, the practice of crafting precise instructions to guide a model's behavior. It is both an art and a science, and a key method for improving the quality, relevance, and efficiency of LLM responses. An LLMOps stack should therefore include prompt management tooling: prompt chaining, playgrounds for testing, and advanced techniques like meta-prompting, where one prompt is used to improve another. Techniques like chain of thought and assumed expertise are becoming standard strategies in this new domain.
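As a sketch of what such tooling automates, the snippet below shows two of these patterns, chain-of-thought prompting and meta-prompting; `llm` is a hypothetical stand-in for any completion call (hosted API or open-source model):

```python
# Two prompt-engineering patterns; `llm` is a stand-in for any text-completion
# function (hosted API or local open-source model).
from typing import Callable

def chain_of_thought(llm: Callable[[str], str], question: str) -> str:
    """Chain of thought: ask the model to reason step by step before answering."""
    prompt = (
        "Answer the question below. Think through the problem step by step, "
        "then state your final answer on the last line.\n\n"
        f"Question: {question}"
    )
    return llm(prompt)

def meta_prompt(llm: Callable[[str], str], draft_prompt: str) -> str:
    """Meta-prompting: use one prompt to critique and improve another prompt."""
    prompt = (
        "You are a prompt engineer. Rewrite the following prompt to be more "
        "specific, unambiguous, and likely to produce a high-quality answer. "
        "Return only the improved prompt.\n\n"
        f"Prompt: {draft_prompt}"
    )
    return llm(prompt)
```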
The Emergence of Retrieval-Augmented Generation (RAG): Unlike traditional ML models, many enterprise GenAI use cases rely on retrieving relevant data from external sources rather than generating responses solely from pre-trained knowledge. This has led to the rise of retrieval-augmented generation (RAG) architectures, which use retrieval models to pull information from enterprise knowledge bases and then rank and summarize that information with an LLM.
RAG significantly reduces hallucinations and offers a cost-effective way to leverage enterprise data, making it a new cornerstone of LLMOps. Building and managing RAG pipelines is a challenge that simply wasn't part of the MLOps landscape; in the LLMOps life cycle, it has replaced traditional model training as a key focus.
While fine-tuning LLMs is still important (and similar to ML model training), it brings new challenges around infrastructure and cost. The use of enterprise data in RAG pipelines also creates new data management challenges: capabilities like vector storage, semantic search, and embeddings have become essential parts of the LLMOps workflow, areas that were far less prevalent in MLOps.
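Below is a minimal, illustrative sketch of the core RAG loop: embed the query, retrieve the most similar documents by cosine similarity, and ground the generation prompt in that context. `embed` and `generate` are hypothetical stand-ins for an embedding model and an LLM call; a production pipeline would precompute document embeddings into a vector store rather than embedding on the fly:

```python
# Illustrative RAG sketch. `embed` and `generate` are hypothetical stand-ins
# for an embedding model and an LLM completion call.
from typing import Callable, List
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, docs: List[str],
             embed: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    """Semantic search: rank documents by embedding similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)[:k]

def rag_answer(query: str, docs: List[str],
               embed: Callable[[str], np.ndarray],
               generate: Callable[[str], str]) -> str:
    """Ground the model's answer in retrieved enterprise context."""
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```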
Evaluation and Monitoring Are Less Predictable: Evaluating and monitoring LLMs is more complex than for traditional ML models. LLM applications are often context-specific, requiring significant input from subject matter experts (SMEs) during evaluation. Auto-evaluation frameworks, where one LLM is used to assess another, are beginning to emerge. Still, challenges like the unpredictability of generative models and issues like hallucination remain difficult to address. To navigate them, many companies first deploy internal LLM use cases, such as agent assistants, to build confidence before launching customer-facing applications.
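As an illustration of the auto-evaluation pattern, the sketch below has one model score another's answer against a simple rubric; `judge` is a hypothetical completion call, and the JSON-output instruction is an assumption about how scores would be made machine-readable:

```python
# Sketch of "LLM-as-judge" auto-evaluation: one model scores another's answer.
# `judge` is a hypothetical completion call.
import json
from typing import Callable

RUBRIC = (
    "Score the ANSWER to the QUESTION from 1 (poor) to 5 (excellent) for "
    "factual accuracy and relevance. Respond only with JSON of the form "
    '{"score": <int>, "reason": "<short justification>"}.'
)

def judge_answer(judge: Callable[[str], str], question: str, answer: str) -> dict:
    prompt = f"{RUBRIC}\n\nQUESTION: {question}\nANSWER: {answer}"
    return json.loads(judge(prompt))  # assumes the judge follows the JSON format
```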
Risk Management and Governance: Model risk management has always been a critical focus for MLOps, but LLMOps introduces new concerns.
Transparency into what data LLMs are trained on is often murky, raising concerns about privacy, copyright, and bias. Additionally, making LLMs auditable and explainable remains an unsolved problem. Enterprises are beginning to adopt AI risk frameworks, but best practices are still evolving.
For now, essential first steps include thorough evaluation, continuous monitoring, a catalog of approved models, and clear governance policies. AI governance will be a central pillar of LLMOps tooling going forward.

As enterprises adopt LLMs, the shift from MLOps to LLMOps is essential for addressing their unique challenges.
LLMOps emphasizes prompt engineering, model optimization, and RAG, and it introduces new complexities in governance, risk management, and evaluation, making it crucial for successfully scaling and managing these advanced models in production.