MLOps and the evolution of data science
The advancement of computing power over recent decades has led to an explosion of digital data, from traffic cameras monitoring commuter habits to smart refrigerators revealing how and when the average family eats. Both computer scientists and business leaders have taken note of this data’s potential: it can deepen our understanding of how our world works and help create better, “smarter” products.
Machine learning (ML), a subset of artificial intelligence (AI), is an important piece of data-driven innovation. Machine learning engineers apply statistical methods to massive datasets, training algorithms to find patterns and uncover key insights in data mining projects. These insights can help drive business decisions and advance the design and testing of applications.
Today, 35% of companies report using AI in their business, which includes ML, and an additional 42% report they are exploring AI, according to the IBM Global AI Adoption Index 2022. As ML becomes more integrated into daily business operations, data science teams are looking for faster, more efficient ways to manage ML initiatives, increase model accuracy and gain deeper insights.
MLOps is the next evolution of data analysis and deep learning. It advances the scalability of ML in real-world applications by using algorithms to improve model performance and reproducibility. Simply put, MLOps uses machine learning to make machine learning more efficient.
What is MLOps?
MLOps, which stands for machine learning operations, uses automation, continuous integration and continuous delivery/deployment (CI/CD), and machine learning models to streamline the deployment, monitoring and maintenance of the overall machine learning system.
Because the machine learning lifecycle has many complex components that reach across multiple teams, it requires close-knit collaboration to ensure that hand-offs occur efficiently, from data preparation and model training to model deployment and monitoring. MLOps fosters greater collaboration between data scientists, software engineers and IT staff. The goal is to create a scalable process that provides greater value through efficiency and accuracy.
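To make the CI/CD idea concrete, here is a minimal sketch of an automated quality gate that a CI pipeline might run on every change to model code. This is one illustrative setup, not a prescribed one: the script name, dataset and accuracy threshold below are all assumptions.

```python
# ci_quality_gate.py - a hypothetical CI step that retrains a model and
# fails the build if held-out accuracy drops below an agreed threshold.
# The dataset and threshold are illustrative assumptions.
import sys

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_THRESHOLD = 0.90  # an example bar, agreed with the business

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"Held-out accuracy: {accuracy:.3f}")
if accuracy < ACCURACY_THRESHOLD:
    # A nonzero exit code marks the CI pipeline as failed
    sys.exit("Model below quality gate; failing the build.")
```

Run automatically on every commit, a check like this turns model quality into a build gate, the same way unit tests gate ordinary software changes.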
Origins of the MLOps process
MLOps was born out of the realization that ML lifecycle management was slow and difficult to scale for business applications. The term was originally coined in 2015 in the research paper “Hidden Technical Debt in Machine Learning Systems,” which highlighted common problems that arise when using machine learning for business applications.
Because ML systems require significant resources and hands-on time from often-disparate teams, problems arose from a lack of collaboration and from simple misunderstandings between data scientists and IT teams about how to build the best process. The paper suggested creating a systematic “MLOps” process that incorporated the CI/CD methodology commonly used in DevOps to essentially create an assembly line for each step.
MLOps aims to reduce the time and resources it takes to run data science models through automation, ML and iterative improvement of each model version.
How machine learning development works
To better understand the MLOps process and its advantages, it helps to first review how ML projects evolve through model development.
Each organization begins the ML process by standardizing its ML system with a base set of practices, including:
- What data sources will be used.
- How the models are stored.
- Where they are deployed.
- The process for monitoring and addressing issues in the models once in production.
- How ML will be used to automate the refinement of models in a continuous, cyclical process.
- How MLOps will be used within the organization.
Once these practices are defined, ML engineers can begin building the ML data pipeline (a minimal code sketch follows the list):
- Create and execute the decision process—Data science teams work with software developers to create algorithms that can process data, search for patterns and “guess” what might come next.
- Conduct validation in the error process—This step measures how accurate the guesses were by comparing them to known examples, when available. If the decision process got it wrong, the team then assesses how far off the miss was.
- Use feature engineering for speed and accuracy—In some instances, the data set may be too large, have missing data or include attributes that aren’t needed to reach the desired outcome. That’s where feature engineering comes in. Each data attribute, or feature, is managed within a feature store and can be added, deleted, combined or adjusted to improve training and produce a better-performing, more accurate model.
- Initiate updates and optimization—Here, ML engineers begin “retraining” the model by updating how the decision process arrives at its final decision, aiming to get closer to the ideal outcome.
- Repeat—Teams will go through each step of the ML pipeline again until they’ve achieved the desired outcome.
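The loop above can be compressed into a few lines of code. The following is a minimal sketch of the decide, validate, refine and repeat cycle using scikit-learn; the dataset, model choice and feature counts are illustrative assumptions.

```python
# A minimal sketch of the decide -> validate -> refine -> repeat loop.
# Dataset, model and feature counts are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

best_score = 0.0
for k in (4, 8, 13):  # feature engineering: try progressively larger feature sets
    selector = SelectKBest(f_classif, k=k).fit(X_train, y_train)

    # Decision process: train a model that "guesses" the label for each sample.
    model = RandomForestClassifier(random_state=0)
    model.fit(selector.transform(X_train), y_train)

    # Error process: measure how good the guesses were against known answers.
    score = accuracy_score(y_val, model.predict(selector.transform(X_val)))
    print(f"{k} features -> validation accuracy {score:.3f}")

    # Update and optimize: keep the best configuration, then repeat.
    if score > best_score:
        best_score, best_model = score, model
```

In practice each pass through this loop would also log its data version, parameters and scores so that any result can be reproduced later.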
Steps in the MLOps process
Where MLOps delivers the biggest benefit is in the iterative orchestration of tasks: while data scientists are reviewing new data sources, engineers are adjusting ML configurations. Making simultaneous adjustments in real time vastly reduces the time spent on improvements.
Here are the steps commonly taken in the MLOps process:
- Prepare and share data—ML teams prepare data sets and share them in catalogs, refining or removing incomplete or duplicate data to prepare it for modeling, and making sure data is available across teams.
- Build and train models—Here is where the “Ops” practices come into MLOps. Using AutoML or AutoAI, open-source libraries such as scikit-learn and hyperopt, or hand coding in Python, ML engineers create and train the ML models. In short, they use existing ML training models to train new models for business applications (see the sketch after this list).
- Deploy models—The ML models are made available within the deployment space and accessed via a user interface (UI) or a notebook, such as a Jupyter notebook. This is where teams can monitor deployed models and look for implicit bias.
- Improve models with automation—In this stage, similar to the error process above, teams use established training data to automate improvement of the model being tested. Teams can use tools like Watson OpenScale to ensure the models are accurate and then make adjustments via the UI.
- Automate the ML lifecycle—Once the models are built, trained and tested, teams set up the automation within ML pipelines that create repeatable flows for an even more efficient process.
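As one concrete example of the build-and-train step, the sketch below uses hyperopt (one of the libraries named above) to tune a scikit-learn model automatically. The dataset, model and search space are illustrative assumptions, not a prescribed configuration.

```python
# A minimal sketch of automated model tuning with hyperopt and scikit-learn.
# The dataset, model and search space are illustrative assumptions.
from hyperopt import fmin, hp, tpe
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(params):
    # hyperopt minimizes, so return the negative cross-validated accuracy
    model = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        random_state=0,
        n_jobs=-1,
    )
    return -cross_val_score(model, X, y, cv=3).mean()

space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 50),
    "max_depth": hp.quniform("max_depth", 2, 12, 1),
}

# Tree-structured Parzen Estimator search over the space defined above
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=10)
print("Best hyperparameters found:", best)
```

A search like this would typically run inside the automated pipeline described in the last step, so that retraining and tuning repeat without manual effort.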
How generative AI is evolving MLOps
The release of OpenAI’s ChatGPT sparked interest in AI capabilities across industries and disciplines. This technology, known as generative AI, can write software code, create images and produce a variety of data types, as well as further develop the MLOps process.
Generative AI is a type of deep-learning model that takes raw data, processes it and “learns” to generate probable outputs. In other words, the AI model uses a simplified representation of the training data to create a new work that’s similar, but not identical, to the original data. For example, by analyzing the language used by Shakespeare, a user can prompt a generative AI model to create a Shakespeare-like sonnet on a given topic to create an entirely new work.
Generative AI relies on foundation models to create a scalable process. As AI has evolved, data scientists have acknowledged that building AI models takes a lot of data, energy and time, from compiling, labeling and processing the data sets the models use to “learn” to the energy it takes to process that data and iteratively train the models. Foundation models aim to solve this problem. A foundation model is trained on a massive quantity of data and, through self-supervised learning and transfer learning, can be adapted to a wide range of tasks.
This advancement in AI means that data sets aren’t task specific—the model can apply information it has learned about one situation to another. Engineers are now using foundation models to create the training models for MLOps processes faster: they simply take the foundation model and fine-tune it using their own data, rather than building a model from scratch.
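As an illustration, here is a minimal fine-tuning sketch using the Hugging Face transformers library as one example stack. The article doesn’t prescribe a toolchain, so the library, model checkpoint and dataset below are all assumptions.

```python
# A minimal sketch of fine-tuning a pretrained foundation model on your own
# data, using Hugging Face transformers as one example stack. The library,
# checkpoint and dataset here are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # a small pretrained foundation model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

# Your own labeled data replaces the generic pretraining corpus.
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()  # updates the pretrained weights instead of training from scratch
```

The key point is the last line: only the relatively cheap fine-tuning pass runs on the team’s own data, while the expensive general-purpose pretraining is reused.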
Benefits of MLOps
When companies create a more efficient, collaborative and standardized process for building ML models, they can scale faster and use MLOps in new ways to gain deeper insights from business data. Other benefits include:
- Increased productivity—The iterative nature of MLOps practices frees up time for IT, engineering and data science teams to focus on core work.
- Accountability—According to the IBM Global AI Adoption Index 2022, a majority of organizations haven’t taken key steps to ensure their AI is trustworthy and responsible, such as reducing bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%). Creating an MLOps process builds in the oversight and data validation needed for good governance, accountability and accurate data collection (see the drift-monitoring sketch after this list).
- Efficiency and cost savings—Running data science models has traditionally required significant computing power at a high cost. When these time-consuming models are streamlined and teams can work on improvements simultaneously, organizations save both time and money.
- Reduced risk—Machine learning models need review and scrutiny, and MLOps enables greater transparency and faster responses to those requests. When organizations meet compliance requirements, they reduce the risk of costly delays and wasted effort.
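As a concrete example of the drift tracking mentioned under accountability, the following is a minimal sketch that compares a feature’s production distribution against its training distribution with a two-sample Kolmogorov-Smirnov test. The data and alert threshold are illustrative assumptions; real monitoring tools wrap this kind of check in dashboards and alerts.

```python
# A minimal sketch of data-drift monitoring: compare a production feature's
# distribution against the training distribution with a two-sample
# Kolmogorov-Smirnov test. Data and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # what the model saw
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # what it sees now

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # the alerting threshold is a policy choice, not a universal rule
    print(f"Possible drift detected (KS statistic {statistic:.3f}); consider retraining.")
```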
MLOps use cases
There are countless business use cases for deep learning and ML. Here are some instances where MLOps can drive further innovation.
IT—Using MLOps creates greater visibility into operations, with a central hub for deployment, monitoring, and production, particularly when building AI and machine learning models.
Data science—Data scientists can use MLOps not only for efficiency, but also for greater oversight of processes and better governance to facilitate regulatory compliance.
DevOps—Operations teams and data engineers can better manage ML processes by deploying models that are written in programming languages they’re familiar with, such as Python and R, onto modern runtime environments.
MLOps vs. DevOps
DevOps is the process of delivering software by combining and automating the work of software development and IT operations teams. MLOps, on the other hand, is specific to machine learning projects.
MLOps does, however, borrow from the DevOps principles of a rapid, continuous approach to writing and updating applications. The aim in both cases is to take the project to production more efficiently, whether that’s software or machine learning models. In both cases, the goal is faster fixes, faster releases and ultimately, a higher quality product that boosts customer satisfaction.
MLOps vs. AIOps
AIOps, or artificial intelligence for IT operations, uses AI capabilities, such as natural language processing and ML models, to automate and streamline operational workflows. It is a way to manage the ever-increasing volume of data produced within a production environment and help IT operations teams respond more quickly—even proactively—to slowdowns and outages.
Where MLOps is focused on building and training ML models for use in a number of applications, AIOps is focused on optimizing IT operations.
MLOps and IBM
IBM watsonx.ai empowers data scientists, developers and analysts to build, run and manage AI models, bringing traditional AI and generative AI into production faster. Build models either visually or with code, then deploy and monitor them in production. With MLOps, you can simplify model production from any tool and provide automatic model retraining.
Looking to scale the impact of AI across your business?
Explore watsonx.ai, our next-generation enterprise studio for AI builders