
ML Ops tools for NLP

Aug 25, 2022

In this article, we present the different approaches to the Machine Learning lifecycle, from the traditional pipeline to advanced MLOps, and in that context we share some best practices for NLP projects along with suggested tools for applying them.

  1. Overview: From traditional ML to advanced MLOps
  2. MLOps tools for best practices in NLP projects

Overview: From traditional ML to advanced MLOps


The typical, most traditional way to do Machine Learning is essentially manual: the data scientist runs cycles of experimentation (EDA, pre-processing, training, validation…) until they obtain a model with good predictive performance, based on relevant and consistent hypotheses and validation metrics. Once such a model exists, the data scientist’s role ends, and the engineers take it from there, deploying the model as a prediction microservice with a REST API, for example, so that predictions can be made in a production environment.
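To make that hand-off concrete, here is a minimal sketch of such a prediction microservice, assuming Flask and a scikit-learn pipeline saved as "model.joblib" (the framework, file path, and request format are our illustrative choices, not a prescribed stack):

```python
# Minimal prediction microservice sketch (illustrative, not a prescribed stack).
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # trained pipeline handed over by the data scientist

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"text": "great product"}
    text = request.get_json()["text"]
    label = model.predict([text])[0]
    return jsonify({"label": str(label)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```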

Once the model is deployed, the production environment, like any environment, tends to change dynamically, and the data it produces changes with it. A misalignment then appears between the hypotheses and metrics the data scientists relied on and the new data coming from the environment; as a result, the model breaks when it fails to adapt to changes in the real world.

Note that this way of doing ML suits environments that change slowly and within narrow bounds, where the model either never needs to adapt to new data or requires only very infrequent fine-tuning (for example, once a year). In that case, the process stays the same: the data scientist does the modeling work, and the engineer then deploys the new model end to end.

To solve this problem, along with other challenges of this approach, such as the tedious manual tasks of data acquisition, model training, testing, building, and deployment, MLOps is increasingly adopted by companies with fast-changing environments and large-scale applications.

MLOps is derived from DevOps, hence its name; however, it incorporates some unique properties tied to the nature of a data scientist’s work.

First, let’s start with the simplest kind of MLOps. In the traditional approach, we deploy a trained model as a prediction service in production. Now, instead, we deploy a full training pipeline, which runs automatically whenever it is triggered and delivers a trained model as a prediction service that responds to emerging changes in the (automatically gathered) data.
This method is called Continuous Training: the deployed pipeline is defined by the data scientist, who works with the data engineers to convert it into reproducible scripts.
This basic type of MLOps thus guarantees that the model is automatically trained in production on new data, that there is Continuous Delivery of the model (the deployment steps for the automatically trained model are automated), that experiment results are tracked, and that the training code and models are properly versioned.
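As a minimal sketch of what one of those reproducible scripts might look like, assuming newly gathered data lands in "data/latest.csv" with "text" and "label" columns, and that the serving layer picks up the refreshed "model.joblib" (all hypothetical conventions):

```python
# Reproducible training step sketch for Continuous Training (illustrative).
import joblib
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def run_training(data_path="data/latest.csv", model_path="model.joblib"):
    df = pd.read_csv(data_path)  # automatically gathered data
    X_train, X_val, y_train, y_val = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=42
    )
    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    pipeline.fit(X_train, y_train)
    score = f1_score(y_val, pipeline.predict(X_val), average="macro")
    joblib.dump(pipeline, model_path)  # new model version, picked up by the serving layer
    return score

if __name__ == "__main__":
    print(f"validation macro-F1: {run_training():.3f}")
```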

Secondly, now that we have automated model training and delivery to fit the new data by deploying the entire training pipeline, you might be wondering:
What if we needed to update the pipeline to fit the new business needs?
What if using a new algorithm and a new set of parameters would result in a pipeline that delivers better models?
So the data scientist here iteratively tries new ML algorithms and techniques, which yields new source code for the ML pipeline steps. That source code then has to be built, and various tests covering the pipeline updates have to run; this is Pipeline Continuous Integration, and it produces a pipeline ready to be deployed in the target environment. Deploying it is Pipeline Continuous Delivery, and the result of this step is a deployed pipeline running the new implementation.
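As a sketch of the kind of automated check a CI server would run on every change to the pipeline source code (the preprocessing step and its contract are hypothetical stand-ins):

```python
# Pipeline Continuous Integration sketch: unit tests run on every code change.
import pytest

def preprocess(text: str) -> str:
    # Pipeline step under test: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

def test_preprocess_normalizes_case_and_spacing():
    assert preprocess("  MLOps   for NLP ") == "mlops for nlp"

def test_preprocess_rejects_non_strings():
    # Contract check: the step only accepts strings.
    with pytest.raises(AttributeError):
        preprocess(None)
```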

Check out this Microsoft article to learn more about MLOps maturity levels.

Note: we need to collect real-time statistics on the model’s performance to decide whether to run the training pipeline or start a new experimental cycle to update the pipeline (Monitoring).
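A back-of-the-envelope sketch of that decision logic, with invented thresholds:

```python
# Monitoring decision sketch: thresholds and metric are illustrative only.
LIVE_F1_THRESHOLD = 0.80        # below this, retraining on fresh data may suffice
EXPERIMENT_F1_THRESHOLD = 0.65  # below this, the pipeline itself likely needs rework

def decide(live_f1: float) -> str:
    if live_f1 < EXPERIMENT_F1_THRESHOLD:
        return "start a new experimental cycle to update the pipeline"
    if live_f1 < LIVE_F1_THRESHOLD:
        return "trigger the training pipeline on fresh data"
    return "keep serving the current model"

print(decide(0.72))  # -> trigger the training pipeline on fresh data
```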


MLOps tools for best practices in NLP projects


Now that we have an overview of MLOps, let’s list some MLOps tools that help apply best practices when conducting an NLP project.

* Make sure you are able to trigger your pipeline based on data changes:

Pachyderm provides automated versioning and data-driven pipelines that are triggered automatically when changes in the data are detected, enabling automatic scaling and parallel processing of petabytes of unstructured data (text, in our NLP case); a rough sketch of the triggering idea follows the figure below.

The pipeline can be triggered based on data changes but also on time. (Source: Pachyderm)
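Pachyderm handles this change detection natively; purely as a back-of-the-envelope illustration of the idea (not Pachyderm’s actual mechanism), a hypothetical watcher could fingerprint a data directory and trigger the training pipeline when the content changes:

```python
# Hypothetical data-change watcher (illustration only; Pachyderm does this natively).
import hashlib
import time
from pathlib import Path

def fingerprint(data_dir: str) -> str:
    # Hash the content of every file in the directory, in a stable order.
    digest = hashlib.sha256()
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()

def watch(data_dir="data/", interval_s=60):
    last = fingerprint(data_dir)
    while True:
        time.sleep(interval_s)
        current = fingerprint(data_dir)
        if current != last:
            last = current
            print("data changed -> triggering training pipeline")
            # e.g., call run_training() from the Continuous Training sketch above
```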
* Make sure that data labeling is carried out according to a carefully developed process:

UBIAI’s auto-labeling tools allow the annotation and labeling of the datasets used to train AI models. This feature relies on several approaches (dictionary-based, ML-based, rule-based) and ensures reliable labeling; a toy illustration of the dictionary-based idea follows the figure below.

UBIAI labeling tool: User-friendly and based on a variety of approaches. (Source: UBIAI)
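As a toy illustration of the dictionary-based approach among those listed above (this is not UBIAI’s implementation; the label set and terms are invented), pre-annotation can be as simple as matching a term dictionary against the raw text:

```python
# Toy dictionary-based pre-annotation (invented labels and terms).
DICTIONARY = {"spacy": "TOOL", "pytorch": "TOOL", "paris": "LOC"}

def pre_annotate(text: str):
    annotations, lowered = [], text.lower()
    for term, label in DICTIONARY.items():
        start = lowered.find(term)
        while start != -1:
            annotations.append({"start": start, "end": start + len(term), "label": label})
            start = lowered.find(term, start + 1)
    return sorted(annotations, key=lambda a: a["start"])

print(pre_annotate("We fine-tuned PyTorch models in Paris."))
```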
* Make sure experiments are tracked and reproducible, and that models and data are version-controlled:

DVC is an open-source experiment-management tool for ML projects. It is built on Git and replaces large files with small meta-files that point to the original data, so the meta-files can be kept with the project source code in a repository while the large files live in a remote data store; a sketch of reading versioned data through DVC follows the figure below.

Experiments tracking. (Source: DVC)
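Beyond the command line (`dvc add` creates the meta-file, `dvc push` uploads the data to the remote store), DVC also exposes a Python API for reading data exactly as it was at a given revision. A minimal sketch, with a hypothetical repository URL, file path, and tag:

```python
# Read a DVC-tracked dataset as of a specific Git revision (illustrative names).
import dvc.api

data = dvc.api.read(
    "data/train.csv",                            # path tracked by a .dvc meta-file
    repo="https://github.com/acme/nlp-project",  # hypothetical repository
    rev="v1.0",                                  # Git tag, branch, or commit
)
print(data[:200])
```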
* Make sure you have relevant evaluation metrics and continuously monitor performance:

Neptune is a metadata store for MLOps. It lets you build dashboards that display the performance metrics of production tasks, view the metadata of ML CI/CD pipelines, visualize how a model update has changed performance, and compare different models (Monitoring); a sketch of logging live metrics to a Neptune run follows the figure below.

Hardware consumption monitoring. (Source: Neptune)
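As a minimal sketch of feeding such a dashboard (the project name and metric values are placeholders, and the API token is assumed to be set in the NEPTUNE_API_TOKEN environment variable):

```python
# Log production metrics to a Neptune run (placeholder project and values).
import neptune.new as neptune

run = neptune.init(project="my-workspace/nlp-monitoring")  # hypothetical project
run["model/version"] = "2022-08-25"
for f1 in (0.86, 0.84, 0.81):  # e.g., daily live macro-F1 readings
    run["production/f1"].log(f1)
run.stop()
```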

Conclusion


Throughout this article, we have presented an overview of MLOps, as well as the associated tools for applying best practices when executing NLP projects. As a next step, you can explore the different options for establishing your MLOps infrastructure, whether you choose to build it yourself, buy a fully managed solution, or adopt a hybrid approach. We have already introduced you to some tools you can use; all that’s left is to make the right decision based on your environment.