
Introduction to model performance monitoring (MLOps)

Machine learning operations (MLOps) is a set of practices designed to improve model quality, simplify the management process, and automate the deployment of machine learning models in large-scale production environments.

As more companies invest in artificial intelligence and machine learning, a gap in understanding has opened between the data science teams developing machine learning models and the DevOps teams operating the applications those models power. As of today, only 15% of companies deploy AI across their entire operations. It doesn't help that 75% of machine learning models in production are never used due to issues in deployment, monitoring, management, and governance. Ultimately, this leads to a huge waste of time for the engineers and data scientists working on the models, a large net loss of the money invested by the company, and a general lack of trust that machine learning models can enable quantifiable growth.

Our model performance monitoring gives data scientists and MLOps practitioners visibility into the performance of their machine learning applications by monitoring the behavior and effectiveness of models in production. It lets data teams collaborate directly with DevOps teams, creating a continuous process of development, testing, and operational monitoring.

How to monitor your machine learning models

To use model performance monitoring within New Relic, you have a few different options:

  1. Bring your own data (BYOD): This is New Relic's recommended approach. Our ML model performance monitoring provides in-depth observability into how your ML models operate in production. BYOD can be used from any environment (a Python script, container, Lambda function, SageMaker, and so on) and it can be easily integrated with any machine learning framework (scikit-learn, Keras, PyTorch, TensorFlow, JAX, etc.). Using BYOD, you can bring your own ML model telemetry into New Relic and start getting value from your ML model data. In just a few minutes, you can get feature distributions, statistics, and prediction distributions alongside any other custom metrics you would like to monitor (see the sketch after this list). Read more on BYOD in our docs.

  2. Integrations: New Relic has also partnered with Amazon SageMaker, giving you a view of SageMaker performance metrics in New Relic and expanding access to observability for ML engineers and data science teams. Read more on our Amazon SageMaker integration.

  3. Partnerships: New Relic has partnered with seven different MLOps vendors who offer specific use cases and monitoring capabilities. Partners are a great way to gain access to curated performance and observability tooling, with out-of-the-box dashboards that give you instant visibility into your models.

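To make the BYOD idea concrete, here is a minimal sketch that records a single model prediction as a custom New Relic event using the newrelic-telemetry-sdk Python package. The event type (MLPrediction), attribute names, and environment variable are assumptions for this example; the BYOD library described in our docs wraps this kind of plumbing for you, so treat this as an illustration rather than the exact integration code.

```python
# Minimal BYOD-style sketch: send one model prediction to New Relic as a
# custom event. Event type and attribute names are illustrative only.
import os

from newrelic_telemetry_sdk import Event, EventClient


def record_prediction(model_name, features, prediction, latency_ms):
    """Report a single prediction as a custom New Relic event."""
    event = Event(
        "MLPrediction",  # hypothetical custom event type
        {
            "model_name": model_name,
            "prediction": float(prediction),
            "latency_ms": latency_ms,
            # Flatten feature values so they can be charted or alerted on.
            **{f"feature.{name}": value for name, value in features.items()},
        },
    )
    client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
    response = client.send(event)
    response.raise_for_status()  # surface ingest errors early


# Example usage with made-up values:
record_prediction(
    model_name="churn-classifier",
    features={"tenure_months": 14, "monthly_spend": 42.5},
    prediction=0.87,
    latency_ms=12,
)
```

Once events like this are flowing, you can chart prediction and feature distributions, track drift over time, or alert on them like any other custom data in New Relic.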

To start measuring machine learning model performance in minutes using any of these options, check out the model performance monitoring quickstarts.

How to monitor OpenAI GPT Apps

With the GPT series application integration, you can monitor OpenAI completion queries and log useful statistics about your requests in a customizable New Relic dashboard. By adding just two lines of code, you gain access to key performance metrics such as cost, response time, and sample inputs and outputs. The fully customizable dashboard also lets you track total requests, average tokens per request, and model names. Read more or install the integration by visiting our New Relic OpenAI Quickstart.
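As a rough sketch of what those "two lines" look like in practice, the snippet below follows the pattern of New Relic's nr-openai-observability package. The module name, the initialization call, the ingest-key environment variable, and the pre-1.0 openai client usage shown here are assumptions to verify against the quickstart.

```python
# Sketch of instrumenting OpenAI completion calls with New Relic's
# nr-openai-observability integration (package and call names assumed;
# check the quickstart for the current API and openai client version).
import os

import openai
from nr_openai_observability import monitor

# The "two lines": import the monitor and initialize it.
# Assumes a New Relic ingest key is configured via the environment
# variable the integration expects.
monitor.initialization()

openai.api_key = os.environ["OPENAI_API_KEY"]

# Each completion request is now reported to New Relic, feeding the
# dashboard's cost, response time, and token statistics.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="What is observability?",
    max_tokens=50,
)
print(response.choices[0].text)
```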
