2021

Inferring Concept Drift Without Labeled Data

MLOps
Concept Drift, Model Monitoring, Statistical Testing, scikit-learn, matplotlib

In practice, a trained machine learning model is never final: the complex relationships it learns are likely to evolve over time, and if this drift is not accounted for, the model's performance deteriorates.

To combat the divergence between static production models and dynamic inference environments, ML teams often adopt an adaptive learning strategy triggered by the detection of a drifting concept. This involves monitoring a performance metric of interest and alerting a retraining pipeline when the metric falls below a designated threshold, as sketched below.
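As a rough illustration only, such a supervised monitoring loop might look like the following minimal sketch. The threshold value, the choice of accuracy as the metric, and the `retrain_fn` hook are all illustrative assumptions rather than details from this project:

```python
from sklearn.metrics import accuracy_score

# Illustrative threshold; in practice this is tuned per application.
ACCURACY_THRESHOLD = 0.85

def monitor_batch(model, X_batch, y_batch, retrain_fn):
    """Score one labeled batch of inference data and trigger the
    retraining pipeline when accuracy drops below the threshold."""
    accuracy = accuracy_score(y_batch, model.predict(X_batch))
    if accuracy < ACCURACY_THRESHOLD:
        # Performance degradation suggests concept drift; retrain.
        retrain_fn(X_batch, y_batch)
    return accuracy
```

Note that this approach only works because ground-truth labels (`y_batch`) arrive alongside inference data, which is exactly the requirement the next paragraph calls into question.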

While this strategy is effective, it requires immediate access to an abundance of labels at inference time in order to quantify a change in system performance, a requirement that may be cost-prohibitive or outright impossible in many real-world machine learning applications.

This research project focuses on approaches for dealing with concept drift when labeled data is not readily accessible.
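One family of label-free approaches compares the distribution of model inputs or outputs on a trusted reference window against a recent production window using a two-sample statistical test. The following is a minimal sketch of that idea, assuming a Kolmogorov-Smirnov test on predicted probabilities; this is an illustrative choice, not necessarily the exact method explored in this project:

```python
from scipy.stats import ks_2samp

def detect_drift(reference_scores, production_scores, alpha=0.05):
    """Compare the model's predicted-probability distribution on a
    trusted reference window against a recent production window using
    a two-sample Kolmogorov-Smirnov test. A p-value below alpha
    suggests the output distribution has shifted, which can serve as
    a label-free proxy for concept drift."""
    statistic, p_value = ks_2samp(reference_scores, production_scores)
    return p_value < alpha, statistic, p_value

# Example usage with a fitted binary classifier `model` (hypothetical):
#   reference_scores  = model.predict_proba(X_reference)[:, 1]
#   production_scores = model.predict_proba(X_production)[:, 1]
#   drifted, stat, p = detect_drift(reference_scores, production_scores)
```

Because no labels are needed, the test can run continuously in production; the trade-off is that a shift in the score distribution is only a proxy signal, and not every distributional change corresponds to genuine concept drift.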