The biggest problem with machine learning model accuracy is that all models degrade. They perform well initially, but as the real world changes, they no longer capture the underlying structure. Both concept drift (changes in the meaning and range of values of the target variable being predicted) and data drift (changes in the range of values of the independent features) affect all predictive use cases to varying degrees. The most important remaining problem in the model lifecycle isn't bad data or bad algorithms; it is the failure to detect drift and to retrain and adjust the model efficiently.
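To make data drift concrete, here is a minimal sketch of one common check: comparing a feature's distribution in production against its training baseline. The function name, the sample values, and the alert threshold of three standard deviations are all illustrative assumptions, not part of any particular product.

```python
import statistics

def drift_score(baseline, current):
    """Measure how far the current feature mean has moved from the
    training baseline, expressed in baseline standard deviations."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(current) - base_mean) / base_std

# Feature values seen at training time vs. later in production.
training = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.0, 9.7]
production = [12.5, 12.9, 13.1, 12.7, 12.8, 13.0, 12.6, 12.4]

score = drift_score(training, production)
if score > 3.0:  # alert threshold is a tunable assumption
    print(f"data drift detected (score={score:.1f})")
```

Real monitoring systems use more robust statistics (population stability index, Kolmogorov-Smirnov tests) over many features, but the principle is the same: the distribution the model was trained on is no longer the distribution it sees.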
One of the reasons for this gap is the difficulty of monitoring feature and target values as they change over time. By the time a model begins to degrade, most machine learning practitioners have moved on to the next critical predictive problem. This challenge calls for a new category of enterprise IT and data science infrastructure.
Enter machine learning review and monitoring (MLRAM). It fits into the general category of machine learning operations (MLOps), but done properly it is a complex enough problem that it needs to be considered on its own. While there are toolkits such as scikit-multiflow that help monitor data drift in features, such tools don't address the bulk of the work, including aggregating actuals versus predictions at various time intervals, visualizing the results, and automating efficient retraining.
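The aggregation step mentioned above can be sketched in a few lines: group (timestamp, actual, predicted) records into time buckets and compute per-bucket accuracy, which is the time series an accuracy monitor would plot and alert on. The function name, record layout, and monthly bucketing are assumptions for illustration only.

```python
from collections import defaultdict
from datetime import datetime

def accuracy_by_interval(records, interval="%Y-%m"):
    """Group (timestamp, actual, predicted) records into time buckets
    and compute per-bucket accuracy. The bucket key format (monthly
    here) is a tunable assumption."""
    buckets = defaultdict(lambda: [0, 0])  # [hits, total] per bucket
    for ts, actual, predicted in records:
        key = ts.strftime(interval)
        buckets[key][1] += 1
        if actual == predicted:
            buckets[key][0] += 1
    return {k: hits / total for k, (hits, total) in sorted(buckets.items())}

records = [
    (datetime(2020, 1, 5), 1, 1),
    (datetime(2020, 1, 20), 0, 0),
    (datetime(2020, 2, 3), 1, 0),   # accuracy starts slipping
    (datetime(2020, 2, 18), 1, 1),
]
print(accuracy_by_interval(records))  # {'2020-01': 1.0, '2020-02': 0.5}
```

A drop between consecutive buckets is the signal that would trigger a visualization review or an automated retraining job.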
Auger’s MLRAM product performs the following functions:

- Aggregates actual versus predicted values at configurable time intervals to track model accuracy over time
- Visualizes accuracy trends and drift so degradation is easy to spot
- Automates efficient retraining when accuracy degrades
MLRAM supports most of these features with any predictive model. Retraining is supported for predictive models created with Microsoft’s Azure AutoML and Auger.AI.
Visit the product page to see how MLRAM can monitor your model’s accuracy and keep it performing well. The product is free to try at limited volume. We’d like to hear your feedback on MLRAM and what else you need in an accuracy monitoring tool.