Interpreting model predictions in a human-readable form is key to building a great machine learning system. Understanding how an object's features affect model predictions helps debug the model and explain its behavior to stakeholders. Today Nikita shows how to use SHAP values to understand model predictions.
Interpret CatBoost models: built-in tools for understanding model predictions
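As a minimal sketch of the built-in tooling (using a small synthetic dataset for illustration), CatBoost can compute per-object SHAP values directly through get_feature_importance with type="ShapValues", without requiring a separate explainability library:

```python
from catboost import CatBoostClassifier, Pool
from sklearn.datasets import make_classification

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
train_pool = Pool(X, y)

model = CatBoostClassifier(iterations=100, verbose=False)
model.fit(train_pool)

# type="ShapValues" returns an array of shape (n_objects, n_features + 1):
# one SHAP value per feature, plus the expected (base) prediction
# in the last column. Per object, the values sum to the raw prediction.
shap_values = model.get_feature_importance(train_pool, type="ShapValues")
print(shap_values.shape)  # (200, 6)
print(shap_values[0])     # feature contributions for the first object
```

The resulting array can be passed to the shap package's plotting utilities (e.g. summary or force plots) for the visual analysis discussed in the video.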