Nicola Ferro: From Algorithmic Explainability to System Performance Explainability and Prediction

The unparalleled rise of machine learning in many areas of industry and society, not least search and information access, has created the need for transparency and explainability of the outcomes of such algorithms, which typically behave as black boxes.

However, when it comes to the performance and effectiveness of such algorithms, there is not the same level of concern for explaining how and why a given performance level has been achieved, which system components caused it, how much they contributed to it, and how they interacted. There is a lack of rich explanatory models for system performance which, in turn, hampers the possibility of generalizing and predicting performance in new tasks and domains or, simply, over time.

This lack of performance explainability and predictability results in high economic and industrial costs, since system performance can be determined only post hoc, after building the system and putting it into production, instead of being estimated in advance, which would avoid developing solutions only to discover that they do not meet expectations.