Ricardo Baeza-Yates

Fellowships


Machine learning (ML) models are typically evaluated by how often they succeed; a model with over 90% accuracy, for example, is considered good. Far less attention is paid to the impact of the errors. Even if errors account for only 1% of predictions, they are a problem for the individuals affected as well as for society at large whenever they harm people. Properly evaluating the negative impact of ML therefore requires understanding its errors, which involves three tasks:

(1) Understanding how to measure the harm of each error. If this harm can be measured, it can be minimized directly, rather than maximizing success under the flawed assumption that all errors cause the same harm.

(2) Understanding and characterizing the distribution of errors depending on different parameters of the problem, such as the complexity of the task.

(3) Predicting harmful errors so that the potential harm can be mitigated.

To answer these questions, Ricardo Baeza-Yates will select a few use cases that allow the harm of errors to be characterized automatically and that provide a good proxy for task complexity.
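As a minimal illustration of point (1), the sketch below contrasts plain accuracy, which treats every error alike, with a harm-weighted error measure that scores each mistake by how damaging it is. The task, labels, and harm weights are hypothetical assumptions chosen for illustration only; they are not part of the fellowship project.

```python
# Illustrative sketch only: the task, labels, and harm weights are assumptions.
import numpy as np

def accuracy(y_true, y_pred):
    """Standard accuracy: every error counts the same."""
    return np.mean(y_true == y_pred)

def expected_harm(y_true, y_pred, harm):
    """Harm-weighted error: each mistake contributes the harm assigned to it.

    harm[(true_label, predicted_label)] encodes how damaging that particular
    confusion is; correct predictions contribute zero harm.
    """
    return np.mean([harm.get((t, p), 0.0) if t != p else 0.0
                    for t, p in zip(y_true, y_pred)])

# Hypothetical binary screening task: label 1 = "needs human review".
y_true = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
y_pred = np.array([0, 0, 0, 1, 0, 1, 0, 1, 0, 0])

# Assumed harm weights: missing a case that needed review (1 -> 0) is far
# more damaging than flagging one unnecessarily (0 -> 1).
harm = {(1, 0): 10.0, (0, 1): 1.0}

print(f"accuracy:      {accuracy(y_true, y_pred):.2f}")             # 0.80
print(f"expected harm: {expected_harm(y_true, y_pred, harm):.2f}")  # 1.10
```

Under this view, two models with identical accuracy can differ sharply in expected harm, which is why minimizing harm and maximizing success are not the same objective.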