Announcement - Defense No. 254

Student: João Victor Tinoco de Souza Abreu

Title: “Extracting Interpretable Classification Models via Readability-Enhanced Genetic Programming”

Advisor: Fernando Buarque de Lima Neto (PPGEC)

Co-advisor: Denis Mayr Lima Martins

External Examiner: Diego Marconi P. Ferreira Silva (UNICAP)

Internal Examiner: Cleyton Mário de Oliveira Rodrigues (PPGEC)

Date and time: August 4, 2022, at 2:30 PM.
Location: Remote (https://meet.google.com/pro-eama-hjq)


Abstract:

As the impact of Machine Learning (ML) on business and society grows, there is a need to make ML-based decisions transparent and interpretable, especially in light of fairness and the avoidance of bias and discrimination. High-level applications in complex scenarios are known to require more powerful models, such as Deep Learning (DL) models, yet users need to understand the functional details of those models, that is, how they produce their outcomes. This research aims to help on that front. Even though the use of opaque ML models (OMs) for decision-making support is a growing trend in many application fields, little is known about how interaction with the user adds value and which features and parameters should be used to clarify such OMs. This need for transparency motivated this research, as did the largely empirical basis on which model outcomes are currently interpreted. This work has the goal of extracting interpretable, transparent models from selected opaque decision models via a new readability-enhanced multi-objective Genetic Programming (GP) approach. The resulting, more interpretable decision models mimic the original OM and yield similar classification outcomes for the same input data, while keeping model complexity low. Our proposition is grounded on the assumption that higher model complexity hinders interpretability. In light of that, we adapt text readability metrics into proxies for evaluating ML interpretability. Our results on benchmark data sets demonstrate that the readability-based metrics put forward are effective means of assessing interpretability when compared with state-of-the-art approaches.
Experimentally, we observed the practical applicability of our approach by comparing its results with those of known competitors, showing that interpretable classification outcomes can be obtained with a readability-enhanced evolutionary approach.
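The core idea in the abstract, evolving candidate models that trade off agreement with the opaque model against a readability-based complexity penalty, can be sketched as a fitness evaluation. This is a minimal illustration, not the thesis's actual method: the functions `fidelity`, `readability_proxy`, and `multi_objective_fitness`, the toy models, and the weighted-sum scalarization are all assumptions made for the example (a real multi-objective GP would typically keep a Pareto front rather than a weighted sum).

```python
# Hedged sketch of a fidelity-vs-readability fitness for model extraction.
# All names and formulas here are illustrative assumptions, not the
# thesis's actual definitions.

def opaque_model(x):
    """Stand-in for the black-box classifier (OM) we want to mimic."""
    return 1 if x[0] * 0.7 + x[1] * 0.3 > 0.5 else 0

def candidate_model(x):
    """A candidate interpretable rule, as GP might evolve (hand-written here)."""
    return 1 if x[0] > 0.6 else 0

def fidelity(candidate, opaque, inputs):
    """Fraction of inputs on which the candidate agrees with the black box."""
    agree = sum(1 for x in inputs if candidate(x) == opaque(x))
    return agree / len(inputs)

def readability_proxy(expr_tokens):
    """Toy readability score: shorter expressions score higher, echoing how
    text readability metrics penalize long sentences."""
    length_penalty = len(expr_tokens) / 10.0
    return max(0.0, 1.0 - length_penalty)

def multi_objective_fitness(candidate, opaque, inputs, expr_tokens, w=0.7):
    """Weighted-sum scalarization of the two objectives (illustrative only)."""
    return (w * fidelity(candidate, opaque, inputs)
            + (1 - w) * readability_proxy(expr_tokens))

inputs = [(0.2, 0.1), (0.8, 0.9), (0.7, 0.2), (0.4, 0.6)]
tokens = ["x0", ">", "0.6"]  # token list of the candidate rule
score = multi_objective_fitness(candidate_model, opaque_model, inputs, tokens)
```

In a GP loop, this score would rank individuals for selection, so that evolution favors rules that both reproduce the opaque model's labels and stay short enough to read.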

