Announcement - Defense No. 218

Student: Letícia Virgínia Netto Lapenda

Title: “Autoencoder latent space: an empirical study”.

Advisor: Prof. Carmelo José Albanez Bastos Filho

Date/Time: September 25, 2020 (11:00 a.m.)
Location: Escola Politécnica de Pernambuco – Remote Format (http://meet.google.com/hvn-jtom-qpa)


Abstract:

“Feature extraction is essential to many machine learning tasks. By extracting features, it is possible to reduce the dimensionality of datasets, focusing on the most relevant features and minimizing redundancy. Autoencoders (AEs) are neural network architectures commonly used for feature extraction. A usual metric for evaluating AEs is the reconstruction error, which compares the AE output with the original data. However, many applications depend on how the input representations in the intermediate layers of AEs, i.e. the latent variables, are distributed. Therefore, in addition to the reconstruction error, an interesting metric for studying the latent variables is the Kullback-Leibler divergence (KLD). This work analyzes how variations in the AE training process impact the aforementioned metrics. Those variations are: (1) the AE depth, (2) the AE middle layer architecture, and (3) the data setup used for training. The results suggest a possible relation between the KLD and the reconstruction error: lower errors occurred for higher KLDs and less compressed latent variables, i.e. more neurons in the AE middle layers.”
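To make the two metrics in the abstract concrete, below is a minimal sketch of an autoencoder whose reconstruction error (MSE) and an approximate KLD of the latent variables are measured after training. The layer sizes, optimizer, toy data, and the Gaussian closed-form KLD estimator are all illustrative assumptions for this sketch, not the setup used in the thesis.

```python
# Illustrative sketch only: a small PyTorch autoencoder with a
# reconstruction-error metric and an approximate KLD of the latents.
# Architecture, data, and KLD estimator are assumptions, not the
# thesis configuration.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        # Encoder compresses the input down to the middle (latent) layer.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        # Decoder reconstructs the input from the latent variables.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def gaussian_kld_to_standard_normal(z):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) computed from the sample
    mean and variance of the latent activations (a Gaussian approximation;
    one of several ways a latent-space KLD could be estimated)."""
    mu, var = z.mean(dim=0), z.var(dim=0)
    return 0.5 * (var + mu**2 - 1.0 - torch.log(var)).sum()

# Toy training loop on random data (stand-in for a real dataset).
x = torch.randn(1024, 64)
model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):
    recon, z = model(x)
    loss = loss_fn(recon, x)  # reconstruction error drives training
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    recon, z = model(x)
    print("reconstruction MSE:", loss_fn(recon, x).item())
    print("approx. KLD of latents vs N(0, I):",
          gaussian_kld_to_standard_normal(z).item())
```

Varying `latent_dim` (the middle-layer width) or the encoder/decoder depth in such a sketch is the kind of manipulation the abstract describes: one can then observe how the reconstruction error and the latent-space KLD move together.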
