Investigating Similarity Metrics for Convolutional Neural Networks in the Case of Unstructured Pruning

Type:

Conference paper

Authors:

Alessio Ansuini, Eric Medvet, Felice Andrea Pellegrino, Marco Zullich

In:

9th International Conference on Pattern Recognition Applications and Methods (ICPRAM), held in Valletta (Malta)

Year:

2020

Abstract

Deep Neural Networks (DNNs) are essential tools of modern science and technology. The current lack of explainability of their inner workings, and of principled ways to tame their architectural complexity, has triggered a lot of research in recent years. There is hope that, by making sense of the representations in their hidden layers, we could gather insights on how to reduce model complexity, without performance degradation, by pruning useless connections. It is then natural to ask the following question: how similar are the representations of pruned and unpruned models? Even small insights could help in finding principled ways to design good lightweight models, enabling significant savings of computation, memory, time, and energy. In this work, we investigate this problem empirically across a wide spectrum of similarity measures, network architectures, and datasets. We find that the results depend critically on the similarity measure used; we briefly discuss the origin of these differences and conclude that further investigation is required in order to make substantial advances.
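
To illustrate the kind of comparison involved, below is a minimal sketch of linear Centered Kernel Alignment (CKA), one widely used measure of representational similarity, applied to the activations of the same layer in an unpruned and a pruned network. This is an assumption-laden illustration, not the paper's implementation: the activation matrices `acts_unpruned` and `acts_pruned` are hypothetical placeholders, with rows indexing input examples and columns indexing the units of one layer.

```python
# Minimal sketch (not from the paper): linear CKA between two
# activation matrices of shape (n_examples, n_units).
import numpy as np


def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices for the same inputs."""
    # Center each unit's activations over the examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Compare the example-by-example Gram structures; the normalization
    # bounds the result in [0, 1], with 1 meaning identical representations
    # up to isotropic scaling and orthogonal transformation.
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)


# Hypothetical usage: layer activations on the same inputs, before and
# after unstructured (weight-level) pruning; here simulated with noise.
rng = np.random.default_rng(0)
acts_unpruned = rng.standard_normal((512, 128))
acts_pruned = acts_unpruned + 0.1 * rng.standard_normal((512, 128))
print(f"linear CKA: {linear_cka(acts_unpruned, acts_pruned):.3f}")
```

Measures of this family are invariant to orthogonal transformations and isotropic scaling of the representations, which is one reason different similarity measures can give different verdicts on the same pair of networks.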