Deep neural networks have become essential for numerous applications, such as computer vision, reinforcement learning, and classification, due to their strong empirical performance. Unfortunately, these networks are difficult to interpret, which limits their applicability in settings where interpretability is important for safety, such as medical imaging. The neural tangent kernel (NTK) relates a deep neural network to a kernel machine, which provides some degree of interpretability. To further improve interpretability with respect to classification and the individual layers, we develop a new network expressed as a combination of multiple neural tangent kernels, one modeling each layer of the deep neural network individually, as opposed to past work that represents the entire network via a single neural tangent kernel. We demonstrate the interpretability of this model on two datasets, showing that the multiple-kernel model elucidates the interplay between the layers and the predictions.
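To make the idea of a per-layer kernel combination concrete, the following is a minimal hypothetical sketch: a prediction formed as a sum of per-layer kernel machines, f(x) = Σ_l Σ_i α_i k_l(x, x_i), so each layer's kernel contributes an interpretable additive share of the output. RBF kernels with different bandwidths stand in for the per-layer neural tangent kernels; this is an illustrative assumption, not the paper's actual NTK construction.

```python
import numpy as np

def rbf_kernel(X, Z, bandwidth):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 * bandwidth^2))."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 4))
y_train = np.sin(X_train.sum(axis=1))

# One stand-in kernel per "layer" (hypothetical bandwidths).
bandwidths = [0.5, 1.0, 2.0]

# Combined kernel machine: the Gram matrices of all per-layer kernels are summed.
K = sum(rbf_kernel(X_train, X_train, b) for b in bandwidths)

# Ridge-regularized fit of the shared coefficients alpha.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X_train)), y_train)

X_test = rng.normal(size=(5, 4))
K_test = sum(rbf_kernel(X_test, X_train, b) for b in bandwidths)
y_pred = K_test @ alpha

# Interpretability hook: each layer's additive contribution to every prediction.
contribs = np.stack([rbf_kernel(X_test, X_train, b) @ alpha for b in bandwidths])
print(y_pred.shape)   # predictions for the 5 test points
print(contribs.shape) # (num_layers, num_test) per-layer contributions
```

Because the model is additive over the per-layer kernels, `contribs.sum(axis=0)` recovers `y_pred` exactly, which is the kind of layer-wise attribution the multiple-kernel formulation enables.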
Recommended citation: Alemohammad, S., Babaei, H., Barberan, C. J., Liu, N., Luzi, L., Mason, B., & Baraniuk, R. G. (2021). NFT-K: Non-Fungible Tangent Kernels. arXiv preprint arXiv:2110.04945. (To appear at ICASSP 2022)