Abstract
<jats:p>This article explores the use of Kolmogorov-Arnold Networks (KANs) as interpretable machine learning models, in contrast to the black-box architectures motivated by the universal approximation theorem in machine and deep learning. Emphasis is placed on architectures based on B-splines, T-splines, and FastKAN with radial basis functions (RBFs), which allow for transparent function approximation. The article discusses how symbolic representations emerge from trained models, how node pruning simplifies the network structure and thereby the resulting interpretable model, and the potential of these techniques to uncover latent physical models or to aid scientific modeling where interpretability is essential.</jats:p>
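To make the FastKAN idea concrete, the following is a minimal illustrative sketch (not the authors' implementation): a single KAN edge whose learnable one-dimensional activation is a weighted sum of Gaussian RBFs, fitted here by least squares. The grid of centers, the bandwidth `gamma`, and the target function are hypothetical choices for demonstration.

```python
import numpy as np

def rbf_edge(x, centers, weights, gamma=1.0):
    """Evaluate phi(x) = sum_i w_i * exp(-gamma * (x - c_i)^2),
    a FastKAN-style learnable 1-D activation on one edge."""
    x = np.asarray(x, dtype=float)[..., None]     # shape (..., 1)
    basis = np.exp(-gamma * (x - centers) ** 2)   # shape (..., n_centers)
    return basis @ weights                        # shape (...,)

# Hypothetical example: fit the edge function to sin(x) on [-pi, pi].
centers = np.linspace(-np.pi, np.pi, 16)          # RBF grid (assumed choice)
xs = np.linspace(-np.pi, np.pi, 200)
basis = np.exp(-1.0 * (xs[:, None] - centers) ** 2)
weights, *_ = np.linalg.lstsq(basis, np.sin(xs), rcond=None)

approx = rbf_edge(xs, centers, weights)
max_err = float(np.max(np.abs(approx - np.sin(xs))))
print(max_err)  # small residual: the RBF edge approximates sin transparently
```

Because the edge function is an explicit weighted sum of named basis functions, its learned weights can be inspected directly, which is the transparency property the abstract refers to; in a full KAN, many such edges are combined and unimportant ones are pruned away.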
Keywords
interpretable, machine learning, models