We develop a method for detecting statistical interactions in data by directly interpreting the trained weights of a feedforward multilayer neural network. By structuring the neural network to reflect statistical properties of the data and applying sparsity regularization, we are able to leverage the weights to detect interactions with performance comparable to the state-of-the-art, without searching an exponential space of candidate interactions. We obtain our computational savings by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in the weight matrices. We use these observations to identify higher-order interactions with a simple traversal over the input weight matrix. In experiments on simulated and real-world data, we demonstrate the performance of our method and the importance of the discovered interactions.
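The core idea (features interact only if their weights converge on the same nonlinear hidden unit, and that unit's effect must survive to the output) can be sketched for a one-hidden-layer network. This is an illustrative approximation, not the paper's full algorithm: the network `W1`/`w2`, the min-based surrogate, and the function name are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trained weights: 4 hidden units, 5 input features.
# Assumes y = w2 . relu(W1 x + b); values are random stand-ins.
W1 = rng.normal(size=(4, 5))
w2 = rng.normal(size=4)

def pairwise_interaction_strengths(W1, w2):
    """Score each feature pair (i, j) by summing, over hidden units,
    the unit's output influence |w2[h]| times the smaller of the two
    input-weight magnitudes. The min is a surrogate for "both features
    must reach the same nonlinear unit" for an interaction to form."""
    n_features = W1.shape[1]
    abs_w1 = np.abs(W1)
    influence = np.abs(w2)
    strengths = {}
    for i in range(n_features):
        for j in range(i + 1, n_features):
            s = float(np.sum(influence * np.minimum(abs_w1[:, i], abs_w1[:, j])))
            strengths[(i, j)] = s
    # Rank candidate pairwise interactions, strongest first.
    return sorted(strengths.items(), key=lambda kv: -kv[1])

ranking = pairwise_interaction_strengths(W1, w2)
print(ranking[0])  # highest-scoring feature pair and its strength
```

Scoring all pairs this way is a single pass over the first weight matrix, which is why no exponential search over candidate interaction sets is needed; higher-order candidates extend the same min-surrogate to the top-d weights per hidden unit.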