
Representing Random Utility Choice Models with Neural Networks

Journal Article
Motivated by the successes of deep learning, the authors propose a class of neural network–based discrete choice models, called RUMnets, inspired by the random utility maximization (RUM) framework. The model formulates the agents' random utility function using a sample average approximation. They show that RUMnets sharply approximate the class of RUM discrete choice models: any model derived from random utility maximization has choice probabilities that can be approximated arbitrarily closely by a RUMnet, and, reciprocally, any RUMnet is consistent with the RUM principle. Their approach is closely related to the ranking-based models and mixtures of multinomial logits proposed in previous literature, but operates in a more general contextual setting. The authors derive an upper bound on the generalization error of RUMnets fitted on choice data and provide theoretical insights into their ability to predict choices on new, unseen data as a function of critical parameters of the data set and architecture. The models are estimated by leveraging open-source libraries for training neural networks. They find that RUMnets are competitive with several choice modeling and machine learning methods in terms of predictive accuracy on two real-world data sets. They also conduct synthetic experiments that isolate the effects of each architecture component.
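To illustrate the sample-average-approximation idea described above, the following is a minimal, hypothetical sketch in PyTorch (not the authors' implementation): each of a fixed number of latent draws gets its own small utility network over concatenated customer and product features, and choice probabilities are the average over draws of a softmax across the offered alternatives. The class name, layer sizes, and number of draws are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RUMnetSketch(nn.Module):
    """Illustrative sample-average-approximation choice model (not the paper's exact architecture).

    Each latent draw k has its own small MLP mapping (customer features, product features)
    to a scalar utility; choice probabilities average the per-draw softmax over alternatives.
    """

    def __init__(self, d_customer, d_product, hidden=32, n_draws=10):
        super().__init__()
        self.nets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_customer + d_product, hidden),
                nn.ELU(),
                nn.Linear(hidden, hidden),
                nn.ELU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_draws)
        ])

    def forward(self, customer, products):
        # customer: (batch, d_customer); products: (batch, n_alternatives, d_product)
        batch, n_alt, _ = products.shape
        cust = customer.unsqueeze(1).expand(-1, n_alt, -1)
        pairs = torch.cat([cust, products], dim=-1)            # (batch, n_alt, d_customer + d_product)
        per_draw = torch.stack(
            [torch.softmax(net(pairs).squeeze(-1), dim=-1) for net in self.nets],
            dim=0,
        )                                                      # (n_draws, batch, n_alt)
        return per_draw.mean(dim=0)                            # sample-average choice probabilities

# Estimation would minimize the negative log-likelihood of observed choices, e.g.:
# probs = model(customer, products)
# loss = -torch.log(probs[torch.arange(batch), chosen_index]).mean()
```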