

Organizing for Permanent Beta: Performance Measurement Before vs Performance Monitoring After Release of Digital Services

Journal Article
Purpose: Due to the complexity of digital services, companies are increasingly forced to offer their services "in permanent beta", requiring continuous fine-tuning and updating. Complexity makes it extremely difficult to predict when and where the next service disruption will occur. The authors examine what this means for performance measurement in digital service supply chains.

Design/methodology/approach: The authors use a mixed-method research design that combines a longitudinal case study of a European digital TV service provider and a system dynamics simulation analysis of that service provider's digital service supply chain.

Findings: With increased levels of complexity, traditional performance measurement methods, focused on detecting software bugs before release, become fragile or futile. The authors find that monitoring the performance of the service after release, with fast mitigation when service incidents are discovered, appears to be superior. This entails organizational change, as traditional methods such as quality assurance become less important.

Research limitations/implications: The performance of digital services needs to be monitored by combining automated data collection about the status of the service with data interpretation using human expertise. Investing in human expertise is just as important as investing in automated processes.

Originality/value: The authors draw on unique empirical data collected from a digital service provider's struggle with performance measurement of its service over a period of nine years. The authors use simulations to show the impact of complexity on staff allocation.

Emeritus Professor of Technology and Operations Management