Performance Engineering

Performance is critical in today’s software systems, especially when user experience and retention are on the line. Companies invest heavily in performance testing and monitoring, yet issues still slip through: slow response times, frustrated users, and expensive firefighting, all of which translate into negative business impact. At REALISE Lab, we focus on identifying these challenges early and on designing methods that keep systems fast, scalable, and reliable under pressure. We have pursued this goal through academic and industrial collaborations:

  • We provided a tool that leverages static analysis to identify bad practices in benchmarks used for performance testing (see the illustrative snippet after this list).
  • We proposed an approach for detecting performance regressions early by propagating component-level anomalies across the architectural model.
  • We collaborated with Mozilla to provide a dataset of performance measurements, alerts, and bugs extracted from Mozilla’s systems. We believe this dataset will help researchers and practitioners in a range of performance-related areas.
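
To illustrate the kind of issue such a tool can flag, below is a minimal, hypothetical JMH sketch of one common bad practice: computing a value but never consuming it, which lets the JIT compiler eliminate the very work the benchmark is supposed to measure. The class and method names are illustrative only and are not taken from our tool or the paper.

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.infra.Blackhole;

    public class ExampleBenchmarks {

        // Bad practice: the computed value is never returned or consumed,
        // so the JIT may remove the computation as dead code and the
        // benchmark ends up measuring almost nothing.
        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.NANOSECONDS)
        public void sumIgnoredResult() {
            long sum = 0;
            for (int i = 0; i < 1_000; i++) {
                sum += i;
            }
            // 'sum' is dead code from the compiler's point of view.
        }

        // Fixed version: the result is handed to JMH's Blackhole, which
        // prevents dead-code elimination and keeps the measurement meaningful.
        @Benchmark
        @BenchmarkMode(Mode.AverageTime)
        @OutputTimeUnit(TimeUnit.NANOSECONDS)
        public void sumConsumedResult(Blackhole bh) {
            long sum = 0;
            for (int i = 0; i < 1_000; i++) {
                sum += i;
            }
            bh.consume(sum);
        }
    }

Consuming the result via Blackhole (or returning it from the benchmark method) is the standard JMH way to avoid this pitfall.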

Related Publications:

A Dataset of Performance Measurements and Alerts from Mozilla (Data Artifact)
Authors: Mohamed Bilel Besbes, Diego Elias Costa, Suhaib Mujahid, Gregory Mierzwinski, Marco Castelluccio
Venue: International Conference on Performance Engineering (ICPE)

Early Detection of Performance Regressions by Bridging Local Performance Data and Architectural Models
Authors: Lizhi Liao, Simon Eismann, Heng Li, Cor-Paul Bezemer, Diego Elias Costa, André van Hoorn
Venue: International Conference on Software Engineering (ICSE), 2025

What’s Wrong With My Benchmark Results? Studying Bad Practices in JMH Benchmarks
Authors: Diego Costa, Cor-Paul Bezemer, Philipp Leitner, Artur Andrzejak
Venue: IEEE Transactions on Software Engineering (TSE)