This article explains how to compare recent runs in Leapwork Performance using Trend analysis. Use it to understand whether performance is improving, staying stable, or regressing across comparable runs.
What this article covers
- Generating a trend analysis for a sequence in a timeline
- Narrowing the analysis by endpoint, region, metric, and recent-run count
- Setting a baseline run for comparison
- Interpreting the chart, summary cards, and comparison table
- Exporting the analysis to CSV
Before you start
- Open the Results page for a run in the project you want to analyze.
- Make sure you can view run results for the project.
- You need more than one comparable run to get meaningful trend data.
Filter options at a glance
| Filter | What it does |
|---|---|
| Sequence | Selects the sequence in the timeline that you want to compare across runs. |
| Endpoint | Lets you compare the full sequence or focus on one endpoint inside that sequence. |
| Metric | Lets you compare Avg response time, P90, P95, P99, or Error rate. |
| Number of recent runs | Controls how many recent runs are included in the analysis. |
| Region | Lets you compare all regions together or focus on one region. |
Generate a trend analysis
- Open the Results page for a run.
- Open the Trend analysis tab.
- In Sequence, select the sequence you want to analyze.
- Optional: in Endpoint, keep All endpoints or select a single endpoint.
- In Metric, select the metric you want to compare.
- In Number of recent runs, choose how many recent runs to include.
- Optional: in Region, keep All regions or select one region.
- Select Generate analysis.
Leapwork Performance then builds a view of the selected metric across the recent runs that match your filters.
Set a baseline
After the analysis is generated, you can set a baseline run directly from the comparison table.
- Find the run you want to use as your reference point.
- Select that row in the table.
When you select a baseline:
-
the selected row is highlighted
-
the baseline is marked in the chart and table
-
comparisons update to show how other runs differ from that baseline
-
the banner highlights how the latest included run compares with the baseline
Use a baseline when you want to compare a new build against a known-good run instead of only comparing against the average.
Interpret the results
| Area | What it shows |
|---|---|
| Trend chart | The selected metric across the included runs, so you can spot upward or downward movement over time. |
| Performance banner | A quick statement showing whether the latest included run is faster, slower, or equal compared with the selected baseline. |
| Average card | The average value for the selected metric across the included runs. |
| Baseline card | The metric value for the run you selected as the baseline. |
| Std. deviation card | How much the selected metric varies across the included runs. Higher variation means the run results are less consistent. |
| Comparison table | Per-run details including date, metric value, change versus the previous run, change versus the average, change versus the baseline, and peak virtual users. |
For response-time metrics and error rate, lower values generally indicate better performance.
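If it helps to see the arithmetic behind the summary cards, the sketch below computes an average, a standard deviation, and a percentage change versus a baseline from hypothetical run values. The run values and the choice of population standard deviation are assumptions for illustration only; the article does not document exactly how Leapwork Performance calculates these figures.

```python
from statistics import mean, pstdev

# Hypothetical avg response times (ms) for five included runs, oldest first.
runs = [412.0, 398.0, 405.0, 390.0, 451.0]
baseline = runs[2]  # suppose run 3 is selected as the baseline (405.0 ms)

avg = mean(runs)       # the kind of value shown on the Average card
spread = pstdev(runs)  # the kind of value shown on the Std. deviation card

# Change of the latest included run versus the baseline, as a percentage.
latest = runs[-1]
vs_baseline = (latest - baseline) / baseline * 100

print(f"average: {avg:.1f} ms")             # average: 411.2 ms
print(f"std deviation: {spread:.1f} ms")    # std deviation: 21.2 ms
print(f"latest vs baseline: {vs_baseline:+.1f}%")  # latest vs baseline: +11.4%
```

In this example the latest run is about 11% slower than the baseline, which is the kind of difference the performance banner and the "change versus baseline" column surface for you.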
Export the analysis
Select Export to download the current trend analysis as a CSV file.
The export includes:
- run number
- date
- selected metric value
- comparison versus the previous run
- comparison versus the average
- comparison versus the baseline
- peak virtual users
If you change the filters, generate or update the analysis again before exporting so the CSV matches the view you want to share.
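Because the export is a plain CSV, it is straightforward to post-process. The sketch below reads a small, made-up export with Python's standard `csv` module; the header names and values are assumptions for illustration, since the exact column labels in the real file may differ.

```python
import csv
import io

# Hypothetical two-row export. Treat these headers as an example only;
# check the actual exported file for the real column names.
sample = io.StringIO(
    "Run,Date,Metric value (ms),Vs previous,Vs average,Vs baseline,Peak VUs\n"
    "41,2024-05-01,405,-1.7%,-1.5%,0.0%,200\n"
    "42,2024-05-02,451,+11.4%,+9.7%,+11.4%,200\n"
)

rows = list(csv.DictReader(sample))

# Find the run with the highest metric value, e.g. to flag a regression.
slowest = max(rows, key=lambda r: float(r["Metric value (ms)"]))
print(slowest["Run"])
```

A script like this can feed the exported numbers into a dashboard or a release checklist without re-generating the analysis in the UI.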
Tips for more useful comparisons
- Compare runs for the same sequence under similar test conditions.
- Use the Region filter when geography may affect latency.
- Use a baseline run when you want a stable reference point for release-to-release comparison.
- Select a single endpoint when you want to isolate one API instead of viewing the full sequence.
Troubleshooting
I see "No data available"
Try these checks:
- confirm that the selected sequence has data in the runs you included
- switch Endpoint back to All endpoints
- switch Region back to All regions
- increase the number of recent runs if too few runs are included
The comparison is not useful
Make sure you are comparing runs that are meaningfully alike. Large differences in traffic, regions, or the selected endpoint can make trend patterns harder to interpret.
I do not see the Trend analysis tab
Availability can depend on your workspace configuration and your access to run results. If the tab is not available, contact your workspace admin or Leapwork support.
My export does not match what I expected
Check that the filters match the view you want to share, select Generate analysis or Update analysis to refresh the results, and then export again.