Review your test results

We’re nearing the end of our initial tutorial, but we’ve saved the best for last: turning all of that testing effort into insights that answer business-critical questions. This is where Performance shines!

This guide explains how Performance presents insights during and after a timeline run. Insights help you interpret performance results, evaluate user experience, understand behaviour across regions, and review patterns detected by Performance’s AI analysis.

Review estimated usage before running

Before execution, Performance shows predicted consumption based on your configured load. These predictions include:

  • Expected virtual user minutes

  • Planned peak virtual users

  • Estimated runtime

This helps ensure that your run fits within your quota and provides transparency about how your load profile translates into consumption.
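As a back-of-the-envelope illustration (not Performance’s actual accounting model), virtual-user minutes for a ramp-and-hold load profile can be estimated as the area under the load curve:

```python
# Hypothetical sketch: estimate virtual-user minutes as the area under a
# piecewise-linear load profile. The function name and profile shape are
# assumptions, not Performance's real formula.

def estimate_vu_minutes(profile):
    """profile: list of (minute, virtual_users) points, sorted by time."""
    total = 0.0
    for (t0, u0), (t1, u1) in zip(profile, profile[1:]):
        # Trapezoidal area: average users over the interval times its length.
        total += (u0 + u1) / 2 * (t1 - t0)
    return total

# Ramp from 0 to 100 users over 2 minutes, then hold for 8 minutes.
profile = [(0, 0), (2, 100), (10, 100)]
print(estimate_vu_minutes(profile))  # 900.0 VU minutes
```

The peak of the profile gives the planned peak virtual users, and the last timestamp gives the estimated runtime.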

View the Timeline as it runs

Live Timeline Playback

The timeline animates as each track advances, showing progress across all sequences.

Real-Time Graphs

Two key curves appear:

  • Response time curve (white): Request duration in milliseconds at the selected timestamp

  • Requests per second curve (blue): Throughput for the selected timestamp

These update continuously during execution.
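Conceptually, the throughput value at any timestamp is a windowed request count. A minimal sketch, assuming raw request timestamps are available (the function name and one-second window are illustrative, not Performance’s internals):

```python
from bisect import bisect_left, bisect_right

def rps_at(timestamps, t, window=1.0):
    """Requests per second around time t, given sorted request timestamps
    in seconds. Counts requests inside a window centred on t."""
    lo = bisect_left(timestamps, t - window / 2)
    hi = bisect_right(timestamps, t + window / 2)
    return (hi - lo) / window

# Five requests: three near t=0.2s, two near t=1.15s.
arrivals = [0.1, 0.2, 0.3, 1.1, 1.2]
print(rps_at(arrivals, 0.25))  # 3.0 requests/second
```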

Live Metric Tiles

At a glance, you can see:

  • Fastest response

  • Average and median response

  • Percentiles (P95 and P99)

  • Highest observed response

  • Request rate and bytes transferred

These values reflect the current moment in the run and update as the test progresses.
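These tile values are standard summary statistics over the response times observed so far. A minimal sketch of how they can be computed (the nearest-rank percentile method is an assumption; Performance may use a different interpolation):

```python
import statistics

def metric_tiles(response_times_ms):
    """Summary values like those shown on the live tiles (illustrative)."""
    ordered = sorted(response_times_ms)

    def pct(p):
        # Nearest-rank percentile: the value at ceiling(p% of n), 1-indexed.
        idx = max(0, round(p / 100 * len(ordered)) - 1)
        return ordered[idx]

    return {
        "fastest": ordered[0],
        "average": statistics.fmean(ordered),
        "median": statistics.median(ordered),
        "p95": pct(95),
        "p99": pct(99),
        "slowest": ordered[-1],
    }

tiles = metric_tiles(list(range(1, 101)))  # 1..100 ms
print(tiles["p95"], tiles["p99"])  # 95 99
```

Percentiles are usually more informative than the average here: a healthy mean can hide a long tail that P95 and P99 expose.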

Inspect track-specific results

Clicking any track in the timeline filters the results to that specific sequence. You can then examine:

  • Count of requests executed for that sequence

  • Response times for each individual request

  • Performance characteristics per request, including percentiles and maxima

This is useful for isolating behaviour in different parts of a system when multiple services are under load.
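Conceptually, filtering to a track means restricting the raw results to one sequence and aggregating per request. A sketch under an assumed record schema (the `sequence`, `url`, and `duration_ms` keys are hypothetical, not Performance’s export format):

```python
from collections import defaultdict

def results_for_track(requests, sequence):
    """Filter raw request records to one sequence and summarise per URL."""
    selected = [r for r in requests if r["sequence"] == sequence]
    by_url = defaultdict(list)
    for r in selected:
        by_url[r["url"]].append(r["duration_ms"])
    return {
        "count": len(selected),
        "per_request_max_ms": {url: max(d) for url, d in by_url.items()},
    }

records = [
    {"sequence": "checkout", "url": "/cart", "duration_ms": 120},
    {"sequence": "checkout", "url": "/cart", "duration_ms": 90},
    {"sequence": "browse", "url": "/home", "duration_ms": 50},
]
print(results_for_track(records, "checkout"))
# {'count': 2, 'per_request_max_ms': {'/cart': 120}}
```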

Understanding Apdex-based (Application Performance Index) metrics

Performance provides user experience indicators based on the Apdex framework. These appear as horizontal bands across the performance graphs.

  • Green: Satisfactory

  • Yellow: Tolerable

  • Red: Frustrating

The position of the response time curve relative to these thresholds indicates the quality of experience. When response times rise into the red zone, it signals that users are likely to abandon the session due to poor performance.

Each track’s Apdex scale reflects the accumulated experience for the virtual users active at that moment.
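The standard Apdex formula counts satisfied responses (at or below a target threshold T) at full weight and tolerating responses (between T and 4T) at half weight; frustrated responses count for nothing. Performance’s exact scoring may differ, but the idea looks like this:

```python
def apdex(response_times_ms, t_ms):
    """Standard Apdex score in [0, 1]: satisfied (<= T) count fully,
    tolerating (<= 4T) count half, frustrated (> 4T) count zero."""
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)

# With T = 500 ms: three satisfied, one tolerating, one frustrated.
print(apdex([200, 300, 450, 1200, 2500], 500))  # 0.7
```

A score near 1.0 corresponds to the green band, while a falling score signals that more responses are landing in the yellow and red zones.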

Selecting a specific point along the timeline reveals the recorded values at that exact moment. This enables temporal analysis, helping you determine when the system slowed or when throughput increased.

AI-powered performance analysis highlights

After a run completes, Performance provides an AI-generated summary under Performance Analysis Highlights. This section contains consolidated insights, including:

  • Reliability observations (for example, zero API errors)

  • Throughput consistency

  • Endpoint-specific notes (such as JavaScript or image performance)

  • Detected patterns and trends

  • Early warnings pointing to possible issues

  • Confirmation of areas performing consistently well

These insights help you quickly understand the overarching result of the run without manually scanning every metric.

Performance Insights provides a complete view of performance during and after a load test. With live curves, percentile metrics, regional comparisons, user experience thresholds, and AI-assisted summaries, you gain a full understanding of how your application behaves under stress. This enables informed tuning, early detection of bottlenecks, and accurate evaluation of service quality.