
The dashboard

This dashboard provides a quick overview of recently executed playbooks. It helps you identify trends, spot regressions and determine which playbooks require attention. It is not meant to replace detailed log analysis, but it saves you from having to inspect every run manually.

Failed vs executed tests last month

This graph shows the trend of all tests executed during the last month.

Key points to watch:

  • A rising number of failed tests indicates decreasing stability.
    Review recent changes to playbooks or underlying systems.
  • A high number of “not run” tests usually indicates a configuration issue, such as missing test data, timeouts or incorrect triggers.
  • Spikes around releases can be normal, but persistent instability is not.
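The “rising number of failed tests” signal above can also be checked programmatically. The following is a minimal sketch in Python, assuming you export the graph’s data as a list of daily failed-test counts for the last month (the function name and window size are illustrative, not part of the dashboard):

```python
def failures_rising(daily_failed, window=7):
    """Flag decreasing stability: compare the average number of failed
    tests in the most recent window against the preceding window."""
    if len(daily_failed) < 2 * window:
        return False  # not enough history to compare two windows
    recent = daily_failed[-window:]
    previous = daily_failed[-2 * window:-window]
    return sum(recent) / window > sum(previous) / window

# Example: failures creep up in the second week
print(failures_rising([1, 0, 2, 1, 0, 1, 1, 3, 4, 2, 5, 3, 4, 6]))  # True
```

Comparing two fixed windows keeps the check robust against a single noisy day, which matters because, as noted above, a spike around a release can be normal.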

Test results of the last 5 run playbooks in percentages

This section shows the percentage of passed, failed and skipped tests for each of the most recent playbook runs.

How to interpret this:

  • A high percentage of passed tests indicates a stable playbook.
  • Consistent skipped tests often point to misconfiguration, incorrect tags or unavailable test data.
  • A noticeable percentage of failed tests signals that the playbook or a dependency needs investigation.
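The percentages shown in this section are a straightforward breakdown of raw test counts per run. As a minimal sketch, assuming each playbook run is recorded as passed/failed/skipped counts (the function and field names here are illustrative):

```python
def result_percentages(passed, failed, skipped):
    """Convert raw test counts for one playbook run into percentages."""
    total = passed + failed + skipped
    if total == 0:
        # No tests executed at all; avoid division by zero
        return {"passed": 0.0, "failed": 0.0, "skipped": 0.0}
    return {
        "passed": round(100 * passed / total, 1),
        "failed": round(100 * failed / total, 1),
        "skipped": round(100 * skipped / total, 1),
    }

# Example: a run with 45 passed, 3 failed and 2 skipped tests
print(result_percentages(45, 3, 2))
# {'passed': 90.0, 'failed': 6.0, 'skipped': 4.0}
```

A run like the example above would read as stable; the same helper applied to a run with many skipped tests would surface the misconfiguration pattern described in the list above.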

Test results of favorite playbooks in percentages

If you have many different playbooks and want to keep a close eye on a specific set, you can add them as favorites to your dashboard.
