Viewing monitor results
Postman allows you to track the health and performance of your APIs. With Postman, you can stay up to date on what's happening across all monitors in your workspace and dive into individual monitors to examine test results and performance over time.
You can view your monitors in Postman by navigating to your workspace and selecting Monitors in the left sidebar. Select your monitor to open a tab detailing its latest performance.
Monitors in team workspaces are visible to all members of the workspace.
You can use the Monitor Summary to see how your APIs have performed over time. Each monitor run is represented by a bar in the graph.
The upper section charts your monitor's average response time for each run, while the lower section shows the number of failed tests for each run across all regions. To view the exact response time and failure percentage, hover over an individual run.
A red bar indicates that either tests failed or errors occurred during the run. For more information, view your Console Log.
You can select Individual requests to break down your monitor summary into separate requests.
You can use filters to identify recurring patterns in your monitoring runs by selecting particular requests, run types, results, and regions (if applicable).
Select Clear Filters to return to your original dashboard view.
You can filter by request to compare an individual request's response time across runs. Open the All Requests drop-down menu under Filter By, then select your request.
You can filter by run type to compare how the response time changes between manual runs, scheduled runs, and webhook runs. Open the Type: All drop-down menu, then select the type of run you'd like to analyze further.
Manual runs are initiated in Postman or are triggered by the Postman API. Scheduled runs are initiated by the schedule you set when creating or editing your monitor. Webhook runs are initiated by integrations you've created.
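For example, a manual run can be triggered through the Postman API's "Run a monitor" endpoint. The sketch below only builds the request rather than sending it; the monitor ID and API key are placeholders, not real credentials.

```javascript
// Sketch: build a request for the Postman API's "Run a monitor" endpoint.
// Passing the result to fetch() would trigger a manual run of the monitor.
function buildRunMonitorRequest(monitorId, apiKey) {
  return {
    url: `https://api.getpostman.com/monitors/${monitorId}/run`,
    options: {
      method: "POST",
      // Postman API requests authenticate with an X-Api-Key header.
      headers: { "X-Api-Key": apiKey },
    },
  };
}

// Placeholder values for illustration only.
const req = buildRunMonitorRequest("your-monitor-id", "your-postman-api-key");
console.log(req.url);
```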
Each run is labeled based on its result:
- Successful: Your monitor completed the run with no issues and passed all tests.
- Failure: Your monitor completed the run, but one or more tests failed.
- Error: Your monitor was unable to complete its run due to an error. An error can occur if there is a syntax error in the code you've written, a network error, or for various other reasons. If you encounter one, your Console Log will help you identify what caused it.
- Abort: Your monitor was unable to complete its run within the allotted five minutes, at which point it timed out.
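The four labels can be thought of as a simple decision order: timeouts first, then errors, then test failures. The field names below (timedOut, scriptError, failedTests) are hypothetical, used only to illustrate that order; they are not the Postman API's actual response shape.

```javascript
// Illustrative only: how a run's outcome maps to the four result labels.
function labelRun(run) {
  if (run.timedOut) return "Abort"; // couldn't finish within five minutes
  if (run.scriptError) return "Error"; // e.g. a syntax or network error
  if (run.failedTests > 0) return "Failure"; // finished, but tests failed
  return "Successful"; // finished with all tests passing
}

console.log(labelRun({ timedOut: false, scriptError: false, failedTests: 0 })); // Successful
console.log(labelRun({ timedOut: false, scriptError: false, failedTests: 2 })); // Failure
```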
You can filter by run result to compare how your runs with the same result have differed. Open the Run result: All drop-down menu, then select one or more types of run results to view.
You can filter by region to compare how runs within different regions have varied. Open the All Regions drop-down menu, then select a region to view.
This feature is only available if you selected multiple regions when you created or last edited your monitor. To learn more about regions, see Adding regions.
You can filter by mathematical formula to view the average, sum, minimum, and maximum response time for each run:
- Average: The average of the total response time across all regions.
- Sum: The sum of the response time across all regions.
- Minimum: The minimum total response time for a run across all regions.
- Maximum: The maximum total response time for a run across all regions.
Open the Average drop-down menu, then select an option. To view the newly calculated response time, hover over an individual run.
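The four formulas above reduce one run's per-region response times to a single value. A minimal sketch, assuming response times in milliseconds keyed by region (the region names here are examples, not taken from this document):

```javascript
// Compute the four aggregates for one run's per-region response times (ms).
function responseTimeStats(regionTimes) {
  const times = Object.values(regionTimes);
  const sum = times.reduce((a, b) => a + b, 0);
  return {
    average: sum / times.length, // mean across all regions
    sum, // total across all regions
    minimum: Math.min(...times),
    maximum: Math.max(...times),
  };
}

const stats = responseTimeStats({ "us-east": 120, "eu-west": 180, "ap-south": 300 });
console.log(stats); // { average: 200, sum: 600, minimum: 120, maximum: 300 }
```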
You can navigate through past run results to review what happened at a particular point in time. To do so, click Go to in the upper-left corner of the monitor summary or request split graph. Select the time and date, then click Apply to view a specific run.
To revert the view to your most recent runs, select the time and date you defined in the upper-left corner of the graph, then click Reset.
You can view Test Results below the monitor summary to find more detailed information on your tests, including which passed or failed, response codes, and response times.
If your monitor is configured to run in multiple regions, you can view the test results for a particular region by selecting that region from the dropdown to the right of the Test Results.
You can view the Console Log below the monitor summary.
This section logs monitor run details along with the console.log statements that run as part of your pre-request and test scripts. Run details specify the various stages of a monitor run, such as preparing run, running, and rerunning (if applicable), along with the run result and any error and test failure information. Selecting a request in the Console Log opens it in a tab, where you can review or edit the request as needed.
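As a minimal sketch, a test script like the following would write one line to the Console Log on every run. It executes inside Postman's script sandbox, where the pm object is provided by Postman; the status-code check is an example assertion, not one prescribed by this document.

```javascript
// Runs in Postman's script sandbox after the response arrives.
// This console.log line appears in the monitor's Console Log for each run.
console.log("Response time (ms):", pm.response.responseTime);

// A failing pm.test marks the run as a Failure.
pm.test("Status code is 200", function () {
    pm.expect(pm.response.code).to.eql(200);
});
```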
If your monitor is configured to run in multiple regions, you can view the console logs for a particular region by selecting that region from the dropdown to the right of the Console Log tab.
You can use this console to both troubleshoot issues and learn more about an individual run's behavior.
You can view a monitor's activity logs by selecting the clock icon in the upper-right corner, then View activity logs.
You can check these logs to learn when a monitor was created, edited, paused, or resumed running, and which team member performed each action.
You can view details about a monitor by selecting the info (i) icon in the upper-right corner. Here you can view a monitor's ID, creator, creation date and time, collection, environment, and integration options.