Viewing monitor results
Your Postman Dashboard allows you to track the health and performance of your APIs. With the Dashboard, you can stay up to date on what's happening across all monitors in your workspace and dive into individual monitors to examine test results and performance over time.
Each workspace has its own monitoring space within the Postman Dashboard, which you can navigate to by opening your Dashboard and selecting your workspace > Monitors. Monitors in team workspaces are visible to all members of the workspace.
The Dashboard provides a high-level overview of the monitors you have available in your workspace, including status, average success rate, and average response time within the given timeframe.
A Healthy status indicates there were no failures in any of the runs during the specified timeframe. Failures will be noted here, as well as changes in the average success rates and response times.
Hover over a monitor in the list to run it outside of its predetermined schedule by selecting ▶. To pause, resume, edit, or delete a monitor, select the ... icon.
You can view each monitor in more detail by selecting it from the Dashboard.
You can use the Monitor Summary to see how your APIs have performed over time. Each monitor run is represented by a bar in the graph.
The upper section charts your monitor's average response time for each run, while the lower section visualizes the number of failed tests for each run across all regions. To view the exact response time and failure percentage for a run, hover over its bar.
A red bar indicates that either tests failed or errors occurred during the run. For more information, view your Console Log.
You can use Request Split to see how the response time varies across all requests made in a given run. To break this down into individual requests, you can use Filters.
You can use filters to identify recurring patterns in your monitoring runs by selecting particular requests, run types, results, and regions (if applicable).
You can Clear Filters to return to your original dashboard view.
You can filter by request to compare an individual request's response time across runs. Open the All Requests drop-down menu next to Filter by, then select a request.
You can filter by run type to compare how the response time changes between manual runs, scheduled runs, and webhook runs. Open the Type: All drop-down menu, then select the type of run you'd like to analyze further.
Manual runs are initiated in the Postman Dashboard or are triggered by the Postman API. Scheduled runs are initiated by the schedule you set when creating or editing your monitor. Webhook runs are initiated by integrations you've created.
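For example, a manual run can be triggered programmatically through the Postman API. The sketch below builds such a request, assuming the monitor-run endpoint (`POST https://api.getpostman.com/monitors/{id}/run`, authenticated with an `X-Api-Key` header); the monitor ID and key are placeholders you supply yourself, and no request is actually sent here:

```javascript
// Sketch: constructing a manual monitor-run request for the Postman API.
// The monitor ID and API key below are placeholders, not real values.
function buildMonitorRunRequest(monitorId, apiKey) {
  return {
    url: `https://api.getpostman.com/monitors/${monitorId}/run`,
    options: {
      method: "POST",
      headers: { "X-Api-Key": apiKey },
    },
  };
}

// Example usage — pass the result to fetch(req.url, req.options) to send it:
const req = buildMonitorRunRequest("1e6b6cc1-example-id", "PMAK-your-key");
console.log(req.url);
```

Runs triggered this way appear in the Dashboard labeled as manual runs, alongside your scheduled and webhook runs.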
Each run is labeled based on its result:
- Successful: Your monitor completed the run with no issues and passed all tests.
- Failure: Your monitor completed the run, but one or more tests failed.
- Error: Your monitor was unable to complete its run due to an error. An error can occur if there is a syntax error in the code you've written, a network error, or for various other reasons. If you encounter one, your Console Log will help you identify what caused it.
- Abort: Your monitor was unable to complete its run within the allotted five minutes, at which point it timed out.
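The labels above can be summarized as a simple decision rule. This is an illustration only, not Postman's internal code; the function and field names are hypothetical:

```javascript
// Illustration: how a run's outcome maps to the result labels above.
// All field names here are hypothetical, for explanation only.
function classifyRun({ completed, failedTests = 0, timedOut = false, errored = false }) {
  if (timedOut) return "Abort";      // run exceeded the allotted five minutes
  if (errored || !completed) return "Error"; // script, network, or other error
  return failedTests > 0 ? "Failure" : "Successful";
}
```

For instance, a run that finishes with two failing tests is labeled a Failure, while a run stopped by a script error is labeled an Error regardless of its tests.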
You can filter by run result to compare runs that ended the same way. Open the Run result: All drop-down menu, then select one or more result types to view.
You can filter by region to compare how runs in different regions have varied. Open the All Regions drop-down menu, then select a region to view.
This feature is only available if you selected multiple regions when you created or last edited your monitor. To learn more about regions, see Adding regions.
You can also change how response times are aggregated across regions, viewing the average, sum, minimum, or maximum for each run:
- Average: The average of the total response time across all regions.
- Sum: The sum of the response time across all regions.
- Minimum: The minimum total response time for a run across all regions.
- Maximum: The maximum total response time for a run across all regions.
Open the Average drop-down menu, then select an option. To view the recalculated response time value, hover over each run individually.
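The four aggregations are straightforward to sketch. Here the region names and response times are made-up sample data, used only to show how one run's per-region times reduce to a single value:

```javascript
// Sample per-region response times (ms) for a single monitor run.
// Region names and values are illustrative, not real monitor data.
const regionTimes = { "us-east": 120, "eu-west": 180, "ap-south": 150 };

const values = Object.values(regionTimes);
const sum = values.reduce((a, b) => a + b, 0);   // total across all regions
const average = sum / values.length;              // mean across all regions
const minimum = Math.min(...values);              // fastest region
const maximum = Math.max(...values);              // slowest region

console.log({ average, sum, minimum, maximum });
// → { average: 150, sum: 450, minimum: 120, maximum: 180 }
```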
You can navigate through past run results to review what happened at a particular point in time. To do so, click Go to in the upper-left corner of the monitor summary or request split graph. Select the time and date, then click Apply to view a specific run.
To revert the view to your most recent runs, select the time and date you defined in the upper-left corner of the graph, then click Reset.
You can view Test Results below the monitor summary to find more detailed information on your tests, including which passed or failed, response codes, and response times.
If your monitor is configured to run in multiple regions, you can view the test results for a particular region by selecting that region from the dropdown to the right of the Test Results tab.
You can view the Console Log below the monitor summary.
This section logs monitor run details along with the console.log statements that run as part of your pre-request and test scripts. Run details specify the various stages of a monitor run, such as preparing the run, running, rerunning (if applicable), and the run result, along with error and test failure information.
If your monitor is configured to run in multiple regions, you can view the console logs for a particular region by selecting that region from the dropdown to the right of the Console Log tab.
You can use this console to both troubleshoot issues and learn more about an individual run's behavior.
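As a sketch of what surfaces in the Console Log, the snippet below mimics a test script that logs response details. The `pm` object here is a minimal stub so the example runs standalone; inside a real Postman script, the sandbox provides `pm` and the `console.log` output appears in this view:

```javascript
// Stub of Postman's pm object, for illustration only — a real monitor
// run supplies pm and captures console.log output in the Console Log.
const logLines = [];
const pm = {
  response: { code: 200, responseTime: 134 },
  test(name, fn) {
    try { fn(); logLines.push(`PASS: ${name}`); }
    catch (e) { logLines.push(`FAIL: ${name} (${e.message})`); }
  },
};

// Log lines like these are what you would see in the Console Log:
console.log("Status:", pm.response.code, "| time (ms):", pm.response.responseTime);

pm.test("response is OK", () => {
  if (pm.response.code !== 200) throw new Error("unexpected status");
});
logLines.forEach((line) => console.log(line));
```

Logging the values a test depends on, alongside the test itself, makes a failed run much easier to diagnose from the Console Log alone.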
You can view a monitor's activity logs by selecting the ... icon in the upper-right corner, then View activity logs.
You can check these logs to learn when a monitor was created, edited, paused, or resumed running, and which team member performed each action.