Viewing monitor results
Your Postman Dashboard allows you to track the health and performance of your APIs. With the Dashboard, you can stay up to date on what's happening across all monitors in your workspace and dive into individual monitors to examine test results and performance over time.
- Viewing monitors in the Dashboard
- Next steps
Each workspace has its own monitoring space within the Postman Dashboard, which you can navigate to by opening your Dashboard and selecting your workspace > Monitors. Monitors in team workspaces are visible to all members of the workspace.
The Dashboard provides a high-level overview of the monitors in your workspace, including each monitor's status, average success rate, and average response time within the selected timeframe.
A Healthy status indicates there were no failures in any of the runs during the specified timeframe. Failures will be noted here, as well as changes in the average success rates and response times.
Hover over a monitor in the list and click ▶ to run it outside of its regular schedule. To pause, resume, edit, or delete a monitor, select the ... icon.
You can view each monitor in more detail by selecting it from the Dashboard.
You can use the Monitor Summary to see how your APIs have performed over time. Each monitor run is represented by a bar in the graph.
The upper section charts your monitor's average response time for each run, while the lower section visualizes the number of failed tests for each run across all regions. To view a run's exact response time and percentage of failed tests, hover over it in the graph.
You can use Request Split to see how the response time varies across all requests made in a given run. To break this down into individual requests, use Filters.
You can use filters to identify recurring patterns in your monitoring runs by selecting particular requests, run types, results, and regions (if applicable).
Select Clear Filters to return to your original Dashboard view.
You can filter by request to compare an individual request's response time across runs. Open the All Requests drop-down menu next to Filter by, then select your request.
You can filter by run type to compare how the response time changes between manual runs, scheduled runs, and webhook runs. Open the Type: All drop-down menu, then select the type of run you'd like to analyze further.
Manual runs are initiated in the Postman Dashboard or are triggered by the Postman API. Scheduled runs are initiated by the schedule you set when creating or editing your monitor. Webhook runs are initiated by integrations you've created.
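As one way to start a manual run programmatically, the Postman API exposes a run endpoint for monitors. Below is a minimal Node.js sketch (18+, ESM), assuming your monitor's UID and a Postman API key are stored in the hypothetical MONITOR_UID and POSTMAN_API_KEY environment variables:

```js
// Minimal sketch: trigger a monitor run through the Postman API.
// MONITOR_UID and POSTMAN_API_KEY are placeholder environment variable
// names you would set yourself. Requires Node.js 18+ for built-in fetch.
const url = `https://api.getpostman.com/monitors/${process.env.MONITOR_UID}/run`;

const response = await fetch(url, {
  method: "POST",
  headers: { "X-Api-Key": process.env.POSTMAN_API_KEY },
});

// The response body describes the run; log it for inspection.
const body = await response.json();
console.log(JSON.stringify(body, null, 2));
```

A run triggered this way is counted as a manual run and appears in the Dashboard alongside scheduled runs.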
Each run is labeled based on its result:
- Successful: Your monitor completed the run with no issues and passed all tests.
- Failure: Your monitor completed the run, but one or more tests failed.
- Error: Your monitor was unable to complete its run due to an error, such as a syntax error in a script, a network issue, or various other causes. If you encounter one, the Console Log will help you identify the cause (see the sketch after this list).
- Abort: Your monitor was unable to complete its run within the allotted five minutes, at which point it timed out.
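To make the Failure and Error labels concrete, here is a sketch of a monitor test script; the test name is illustrative, and the commented-out line shows the kind of mistake that produces an Error instead of a Failure:

```js
// If this assertion fails, the run still completes and is labeled a Failure.
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

// console.log output like this appears in the monitor's Console Log,
// which is useful when diagnosing unexpected results.
console.log("Response time (ms):", pm.response.responseTime);

// An unhandled exception, such as referencing an undefined variable,
// stops the script and labels the run an Error rather than a Failure:
// console.log(undeclaredVariable.someProperty);
```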
You can filter by run result to compare how runs with the same result have differed. Open the Run result: All drop-down menu, then select one or more result types to view.
You can filter by region to compare how runs in different regions have varied. Open the All Regions drop-down menu, then select a region to view.
This feature is only available if you selected multiple regions when you created or last edited your monitor. To learn more about regions, see Adding regions.
You can filter by mathematical formula to view the average, sum, minimum, and maximum response time for each run:
- Average: The average of the total response time across all regions.
- Sum: The sum of the response time across all regions.
- Minimum: The minimum total response time for a run across all regions.
- Maximum: The maximum total response time for a run across all regions.
Open the Average drop-down menu, then select an option. To view the newly calculated response time, hover over each run individually.
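As a quick illustration of how the four formulas relate, the sketch below computes each one over made-up per-region response times for a single run; none of these numbers come from a real monitor:

```js
// Hypothetical response times (ms) for one run across four regions.
const regionResponseTimes = [120, 340, 95, 210];

const sum = regionResponseTimes.reduce((total, t) => total + t, 0); // 765
const average = sum / regionResponseTimes.length;                   // 191.25
const minimum = Math.min(...regionResponseTimes);                   // 95
const maximum = Math.max(...regionResponseTimes);                   // 340

console.log({ average, sum, minimum, maximum });
```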
You can navigate through past run results to review what happened at a particular point in time. To do so, click Go to in the upper-left corner of the monitor summary or request split graph. Select the time and date, then click Apply to view a specific run.
To revert the view to your most recent runs, select the time and date you defined in the upper-left corner of the graph, then click Reset.
You can view Test Results below the monitor summary to find more detailed information on your tests, including which passed or failed, response codes, and response times.
If your monitor is configured to run in multiple regions, you can view the test results for a particular region by selecting that region from the drop-down menu to the right of the Test Results tab.
You can view the Console Log below the monitor summary.
This section logs monitor run details along with the console.log statements that run as part of your pre-request and test scripts. Run details specify the various stages of a monitor run, such as preparing run, running, rerunning (if applicable), and the run result, along with error and test failure information.
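For example, a pre-request script along these lines would surface its console.log output here; the variable name is illustrative:

```js
// Record when the request was prepared and log it to the Console Log.
const runStartedAt = new Date().toISOString();
pm.variables.set("runStartedAt", runStartedAt);
console.log("Preparing request at", runStartedAt);
```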
If your monitor is configured to run in multiple regions, you can view the console logs for a particular region by selecting that region from the drop-down menu to the right of the Console Log tab.
You can use this console to both troubleshoot issues and learn more about an individual run's behavior.