Monitoring and performance commands
This topic covers monitoring and performance testing commands for the Postman CLI.
You can use the postman monitor run command to trigger monitor runs within your CI/CD pipeline. You can also use the postman runner start command to run your organization’s APIs from your internal network. The postman performance run command enables you to run performance tests for your collections from your CI/CD pipeline.
postman monitor run
This command runs a monitor in the Postman cloud. Add the command into your CI/CD script to trigger a monitor run during your deployment process. Then your team can use your Postman tests to catch regressions and configuration issues. Learn more at Run a monitor using the Postman CLI.
The command invokes the monitor, polls Postman until the run completes, and then returns the monitor results. Specify the monitor by its monitor ID.
To use this command, sign in to Postman with the postman login command.
You can find the monitor ID in Postman. Click the Services tab, click Monitors in the sidebar, and select a monitor. Then click the Monitor details tab in the right sidebar to view or copy the monitor ID.
Usage
postman monitor run <monitor-id>
<monitor-id> - The unique identifier of the monitor to run.
Options
--timeout <ms> - Specifies the time (in milliseconds) to wait for the entire run to complete.
-x, --suppress-exit-code - Specifies whether to override the default exit code for the current run.
Example
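For example, using the same placeholder ID style as the runner examples later on this page (replace the ID with your own monitor ID):

```shell
postman monitor run 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12
```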
Learn more at Run a monitor using the Postman CLI and Use the Postman CLI with GitHub Actions.
postman runner start
Private API Monitoring is available on Postman Enterprise plans.
With Private API Monitoring, you can use runners to monitor and test your organization’s APIs from your internal network, without publicly exposing your endpoints.
Run this command to start a runner from your internal network that regularly polls Postman for upcoming monitor runs. The collection’s tests run in your internal network. Then the test results are sent back to the Postman cloud, making them available in the monitor results. Provide the runner ID and key from the command you copied when you created the runner. Learn more about setting up a runner in your internal network.
Optionally, you can configure the runner to route HTTP and HTTPS traffic through a proxy server that enforces outbound request policies. You can use the --proxy option to provide the URL for the proxy server used by your organization. Or you can use the --egress-proxy option to enable the built-in proxy and use the --egress-proxy-authz-url option to provide the URL for the runner authorization service that evaluates outbound request policies. Learn more about configuring a runner to use a proxy server.
You can’t use the --proxy and --egress-proxy options together.
To stop a runner running in the foreground, press Ctrl+C (Control+C on macOS). If the runner is running in the background, stop it using your system’s process control.
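If you launched the runner in the background from a shell, one way to stop it is with standard process tools. A sketch; the process match below is an assumption, so adjust it to how you started the runner:

```shell
# Find and terminate the runner process by matching its command line.
pkill -f "postman runner start"
```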
To use this command, sign in to Postman with the postman login command.
Usage
postman runner start --id <runner-id> --key <runner-key> [options]
--id <runner-id> - Specifies the runner ID.
--key <runner-key> - Specifies the runner key that authenticates your runner with the Postman cloud.
Options
--egress-proxy - Runs the runner with the built-in proxy enabled. This option requires --egress-proxy-authz-url.
--egress-proxy-authz-url <url> - Specifies a custom runner authorization service URL. Instead of specifying this option, you can define the URL using the POSTMAN_RUNNER_AUTHZ_URL environment variable. This option is required with --egress-proxy.
--metrics - Runs a metrics server available at the /health/live endpoint. Useful for health checks in orchestration environments, like Kubernetes.
--metrics-port <port> - Specifies a port number where the metrics server can expose health checks and metrics.
--proxy <url> - Specifies your organization’s proxy URL. Instead of specifying this option, you can define the URL using the HTTP_PROXY and HTTPS_PROXY environment variables.
--ssl-extra-ca-certs <path> - Specifies the path to the file with one or more trusted CA certificates in PEM format. Used for custom SSL certificate validation.
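When the runner is started with the metrics server enabled, you can check liveness with a simple HTTP request. A sketch, assuming the runner is local and uses the port shown in the examples below:

```shell
# -f makes curl exit nonzero unless the endpoint returns a success status.
curl -f http://localhost:12044/health/live
```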
Examples
postman runner start --id 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --key 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12
postman runner start --id 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --key 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --proxy http://example.com:8080
postman runner start --id 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --key 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --egress-proxy --egress-proxy-authz-url http://authz.example.com
postman runner start --id 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --key 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --metrics --metrics-port 12044 --ssl-extra-ca-certs /path/to/certs.pem
postman performance run
This command runs performance tests for a collection from your CI/CD pipeline. Specify the collection with its collection ID.
To use this command, sign in to Postman with the postman login command.
Usage
postman performance run <collection-id>
<collection-id> - The unique identifier of the collection to run performance tests against.
Options
Specifies the path to a data file with custom values to use for each virtual user. The file must be in CSV or JSON format. Learn more about using a data file to simulate virtual users.
The duration of the performance test in minutes.
Specifies an environment by its ID. Variables in the collection are resolved from the environment.
Specifies a set of global variables by its ID. Variables in the collection are resolved from the globals.
The load profile type to use for the performance test. Accepts fixed, ramp-up, spike, or peak.
- With fixed, the number of virtual users is constant during the performance test.
- With ramp-up, the number of virtual users gradually increases from 25% to 100%, and then maintains at 100%.
- With spike, the number of virtual users starts at 10%, spikes to 100%, then drops back down to 10%.
- With peak, the number of virtual users gradually increases from 20% to 100%, maintains at 100%, then gradually decreases back down to 20%.
--pass-if <condition> - Specifies a condition that determines whether the performance test passes or fails. The condition must be in the function(metric, value) format.
Functions:
- less_than(metric, value): The test passes if the metric is less than the value.
- less_than_eq(metric, value): The test passes if the metric is less than or equal to the value.
- greater_than(metric, value): The test passes if the metric is greater than the value.
- greater_than_eq(metric, value): The test passes if the metric is greater than or equal to the value.
Metrics:
- avg: The response time of all requests averaged together, in milliseconds.
- p90: The 90th percentile of response times, in milliseconds.
- p95: The 95th percentile of response times, in milliseconds.
- p99: The 99th percentile of response times, in milliseconds.
- error_rate: The percentage of requests with an error. Errors indicate runtime issues such as timeouts, connection or TLS failures, or uncaught exceptions in user scripts.
- rps: The number of requests sent per second.
Examples:
- --pass-if "less_than(p95, 500)": The test passes if the 95th percentile of response times is less than 500 milliseconds.
- --pass-if "less_than_eq(error_rate, 5)": The test passes if the percentage of requests with an error is less than or equal to 5%.
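To make the condition semantics concrete, here is a standalone shell sketch (not a Postman CLI feature) that evaluates less_than(p95, 500) against a hypothetical set of response times. It uses a simple nearest-rank percentile; Postman's exact percentile calculation may differ.

```shell
# Hypothetical response times, in milliseconds.
times="120 340 250 610 180 90 470 520 300 410"

# Nearest-rank p95: sort the samples and take the value at rank
# floor(0.95 * N), clamped to at least 1.
p95=$(printf '%s\n' $times | sort -n | awk \
  '{ a[NR] = $1 } END { i = int(0.95 * NR); if (i < 1) i = 1; print a[i] }')

# Evaluate the condition the same way less_than(p95, 500) would.
if [ "$p95" -lt 500 ]; then result=pass; else result=fail; fi
echo "less_than(p95, 500): $result (p95 = ${p95}ms)"
```

With this sample data the 95th-percentile response time is above the 500 ms threshold, so the condition fails.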
The number of peak virtual users that simulate traffic to your API.
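As a sketch of the data file mentioned above, a CSV where each row supplies variable values for one virtual user (the column names here are hypothetical; use the variable names your collection references):

```csv
username,password
user-1,secret-1
user-2,secret-2
```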
Examples
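An invocation sketch, using the placeholder ID style from the runner examples and the --pass-if option documented above (replace the ID with your own collection ID):

```shell
postman performance run 12345678-12345ab-1234-1ab2-1ab2-ab1234112a12 --pass-if "less_than(p95, 500)"
```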
Learn more at Run a performance test using the Postman CLI.