Connect with the API Catalog using Postman Insights Agent


The Postman Insights Agent monitors your Kubernetes services and automatically populates your API Catalog. Discovery Mode is the recommended way to get started: deploy the agent once, and it finds and registers your services automatically, with no per-service configuration needed. To collect telemetry from your services and their dependencies, the agent relies on W3C tracing headers. If your services propagate these headers, the API Catalog can connect the dots between them and show you a complete service graph.

Deploy with Discovery Mode

Consider the differences between Discovery Mode and Workspace Mode to decide which approach is best for your team. Discovery Mode is the recommended way to get started because it requires minimal setup and automatically discovers services as you deploy them. If you prefer to set up workspaces and environments in Postman first, or if you want more control over which services are registered, then Workspace Mode may be a better fit.

Prefer to configure Postman first?

See Workspace Mode to create a workspace and environment in Postman before deploying.

Choose an approach

| | Discovery Mode | Workspace Mode |
| --- | --- | --- |
| Setup | Deploy once; the agent handles registration. | Configure Postman first, then deploy. |
| Best for | Many microservices, zero-config onboarding. | Per-service control, Postman-first teams. |
| New services | Picked up automatically as they’re deployed. | Must configure each service manually. |
| Required inputs | API key + cluster name. | Workspace ID + system environment ID. |

For sidecar injection, Workspace Mode instructions, advanced filtering, environment variable reference, and troubleshooting, see Insights Agent deployment modes.

Discovery Mode prerequisites

  • A Postman API key with required permissions
  • Kubernetes cluster v1.19+
  • kubectl configured for your cluster

Deploy the Insights Agent in Discovery Mode

To deploy the agent in Discovery Mode, do the following:

  1. Create the namespace and API key secret. The agent authenticates with Postman using your API key. Store it as a Kubernetes Secret so it never appears in plain text in the manifest:

    $ kubectl create namespace postman-insights-namespace

    $ kubectl create secret generic postman-agent-secrets \
        --namespace postman-insights-namespace \
        --from-literal=postman-api-key=<YOUR_POSTMAN_API_KEY>
  2. Download the base DaemonSet manifest, postman-insights-agent-daemonset.yaml.

  3. Edit the manifest for Discovery Mode. The downloaded manifest requires three changes before it works with Discovery Mode. Open the file and apply the following edits:

    • Enable Discovery Mode. Find the args section and add --discovery-mode:

      args:
      - kube
      - run
      - --discovery-mode

      --discovery-mode and --repro-mode serve different purposes and are not alternatives. Keep any other flags in args that your deployment already uses. Just add --discovery-mode to enable service discovery.

    • Set your cluster name. Discovery Mode requires a unique cluster name to identify your services across environments. Add it to the env section:

      env:
      - name: POSTMAN_INSIGHTS_CLUSTER_NAME
        value: "<YOUR_CLUSTER_NAME>" # e.g. "production" or "us-east-staging"

      The cluster name is combined with the namespace and workload name to build a unique service identifier (cluster/namespace/workload). For example, a cluster named production, a namespace payments, and a workload checkout-api yield the identifier production/payments/checkout-api. Choose a name that is stable and unique; changing it later causes services to appear as new entries.

    • Wire the API key. Reference the Secret you created in Step 1. Add this to the env section alongside the cluster name:

      env:
      - name: POSTMAN_INSIGHTS_CLUSTER_NAME
        value: "<YOUR_CLUSTER_NAME>"
      - name: POSTMAN_INSIGHTS_API_KEY
        valueFrom:
          secretKeyRef:
            name: postman-agent-secrets
            key: postman-api-key

      After all three changes, the relevant part of your manifest should look like this:

      containers:
      - name: postman-insights-agent
        image: public.ecr.aws/postman/postman-insights-agent:latest
        args:
        - kube
        - run
        - --discovery-mode
        env:
        - name: POSTMAN_INSIGHTS_CLUSTER_NAME
          value: "<YOUR_CLUSTER_NAME>"
        - name: POSTMAN_INSIGHTS_API_KEY
          valueFrom:
            secretKeyRef:
              name: postman-agent-secrets
              key: postman-api-key
        - name: POSTMAN_INSIGHTS_K8S_NODE
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POSTMAN_INSIGHTS_CRI_ENDPOINT
          value: /var/run/containerd/containerd.sock
  4. Apply the manifest.

    $ kubectl apply -f postman-insights-agent-daemonset.yaml
  5. Verify the deployment.

    Check that the agent pods are running on each node:

    $ kubectl get pods -n postman-insights-namespace

    Then, inspect the logs to confirm services are being discovered:

    $ kubectl logs -n postman-insights-namespace -l name=postman-insights-agent --tail=50

    You can expect to see log lines indicating pods are being discovered and services are being registered with Postman. If pods aren’t appearing, see Troubleshooting.

  6. Complete onboarding in Postman. From Home, click API Catalog > Service Discovery > Postman Insights Catalog. Then, select the service and the environment to integrate into API Catalog.

    The Postman API key must belong to a user with write access to target workspaces (workspace Admin or Super Admin). The agent creates and links applications on behalf of this user.

    Discovered services have a traffic capture window. If a service isn’t onboarded in the Postman app within that window, the agent pauses capture for that service. Completing onboarding lifts the restriction. See Traffic TTL for details.

Optional: Limit discovery to specific namespaces

By default, the agent discovers pods in all namespaces except a built-in exclusion list (for example, Kubernetes system, monitoring, GitOps tools). To capture only specific namespaces, add --include-namespaces to the args section in the manifest:

args:
- kube
- run
- --discovery-mode
- --include-namespaces=production,staging

For the full list of filtering options — namespace exclusions, label filtering, per-pod opt-out — see the Pod Filtering reference.

Connect requests across services

Installing the Insights Agent collects telemetry from your services. To connect requests across services (service graph edges, trace discovery, end-to-end views), those services must propagate W3C tracing headers on incoming and outgoing HTTP requests. If outbound calls omit traceparent, downstream services appear disconnected from the same trace, and graph discovery is incomplete.

The agent doesn’t replace your application’s responsibility to forward trace context on each outbound call your code makes (or to start a root trace when there is no inbound header).

Tracing headers and the service graph

Tracing headers carry a distributed trace identifier across HTTP calls so that every hop in a request chain shares the same trace ID. The widely used standard is W3C Trace Context, which defines headers such as traceparent (and, optionally, tracestate) that describe the trace, the parent span, and sampling flags. For more information, see the W3C Trace Context specification.

When Service A calls Service B, Service B should receive the same trace context Service A is using. That continuity is what enables observability tools to relate spans, discover dependencies, and build a service graph from real traffic, and not just from a single process.

traceparent format

The traceparent header has a specific format defined by the W3C specification. It consists of four parts: version, trace ID, parent ID, and flags.

Format: version-trace_id-parent_id-flags

Example: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01, with the following definitions:

  • version - 2 hex chars; currently 00.

  • trace_id - 32 hex chars, shared by the whole trace.

  • parent_id - 16 hex chars, the parent span for this hop.

  • flags - 2 hex chars; for example, 01 means sampled.

Validate your headers

Before trusting an incoming header, validate the field lengths and hex characters according to the W3C Trace Context rules.
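As a quick check, here’s a minimal Python sketch that validates a version-00 traceparent value against these rules. The function name is illustrative, not part of any Postman tooling:

```python
import re

# Matches a version-00 traceparent: version-trace_id-parent_id-flags,
# with lowercase hex and fixed field lengths per W3C Trace Context.
_TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def is_valid_traceparent(value: str) -> bool:
    m = _TRACEPARENT_RE.match(value)
    if not m:
        return False
    trace_id, parent_id, _flags = m.groups()
    # All-zero trace or parent IDs are forbidden by the spec.
    return trace_id != "0" * 32 and parent_id != "0" * 16
```

For example, `is_valid_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")` returns `True`, while an all-zero trace ID or uppercase hex fails validation.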

Inject and propagate headers (pseudocode)

The pattern of header propagation is the same in every language: Read on entry -> Preserve the trace ID -> Assign this service’s span ID -> Set traceparent on every outbound HTTP client request.

To implement this in your application, do the following:

  1. On each incoming HTTP request to your service, read the traceparent header if it exists. If it’s valid, keep the same trace ID and use the parent ID as the caller’s span. If it’s missing or invalid, start a new trace with a new trace ID and no parent span.

    on_request(request):
        header = request.headers["traceparent"]   // may be absent
        if header is valid W3C traceparent:
            trace_id = parsed.trace_id            // keep the same trace across the chain
            parent_span_id = parsed.parent_id     // caller's span (your parent)
            flags = parsed.flags                  // e.g. "01" = sampled
        else:
            trace_id = new_random_trace_id()
            parent_span_id = null
            flags = "01"
        my_span_id = new_random_span_id()
        store trace_id, my_span_id, flags in request context
        // optional: set response header traceparent for debugging
  2. On each outgoing HTTP call to another service (for example, Service A calling Service B), set the traceparent header with the same trace ID, this service’s span ID as the parent, and any relevant flags. Use your HTTP client’s hooks (middleware/interceptors/wrappers) to do this for every outbound call.

    before_send_outbound_http(client_request):
        ctx = current_request_context()
        traceparent_value = format_traceparent(
            version   = "00",
            trace_id  = ctx.trace_id,
            parent_id = ctx.my_span_id,   // this hop becomes the parent of the next
            flags     = ctx.flags
        )
        client_request.headers["traceparent"] = traceparent_value
        send(client_request)
    Use your HTTP client hooks

    Use your framework’s HTTP client hooks (middleware, interceptors, or wrappers) so every outbound call gets the header, including SDKs and internal libraries, not just hand-written requests.

  3. (Optional) Use tracestate if you want to encode extra vendor-specific data, but ensure you still forward the traceparent header on every hop.

    If your platform or vendor encodes extra data in tracestate, parse it on ingress and forward it unchanged on egress when you forward traceparent, per the W3C rules. Many setups work with traceparent alone.
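The steps above can be sketched in Python roughly as follows. The helper names (`extract_context`, `inject_context`) and the plain-dict headers are illustrative, not part of any Postman SDK; a real service would call these from its server middleware and HTTP client hooks:

```python
import re
import secrets

# Version-00 traceparent: version-trace_id-parent_id-flags (lowercase hex).
_TP_RE = re.compile(r"^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def extract_context(incoming_headers: dict) -> dict:
    """On entry: keep the caller's trace ID if the header is valid, else start a new trace."""
    m = _TP_RE.match(incoming_headers.get("traceparent", ""))
    if m and m.group(2) != "0" * 32 and m.group(3) != "0" * 16:
        trace_id, flags = m.group(2), m.group(4)
    else:
        trace_id, flags = secrets.token_hex(16), "01"  # new 32-hex-char trace ID, sampled
    return {
        "trace_id": trace_id,
        "span_id": secrets.token_hex(8),  # this service's 16-hex-char span ID
        "flags": flags,
    }

def inject_context(ctx: dict, outgoing_headers: dict) -> None:
    """On exit: this hop's span becomes the parent of the next hop."""
    outgoing_headers["traceparent"] = (
        f"00-{ctx['trace_id']}-{ctx['span_id']}-{ctx['flags']}"
    )
```

Calling `extract_context` once per inbound request and `inject_context` on every outbound call preserves the trace ID across the chain while each service contributes its own span ID.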

Practical options

  • OpenTelemetry (recommended) — Configure the W3C propagator so traces use traceparent/tracestate automatically for supported HTTP servers and clients.

  • API gateways/proxies — Ensure they preserve or inject trace context so the first app hop still participates in the same trace.

  • Non-HTTP work — Message queues and background jobs need the same trace ID carried in message metadata if you want those edges in the graph; HTTP header names alone don’t apply there.
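As an illustration of the OpenTelemetry option, this sketch pins the global propagator to W3C Trace Context using the opentelemetry-api package. In recent OpenTelemetry Python versions this propagator is already the default, so the call is usually a no-op safeguard; it assumes the package is installed:

```python
from opentelemetry.propagate import set_global_textmap
from opentelemetry.trace.propagation.tracecontext import (
    TraceContextTextMapPropagator,
)

# Force traceparent/tracestate propagation for all instrumented
# HTTP servers and clients in this process.
set_global_textmap(TraceContextTextMapPropagator())
```

With this in place, OpenTelemetry’s HTTP instrumentations read and write traceparent on every hop without per-request code in your application.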