Use datasets in Postman
Datasets are available on Postman Solo, Team, and Enterprise plans. For more information, see the pricing page.
After you create a dataset, you can use it across your API workflows in Postman. You can run data-driven collection tests, power dynamic mock server responses, and validate API responses in scripts. Datasets enable you to reuse the same data across workflows and work with consistent, queryable data instead of duplicating or hardcoding values.
Datasets are only supported in Local View on the Postman desktop app.
Use datasets in collection runs
You can use datasets as iteration data when manually running a collection. The data is organized into fields (columns) you can reference in your requests using variables. Each row is used to run your requests with different inputs. The number of iterations is determined by the number of rows returned by the selected view.
Use the following steps to use datasets in your collection runs:

1. Create a dataset with a local file data source that includes userId, email, and name fields in CSV format.
2. Click Items in the sidebar.
3. Click Collections and select the collection you want to run against the dataset.
4. Reference the fields in your requests using variables that match the field names in your dataset.
5. Select the collection again and click Run in the upper right.
6. Select Run manually.
7. Configure the collection run settings as needed, such as the delay.
8. Under Dataset, select the dataset you'd like to use for the run. Then select a view with the data you want to run against. You can click Create a new dataset or Create a new view to create and select a dataset or view without leaving the collection run configuration.
9. (Optional) Click the Data tab in the left pane to preview and edit the view you selected.
10. Click Start run.
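The CSV data source described above might look like the following (the values are illustrative):

```csv
userId,email,name
u-001,ada@example.com,Ada Lovelace
u-002,grace@example.com,Grace Hopper
u-003,alan@example.com,Alan Turing
```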
During the run, Postman assigns values from each row to your variables, so each request runs with a different set of data. You can also view the data used for each iteration in the collection run summary.
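Conceptually, each iteration substitutes one row's values into the request's variables. The following plain JavaScript sketch illustrates the idea; the field names and URL template are example values, and Postman's own variable resolution is internal to the app:

```javascript
// Illustrative only: one row per iteration resolves {{variable}}
// placeholders. The rows and URL template are example values.
const rows = [
  { userId: "u-001", email: "ada@example.com", name: "Ada Lovelace" },
  { userId: "u-002", email: "grace@example.com", name: "Grace Hopper" },
];

const urlTemplate = "https://api.example.com/users/{{userId}}";

// Replace each {{field}} placeholder with the value from the current row.
function resolve(template, row) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, field) => row[field] ?? "");
}

// One iteration per row, as in a data-driven collection run.
const resolvedUrls = rows.map((row) => resolve(urlTemplate, row));
console.log(resolvedUrls);
// ["https://api.example.com/users/u-001", "https://api.example.com/users/u-002"]
```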
Learn more about using variables and manually running collections.
Use datasets in mock servers
You can use the pm.datasets function in a local mock server to return dynamic responses based on queryable data. This enables you to use the same dataset across requests, filter data for specific endpoints, and simulate more realistic API behavior instead of returning only static responses.
Use the following steps to use a dataset in a local mock server:

1. Create a dataset with a local file data source that includes userId, email, and name fields in CSV format.
2. In your local mock server implementation file, load the dataset using pm.datasets().
3. Run a query against the dataset in your request handler and return the matching row in the response.
4. Start the mock server and send a request to the endpoint. For example, you can send a GET request to the following:
The mock server queries the dataset at runtime and returns the matching data in the response. You can also use views with executeView() to reuse predefined queries across endpoints.
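The request-handler query described above amounts to filtering the loaded rows and returning the match. The sketch below uses plain JavaScript with inlined example data so it is self-contained; inside Postman, the rows would come from pm.datasets(), whose exact call signature is documented by Postman:

```javascript
// Illustrative handler logic for a local mock server. In Postman, `rows`
// would be loaded from the dataset via pm.datasets(); here the data is
// inlined so the sketch stands on its own.
const rows = [
  { userId: "u-001", email: "ada@example.com", name: "Ada Lovelace" },
  { userId: "u-002", email: "grace@example.com", name: "Grace Hopper" },
];

// Query the rows for the requested user and build the response.
function handleGetUser(userId) {
  const match = rows.find((row) => row.userId === userId);
  if (!match) {
    return { status: 404, body: { error: "User not found" } };
  }
  return { status: 200, body: match };
}

console.log(handleGetUser("u-002").body.name); // "Grace Hopper"
console.log(handleGetUser("u-999").status);    // 404
```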
Learn more about local mock servers and managing and using datasets in scripts.
Use datasets in scripts
You can use the pm.datasets function in post-response scripts to validate response data against queryable data stored in a dataset. This enables you to compare API responses with expected values, test multiple scenarios, and reuse the same data across requests and workflows.
Use the following steps to use a dataset in a post-response script:

1. Create a dataset with a local file data source that includes userId, email, and name fields in CSV format.
2. Send a request that returns user data, such as:
3. In the request's Scripts > Post-response tab, load the dataset and query it using a value from the response.
When the request runs, the script queries the dataset at runtime and compares the response data with the matching row. You can also use executeView() to validate responses against a predefined view instead of writing a custom query in the script.
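The validation described above amounts to looking up the expected row by a value from the response and comparing fields. A self-contained sketch of that logic in plain JavaScript follows; in Postman, the expected row would come from a dataset query or executeView(), the response body from pm.response.json(), and the checks would typically use pm.test and pm.expect:

```javascript
// Illustrative post-response validation logic. In Postman, `expectedRows`
// would come from a dataset query and `response` from pm.response.json();
// both are inlined here so the sketch is runnable on its own.
const expectedRows = [
  { userId: "u-001", email: "ada@example.com", name: "Ada Lovelace" },
  { userId: "u-002", email: "grace@example.com", name: "Grace Hopper" },
];

const response = { userId: "u-001", email: "ada@example.com", name: "Ada Lovelace" };

// Look up the expected row using a value from the response, then compare.
function validate(resp) {
  const expected = expectedRows.find((row) => row.userId === resp.userId);
  if (!expected) return false;
  return expected.email === resp.email && expected.name === resp.name;
}

console.log(validate(response)); // true
```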
Learn more about writing post-response scripts and managing and using datasets in scripts.