
Multi-step API custom scripting

The API monitoring type has many useful built-in features: performing checks (using Assertions), temporarily storing values for later reuse (using Variables), transforming values (using System functions), and even adding your own logic (using User-defined functions). These features make the API monitor a powerful, no-coding-required solution.

While a no-code approach goes a long way in building useful monitoring setups, it may not always let you go as deep as you want. Particularly for doing in-depth functional correctness tests, you may need to add custom logic that just cannot be expressed in a UI-based setup. You need a scripting language to be truly expressive and descriptive about what you expect from your APIs. The API monitor type lets you do this!

In fact, you can mix classic no-code features like Assertions and Variables with their scripting counterparts. If you have existing API monitors that use Assertions and Variables but want to benefit from scripting, you don’t have to rewrite your monitors from scratch. You can start adding little bits of script and have them work alongside your existing checks and variables.

Two script editors: Pre-Request and Post-Response

An API monitor can have a single step, or multiple steps executed in sequence. But each step (with the exception of Wait steps) consists of a preparation part that sets up the HTTP request for that step, and a verification part that processes the HTTP response that comes back from the API. Both parts can have their own script:

  • The Pre-Request script is executed before the HTTP request is fully built and performed.
    Therefore, the Pre-Request script is very useful for preparing and calculating any values that you want to include in the request, such as URL parameters, request headers or body content.

  • The Post-Response script is executed after the corresponding HTTP response is fully received, and after any Assertions and Variables in the Response tab have been processed.
    The Post-Response script is where you perform your custom logic to check response headers, check completeness and consistency of your content, and use that content to prepare for any next steps.
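The division of labor between the two scripts can be sketched as follows. The `ut` stub, the endpoint URL, and the response payload below are illustrative stand-ins so the snippet can be run locally; in a real monitor, `ut` is provided by the runtime and each script goes in its own tab.

```javascript
// Minimal stand-in for the runtime-provided ut object (illustration only).
const ut = {
  request: { url: "", method: "GET", body: "" },
  response: {
    code: 200,
    json: () => ({ items: [1, 2, 3] }), // sample payload
  },
  log: (text) => console.log(text),
};

// --- Pre-Request script: prepare values before the request is sent ---
const today = new Date().toISOString().slice(0, 10); // e.g. "2024-01-31"
ut.request.url = `https://api.example.com/orders?date=${today}`; // hypothetical endpoint

// --- Post-Response script: inspect the response after it arrives ---
const payload = ut.response.json();
ut.log(`Received ${payload.items.length} items with status ${ut.response.code}`);
```

Note that the Pre-Request script only shapes the request; the actual HTTP call and the classic Assertions and Variables run in between the two scripts.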

As displayed in the following screenshot, both the Pre-Request and Post-Response scripts have their own tab in the API monitor editor. Each has its own code editor with line numbering, code highlighting and code completion, and a snippets panel right next to it. Click any snippet to insert it into the code window to get you started.

[Screenshot: the Request, Pre-Request, Response and Post-Response tabs in Step 1]

JavaScript with monitoring extensions

The Pre-Request and Post-Response scripts in the API monitor configuration allow you to execute JavaScript code. In addition to the full range of capabilities that standard JavaScript offers, special functions are available to access the data relevant to setting up requests (during Pre-Request) and processing responses (during Post-Response), to perform tests on that data, to execute log statements (for debugging or informational purposes), and to store calculated data as custom metrics.

These functions are available through an object called ut. The chapters below give a full description of each available function and attribute in the ut object. But let’s look at the overall structure first:

  • ut.request and ut.response give access to the API request and API response objects, the most important objects in each step.

  • ut.variables is the collection of variables that you can use across the entire API scenario, across all steps. You use it to pass values from one step to the next. If you create predefined variables, they will be populated in this variables collection. Any classic (no-code) variables you use in the Response tab will also interact with this same variables collection.

  • ut.log() is a helper function that outputs text to a log window. It is useful for temporarily writing text and values to the log while you are writing or debugging your scripts.

  • ut.test() is the main function for capturing test output. Any test output you define inside each ut.test() call will be captured and listed as an Assertion result, right next to any classic (no-code) assertions you define.

  • ut.customMetrics is a collection that you can populate during your scripts with numeric values, either taken directly from an API response or calculated, that you want to capture as your own metric. Each value will be displayed in the monitor check details for each measurement, and can also be listed and charted in dashboards.


Request object

The ut.request object exposes attributes that describe the definition of the API request in the current step.

Attributes of ut.request:

  • .url (get or set the request’s URL)

  • .method (get or set the request’s HTTP method, e.g. GET, POST, etc.)

  • .body (get or set a raw text version of the request body)

Request headers

Functions of ut.request.headers:

  • .has(header): returns whether the header exists

  • .get(header): returns the value of the header, or an empty string if it doesn’t exist

  • .add(header, value): adds the header with the specified value
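Putting the request attributes and header functions together, a Pre-Request script could turn a step into a JSON POST like this. The `ut` stub and the endpoint are illustrative; in a monitor, the real `ut.request` is already populated from the Request tab.

```javascript
// Illustrative stub mirroring the documented ut.request surface.
const headerStore = {};
const ut = {
  request: {
    url: "https://api.example.com/orders", // hypothetical endpoint
    method: "GET",
    body: "",
    headers: {
      has: (header) => header in headerStore,
      get: (header) => headerStore[header] ?? "",
      add: (header, value) => { headerStore[header] = value; },
    },
  },
};

// Pre-Request script: switch the method, build the body, set a header.
ut.request.method = "POST";
ut.request.body = JSON.stringify({ productId: 42, quantity: 1 });
if (!ut.request.headers.has("Content-Type")) {
  ut.request.headers.add("Content-Type", "application/json");
}
```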


Response object

Attributes of ut.response:

  • .code (gets the numeric HTTP response status code, e.g. 200)

  • .status (gets the HTTP status description, e.g. OK)

  • .responseSize (gets the size of the response in bytes)

Functions of ut.response:

  • .text(): returns a raw text version of the response body

  • .json(): returns an object by parsing the response text as JSON
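A typical Post-Response script reads these attributes and parses the body once. The `ut` stub and sample payload below are illustrative; in a monitor, `ut.response` holds the actual HTTP response of the step.

```javascript
// Illustrative stub mirroring the documented ut.response surface.
const rawBody = '{"status":"ok","items":[{"id":1},{"id":2}]}';
const ut = {
  response: {
    code: 200,
    status: "OK",
    responseSize: rawBody.length,
    text: () => rawBody,
    json: () => JSON.parse(rawBody),
  },
};

// Post-Response script: parse the body and derive values to check.
const data = ut.response.json();
const itemCount = data.items.length;
const isOk = ut.response.code === 200 && data.status === "ok";
```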

Response headers

Functions of ut.response.headers:

  • .has(header): returns whether the header exists

  • .get(header): returns the value of the header, or an empty string if it doesn’t exist


Variables

Functions of ut.variables:

  • .has(key): returns whether this variable exists

  • .get(key): returns the value of the variable, or an empty string if it doesn’t exist

  • .set(key, value): creates the variable if necessary and stores the specified value
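Variables are the bridge between steps: a Post-Response script of one step stores a value, and a Pre-Request script of a later step reads it back. The `ut` stub and the token value are illustrative; in a monitor, the token would typically be extracted from a real response.

```javascript
// Illustrative stub of ut.variables, backed by a plain Map.
const store = new Map();
const ut = {
  variables: {
    has: (key) => store.has(key),
    get: (key) => store.get(key) ?? "",
    set: (key, value) => store.set(key, value),
  },
};

// Post-Response script of step 1: remember a value for later steps.
ut.variables.set("authToken", "abc123"); // would come from the response body

// Pre-Request script of step 2: reuse the stored value.
const token = ut.variables.has("authToken") ? ut.variables.get("authToken") : "";
```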

Custom metrics

Functions of ut.customMetrics:

  • .get(key): returns the value of the custom metric, or an empty string if it doesn’t exist

  • .set(key, value): stores the value of the custom metric
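For example, a Post-Response script might record a count and a sum taken from the response as custom metrics. The `ut` stub and sample payload are illustrative stand-ins for the runtime objects.

```javascript
// Illustrative stubs of ut.customMetrics and ut.response.
const metrics = {};
const ut = {
  customMetrics: {
    get: (key) => metrics[key] ?? "",
    set: (key, value) => { metrics[key] = value; },
  },
  response: { json: () => ({ items: [10, 20, 30] }) }, // sample payload
};

// Post-Response script: capture numbers from the response as metrics.
const items = ut.response.json().items;
ut.customMetrics.set("itemCount", items.length);
ut.customMetrics.set("itemTotal", items.reduce((sum, n) => sum + n, 0));
```

Each metric stored this way shows up in the check details per measurement and can be charted in dashboards.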


Logging

  • ut.log(text): Outputs the specified text to the console log: the Request log if executed in the Pre-Request script; the Response log if executed in the Post-Response script.


Assertions

We support the Expect and Should interfaces from Chai JS; see Chai - Should and Chai - Expect to read how you can express various value tests and comparisons:

  • ut.expect(value) + various expressions

  • ut.should(value) + various expressions

Any .expect() and .should() expressions, if used by themselves, will generate an error if the desired criteria are not met, and will halt execution of the monitor. Any additional assertions defined in the remainder of the script will not be executed. However, usually you want to execute the full set of assertions, regardless of whether any of the earlier assertions fail. This can be achieved using ut.test():

  • ut.test(descriptionText, testFunction): The output (success or failure) of an .expect or .should defined inside the specified testFunction ends up in the monitor’s assertions output. In addition, when an assertion fails, ut.test() will make sure that execution of the script is not halted.
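The snippet below sketches this behavior. The `ut` stand-ins are deliberately tiny (the real ut.expect uses the Chai library and the real ut.test reports into the monitor's assertions output); the point is that each ut.test() is reported separately and a failing one does not stop the tests that follow.

```javascript
// Tiny illustrative stand-ins for ut.test and ut.expect.
const results = [];
const ut = {
  expect: (actual) => ({
    to: {
      equal: (expected) => {
        if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
      },
    },
  }),
  test: (description, fn) => {
    try { fn(); results.push({ description, passed: true }); }
    catch (e) { results.push({ description, passed: false }); }
  },
  response: { code: 200, json: () => ({ status: "ok" }) }, // sample response
};

// Post-Response script: three independent assertions.
ut.test("Status code is 200", () => {
  ut.expect(ut.response.code).to.equal(200);
});
ut.test("Body reports failure (this one fails)", () => {
  ut.expect(ut.response.json().status).to.equal("failed");
});
ut.test("Body reports success", () => {
  ut.expect(ut.response.json().status).to.equal("ok");
});
```

Even though the second test fails, the third still runs and all three outcomes are listed, exactly the behavior you want from a full assertion suite.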