Imagine connecting your Uptrends account to the operations management system your organization uses. Feeding Uptrends alert data into your existing incident management processes creates a seamless integration of Uptrends' external monitoring into the day-to-day procedures your teams already use.

If our predefined integration types don't include your DevOps software, you can use the custom integration option to build the integration yourself. The key to building a successful integration is knowing what kind of message the other system expects. The third party's API documentation tells you which URL to use and which content to post to that URL. Integrations like these are often called webhook-based: they let Uptrends "hook" into the other system by making a direct call to it. Uptrends initiates that call as soon as a relevant alert occurs.

The content is usually JSON-formatted (although XML and other formats work as well) and contains the fields that carry a particular meaning and importance in that system.

Building the right message content

To fill in relevant content for those fields in each outgoing alert message, the message body you define needs to contain so-called system variables. When you refer to a system variable in your message content, it is replaced by the appropriate content when Uptrends generates an alert. Using system variables allows you to write message definitions that meet the other system's expectations.

Let's look at an example. An obvious piece of information that should probably be part of any alert message is a plain text description of the error that was detected by Uptrends. Suppose that the system you want to connect to has a field called "errordescription". You could plug Uptrends' error description into that field by including this definition in your JSON-formatted message:

{ "errordescription": "{{@alert.description}}" }

In Uptrends' system variables, the text description of the error that triggers an alert is available in the {{@alert.description}} variable, so you simply put that variable right where you need it in your message. Similarly, you can use {{@alert.timestamp}} to refer to the time of the alert, {{@monitor.name}} to include the name of the monitor, and so on. All available system variables are listed below. Note how all system variables are named {{@...}}:

| Variable | Description | Example value |
| --- | --- | --- |
| @alert.alertGuid | Unique id of this alert | cbfc7769-edb2-46a7-89d0-1e1b1fb0815b |
| @alert.type | The type of this alert message: Alert (a new error was detected), Ok (the original error has been resolved), or Reminder (the original error is still ongoing) | Alert, Ok, or Reminder |
| @alert.timestampUtc | The date and time of the alert, expressed in UTC and formatted as ISO 8601 | 2018-11-08T16:26:58 |
| @alert.timestamp | The same date and time as @alert.timestampUtc, but in the timezone of your account, also formatted as ISO 8601 | 2018-11-08T10:26:58 |
| @alert.firstErrorUtc | The date and time of the original error that triggered this alert, expressed in UTC and formatted as ISO 8601 | 2018-11-08T16:21:58 |
| @alert.firstError | The same date and time as @alert.firstErrorUtc, but in the timezone of your account, also formatted as ISO 8601 | 2018-11-08T10:21:58 |
| @alert.description | Text description of the error that triggered this alert | DNS expected result not found |
| @alert.firstErrorCheckUrl | The URL of a deep link that takes you to the details of the error that triggered this alert | https://app.uptrends.com/Report/ProbeLog/Check/30833627687 |
| @alert.firstErrorCheckId | The id of the error that triggered this alert | 30833627687 |
| @alertDefinition.guid | The unique id of the alert definition that was used to generate this alert | 2C97E464-6112-435B-8C8D-6DEF1E18273A |
| @alertDefinition.name | The name of the alert definition that was used to generate this alert | Default Alert |
| @escalationLevel.id | The id of the escalation level that was used to generate this alert | 1 |
| @escalationLevel.message | The custom message that was specified in the escalation level | Please use checklist THX-1138 to investigate this issue. |
| @incident.key | Unique id of the incident this alert belongs to; an Error alert and an Ok alert share the same incident key | ba8ffcb7-5de0-489e-b649-f00f0b447e80-0-30099055746 |
| @monitor.monitorGuid | The unique id of the monitor in your account that triggered this alert | 849b2046-213d-43ad-9efc-5af1faaeb222 |
| @monitor.name | The name of the monitor in your account that triggered this alert | GalacticResorts.com - DNS |
| @monitor.notes | Any custom notes that were entered in the monitor settings | Please check Amazon Route53 DNS entries |
| @monitor.url | The URL or network address this monitor is checking | https://galacticresorts.com |
| @monitor.dashboardUrl | The URL of a deep link that takes you to the dashboard for this monitor | https://app.uptrends.com/Probe/Dashboard?probeGuids=fe1ad4a2-57c1-49db-af16-ff3a6beaa8d4 |
| @monitor.editUrl | The URL of a deep link that takes you to the settings for this monitor | https://app.uptrends.com/Report/Probe?probeGuid=fe1ad4a2-57c1-49db-af16-ff3a6beaa8d4 |
| @account.accountId | Your Uptrends account id | 299840 |
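Putting several of these variables together, a message body for a hypothetical incident API might look like this (the field names on the left are assumptions for illustration, not a real schema; check your system's API documentation for the fields it actually expects):

```json
{
  "title": "{{@monitor.name}}: {{@alert.description}}",
  "status": "{{@alert.type}}",
  "occurredAt": "{{@alert.timestampUtc}}",
  "detailsUrl": "{{@alert.firstErrorCheckUrl}}",
  "account": "{{@account.accountId}}"
}
```

When an alert fires, Uptrends replaces each {{@...}} reference with the corresponding live value before posting the message.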

Error messages, OK messages, and Reminders

When you create a message definition in the Customizations tab, Uptrends uses that message definition for all alert types: an Error alert when the check first detects the error, an OK alert when the error is resolved, and Reminder alerts (depending on your escalation level settings) in between.

The message content is practically the same for all alert types, except for the timestamp values and the {{@alert.type}} variable, which outputs the alert type itself.

Although fine for many situations, using the same message content is not sufficient if you need different content for different alert types, or if creating a new incident in your system (based on an Error alert) requires a different URL than resolving that same incident (based on an OK alert).

Separate messages for different alert types

To create separate message definitions for alert types, click the "Add steps" button at the bottom of the Customizations tab. The "Add steps" button creates an additional message definition which you can configure, for example, to only apply to OK alerts. For each alert type, you can now specify the appropriate HTTP method (GET/POST/PUT/PATCH/DELETE), URL, headers, and request body.

Click the Error alert, OK alert and Reminder alert checkboxes at the top of each step definition to create the desired setup. You can only check each alert type once, but OK alerts and Reminder alerts are optional. If you don't want to send OK alerts or Reminders at all, simply leave those checkboxes unchecked.
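As a sketch, the step that handles OK alerts could send a different body than the Error step, for instance a resolve action aimed at your system's resolution endpoint (the action and field names below are assumptions for illustration):

```json
{
  "action": "resolve",
  "incidentKey": "{{@incident.key}}",
  "resolvedAt": "{{@alert.timestampUtc}}"
}
```

Consult the third party's API documentation for the exact method, URL, and payload it expects when resolving an incident.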

Error alerts and OK alerts belong together

Whether you're using separate messages for Error and OK alerts or not, it's probably useful for the external system to know which alerts belong together. After all, each incident starts with an Error alert and ends with an OK alert. To help the external system understand this, you can use the {{@incident.key}} variable in your messages. Error and OK alerts share the same incident key, but each new incident has a unique key. In some systems, the incident key is called a deduplication key or incident id.
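For example, assuming the receiving system matches events on a hypothetical deduplicationKey field, both the Error and the OK message could include:

```json
{
  "deduplicationKey": "{{@incident.key}}",
  "eventType": "{{@alert.type}}",
  "summary": "{{@alert.description}}"
}
```

The Error alert and the OK alert for the same incident then carry the same deduplicationKey value (with eventType set to Alert and Ok, respectively), so the external system can pair them up and close the right incident.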

Using variables

When the Customize option is active in an integration, you can maintain one or more variables for that integration on the Main tab. By default (as indicated by Specify value here), each variable has a fixed value defined in the integration itself. You can subsequently refer to those variables in the message definition in the Customizations tab. For more information on how to define and use variables, please read this knowledge base article about using variables in a multi-step API setup. The exact same approach applies to integrations.

For integrations, however, you have one additional option that adds even more power. Suppose you've created an integration that connects to your IT management system. The integration sends information based on the monitor and the alert that triggered the message. But is that enough information for the IT management system to take appropriate action? You may want to send additional information about how to handle the new incident, which typically boils down to: how should the incident be routed through the external system? Different alert definitions (in fact, each escalation level within them) can specify unique routing information, which you can include in the outgoing alert message.

To do this, define a variable in the Main tab of the integration, and choose Specify value in escalation level. Notice how you can no longer give it a value in the integration itself. Instead, when you use this integration in the escalation levels of your alert definitions, you can specify values for this variable there. As a result, you only need to create one single integration definition for your IT management system, while maintaining flexibility in the way that all alerts are handled there.
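As an illustrative sketch: suppose you define a variable named RoutingTeam (a hypothetical name) and choose Specify value in escalation level. Assuming the variable is referenced by name in double curly braces, as described in the linked knowledge base article on variables (the exact reference syntax is documented there), the message body could include it like this:

```json
{
  "assignedTeam": "{{RoutingTeam}}",
  "summary": "{{@alert.description}}"
}
```

An escalation level that sets RoutingTeam to, say, network-ops would route its incidents to that team, while other escalation levels can set different values through the same single integration.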

Checking an integration using test messages

Once you've created or modified a customized integration, it's useful to test it before using it in production. The Customizations tab in the integration editor has a button titled Send test message that lets you send a test message to the third-party system using the HTTPS step(s) you've created. When you use this test function, you can select which alert type (an Error alert, an OK alert, or a Reminder alert) Uptrends should use for this particular test message. You can fill in any other appropriate values if necessary; the remaining data (which would normally be real monitor and alert data) is filled with fictitious values.

Once Uptrends generates the message and sends it to the third-party system or API, the full message content, the server's response code, and the response content are displayed. You can expand the request headers and content, as well as the response headers and content, to inspect the outgoing and incoming traffic involved in sending this test message.

Checking an integration using live data

While the test function described in the previous section is useful for static testing of your message and variables and establishing that the communication channel to the external system works correctly, it's good to have the option to verify that the integration works correctly in a live situation as well.

First, make sure that one of your alert definitions actually uses your integration. Otherwise, Uptrends never triggers the integration to send out messages. For more information on how to activate integrations in your alerting setup, please read this Academy lesson about escalation levels.

Next, an error situation needs to happen so that your monitoring generates a real alert. As soon as you see an alert in your Alert status or Alert log dashboard, click on it to reveal the details for that alert. The Details tab lists all key properties of the alert; the Messages tab contains the information you need to inspect the message traffic between Uptrends and the external system.

On the Messages tab, locate the integration you want to inspect; the tab may also list other integrations that were triggered by this alert. Expand the integration panel and the requests and responses within it. You'll see the full content of the outgoing message(s), the responses sent back by the external system, and any error messages if there was a problem sending the alert message.

Including external IDs or custom data

When you're integrating Uptrends with a third party system, it's good to consider if there is a relationship between your Uptrends monitors and the resources (sometimes called components or services) you've defined in the third party system. The monitors in your Uptrends account have a name and a unique identifier (a monitorGuid), but those are typically not known in the third party system. The resources defined in the third party system probably have their own identifier too, which Uptrends doesn't know about either.

If you want a monitor in Uptrends to trigger an incident for a specific resource on the other side, you need to define some sort of relationship between the two. In Uptrends, you can define that relationship by taking the identifier (or other important information) of the external resource/component and adding it as a custom value in the settings of a monitor.

As a result, the alerting data sent to the external system by Uptrends can include that identifier, so the receiving system knows which resource or component is affected by the incoming alert.

You can add custom fields in the Metadata section in the Main tab of a monitor. Aside from the external value you want to store, each custom field must also have a unique name so that we can refer to it in an alert message. For example, suppose your third-party system has the concept of Components, and each component has a ComponentId as its unique identifier. You'll want to specify that ComponentId in the monitor settings in Uptrends, so the two can be linked together.

To do this, locate the Custom fields section in the settings of your monitor. Add a custom field by filling in "ComponentId" as the field name, and the appropriate external ID value (e.g., 7149488f-0b33-460d-85eb-210c0e80d7ba) as the field value. Click Save to store the new settings.

We can now make sure that the external value is sent across as part of the alert message by including it in the Request body of the outgoing message. You can use the {{@CustomField()}} function to refer to the custom field you just added. As an example, you could add this fragment to the request body:

{ "Component": "{{@CustomField(ComponentId)}}" }

Notice how the field name "ComponentId" we used in the monitor settings is literally included in the @CustomField() function call. When a real alert is triggered, this will generate the following content:

{ "Component": "7149488f-0b33-460d-85eb-210c0e80d7ba" }

The external system can use this value to create an incident for the appropriate component. This example uses just one custom field, but you can use multiple custom values if you choose.
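The same pattern extends naturally to more than one custom field. For instance, assuming you've also stored a hypothetical ServiceId custom field on the monitor, the request body could reference both:

```json
{
  "Service": "{{@CustomField(ServiceId)}}",
  "Component": "{{@CustomField(ComponentId)}}"
}
```

Each @CustomField() reference is resolved against the custom fields of the monitor that triggered the alert.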