Alertgram

Easy and simple Prometheus Alertmanager alerts on Telegram

Alertgram is the easiest way to forward alerts to Telegram (supports Prometheus Alertmanager alerts).

Introduction

Everything started as a way of forwarding Prometheus Alertmanager alerts to Telegram, because the solutions I found were too complex; I just wanted to forward alerts to channels without trouble. Alertgram is just that: a simple app that forwards alerts to Telegram groups and channels, with some small features that help, like metrics and a dead man's switch.

Features

  • Alertmanager alerts webhook receiver compatibility.
  • Telegram notifications.
  • Metrics in Prometheus format.
  • Optional dead man's switch endpoint.
  • Optional customizable templates.
  • Configurable notification chat ID targets (with fallback to default chat ID).
  • Easy to set up and flexible.
  • Lightweight.
  • Perfect for any environment, from a company cluster to cheap home clusters (e.g. K3s).

Input alerts

Alertgram is developed in a decoupled way, so in the future it may be extended with more inputs apart from Alertmanager's webhook API (ask for a new input if you want one).

Options

Use the --help flag to show the options.

The configuration of the app is based on flags that can also be set as environment variables by prefixing the variable with ALERTGRAM, e.g. the flag --telegram.api-token becomes ALERTGRAM_TELEGRAM_API_TOKEN. You can combine both; flags take precedence.
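
For example, these two ways of running the Docker image are equivalent (XXXXX and YYYYY are placeholder values):

docker run slok/alertgram:latest --telegram.api-token=XXXXX --telegram.chat-id=YYYYY

docker run -e ALERTGRAM_TELEGRAM_API_TOKEN=XXXXX -e ALERTGRAM_TELEGRAM_CHAT_ID=YYYYY slok/alertgram:latest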

Run

To forward alerts to Telegram, the minimum options that need to be set are --telegram.api-token and --telegram.chat-id.

Simple example

docker run -p8080:8080 -p8081:8081 slok/alertgram:latest --telegram.api-token=XXXXX --telegram.chat-id=YYYYY

Production

Metrics

The app comes with Prometheus metrics: it measures forwarded alerts, HTTP requests, errors... with rates and latencies.

By default they are served on /metrics at 0.0.0.0:8081.
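
A quick way to check them locally, assuming the default metrics address:

curl -s http://127.0.0.1:8081/metrics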

Development and debugging

You can use the --notify.dry-run flag to show the alerts on the terminal instead of forwarding them to Telegram.

Also remember that you can use the --debug flag.
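
For example, a sketch of running the Docker image in dry-run mode with debug logging (the credentials are fake placeholders, since nothing is sent to Telegram):

docker run -p8080:8080 -p8081:8081 slok/alertgram:latest --telegram.api-token=fake --telegram.chat-id=1234567890 --notify.dry-run --debug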

FAQ

Are only Alertmanager alerts supported?

At this moment yes, but we can add more alert input systems if you want; create an issue so we can discuss and implement it.

Where does Alertgram listen for Alertmanager alerts?

By default on 0.0.0.0:8080/alerts, but you can use --alertmanager.listen-address and --alertmanager.webhook-path to customize it.
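
As a minimal sketch, an Alertmanager configuration whose receiver points at Alertgram's default webhook could look like this (the alertgram hostname and the file path are assumptions about your setup):

# Minimal alertmanager.yml fragment routing everything to Alertgram's webhook.
cat > alertmanager.yml <<'EOF'
route:
  receiver: alertgram
receivers:
  - name: alertgram
    webhook_configs:
      - url: http://alertgram:8080/alerts
EOF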

Can I notify to different chats?

There are 3 levels at which you can customize the notification chat:

  • By default: Using the required --telegram.chat-id flag.
  • At the URL level: using a query string parameter, e.g. 0.0.0.0:8080/alerts?chat-id=-1009876543210 (see the example after this list). The query parameter name can be customized with the --alertmanager.chat-id-query-string flag.
  • At the alert level: if an alert has a label with the chat ID, the notification will be forwarded to the chat in that label's value. Use the --alert.label-chat-id flag to customize the label name; by default it is chat_id.

The preference is in order from highest to lowest: Alert, URL, Default.
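
For example, targeting a specific chat through the URL query parameter, using the test alert payload described later in this README (the chat ID is a placeholder):

curl -i 'http://127.0.0.1:8080/alerts?chat-id=-1009876543210' -d @./testdata/alerts/base.json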

Can I use custom templates?

Yes! Use the --notify.template-path flag. You can check testdata/templates for examples.

The templates are Go HTML templates with Sprig functions, so you can use those as well.

You can also use the notification dry-run mode to check your templates without needing to notify on Telegram:

export ALERTGRAM_TELEGRAM_API_TOKEN=fake
export ALERTGRAM_TELEGRAM_CHAT_ID=1234567890

go run ./cmd/alertgram/ --notify.template-path=./testdata/templates/simple.tmpl --debug --notify.dry-run

To easily send an alert and check the template rendering without an Alertmanager, Prometheus, alerts... you can use the test alerts in testdata/alerts:

curl -i http://127.0.0.1:8080/alerts -d @./testdata/alerts/base.json

Dead man's switch?

A dead man's switch (from now on, DMS) is a technique or process where a signal must be received at regular intervals to keep the DMS disabled; if the signal is not received, the DMS activates.

In monitoring this means: if an alert is not received at regular intervals, the switch activates and notifies that we are not receiving alerts. This is mostly used to know that our alerting system is working.

For example, we would configure Prometheus to trigger an alert continuously, Alertmanager to send this specific alert every 7m to the DMS endpoint in Alertgram, and Alertgram with a 10m DMS interval.

With this setup, if Prometheus fails to create the alert, Alertmanager fails to send it to Alertgram, or Alertgram doesn't receive it (e.g. due to network problems), Alertgram will send an alert to Telegram to notify us that our monitoring system is broken.
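
A minimal sketch of that pipeline, assuming the alertgram hostname, the default /alert/dms path described below, and illustrative rule/receiver names: an always-firing Prometheus rule acts as the signal, and an Alertmanager route re-sends it every 7m to Alertgram's DMS endpoint.

# 1) An always-firing Prometheus rule used as the DMS signal:
cat > dms-rule.yml <<'EOF'
groups:
  - name: meta
    rules:
      - alert: DeadMansSwitch
        expr: vector(1)
        annotations:
          summary: Always-firing alert used as the dead man's switch signal
EOF

# 2) A fragment to merge into your alertmanager.yml: route that alert to
#    Alertgram's DMS endpoint and re-notify every 7m:
cat > alertmanager-dms-fragment.yml <<'EOF'
route:
  routes:
    - match:
        alertname: DeadMansSwitch
      receiver: alertgram-dms
      repeat_interval: 7m
receivers:
  - name: alertgram-dms
    webhook_configs:
      - url: http://alertgram:8080/alert/dms
EOF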

You could use the same Alertgram instance or a different one, usually on another machine or cluster, so if the cluster/machine fails your DMS is isolated and can still notify you.

To enable Alertgram's DMS, use the --dead-mans-switch.enable flag. By default it will listen on /alert/dms, with a 15m interval, and use the default Telegram notifier and chat ID. To customize these settings use the following flags (a runnable sketch follows the list):

  • --dead-mans-switch.interval: To configure the interval.
  • --dead-mans-switch.chat-id: To configure the notification chat. It is independent of the notifier (although at this moment the only notifier is Telegram); if not set, the notifier's default chat target will be used.
  • --alertmanager.dead-mans-switch-path: To configure the path where Alertmanager can send the DMS alerts.
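
A sketch of running the image with the DMS enabled and customized (token, chat IDs and the interval value are placeholders and assumptions about accepted formats), plus a manual ping of the DMS endpoint using the test alert payload:

docker run -p8080:8080 -p8081:8081 slok/alertgram:latest \
  --telegram.api-token=XXXXX --telegram.chat-id=YYYYY \
  --dead-mans-switch.enable \
  --dead-mans-switch.interval=10m \
  --dead-mans-switch.chat-id=ZZZZZ

# Manually send the DMS signal (what Alertmanager would do on a schedule):
curl -i http://127.0.0.1:8080/alert/dms -d @./testdata/alerts/base.json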