All applications hosted on Scalingo generate a lot of metrics, made available to their owner through beautiful graphs on the web dashboard. The next step many of you were expecting is alerting on these metrics. Today we are announcing the release of alerts on any metric collected by an application.
When you host an application on Scalingo, there are many scenarios where you may want to be alerted. Imagine your application usually uses around 30% of its allocated memory. Suddenly the memory usage grows and your application starts filling up its memory. It eventually starts swapping and drastically slows down. By setting up an alert on the RAM usage or on the swap usage, you would be notified and could react, either by looking for a memory leak or by using a bigger container.
And there are many other use cases where alerts on the metrics gathered by your application can help you monitor it: you might want to know whether your application is receiving an abnormally high number of requests, or whether it is returning 5XX HTTP errors.
With today’s release, Scalingo now lets you create alerts on an application metric. When the metric’s value goes above or below a user-defined limit, Scalingo sends a notification to the specified notifiers: email, Slack channel, Rocket.Chat…
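Conceptually, an alert is a threshold check on a metric value. A minimal sketch of this idea, with hypothetical names (this is not Scalingo's actual implementation):

```python
# Minimal sketch of threshold alerting. All names here are
# illustrative, not Scalingo's real data model.
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str                     # e.g. "memory", "swap", "p95_response_time"
    limit: float                    # user-defined threshold
    send_when_below: bool = False   # trigger when the value drops below the limit

    def is_triggered(self, value: float) -> bool:
        """Return True when the metric value crosses the limit."""
        if self.send_when_below:
            return value < self.limit
        return value > self.limit

# Alert when RAM usage exceeds 80% of the allocated memory.
alert = Alert(metric="memory", limit=0.8)
alert.is_triggered(0.95)  # True: the configured notifiers would be contacted
alert.is_triggered(0.30)  # False: nothing happens
```

The `send_when_below` flag covers the "goes below a limit" case, for instance alerting when traffic drops unexpectedly low.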
Creating an alert for your application
The alerts are configurable through the Scalingo dashboard, in the Notifications section. This page contains two parts. The first part, which has existed for a few months, is about the notifiers of your application.
The second part is what matters today: Alerts! This card presents a list of existing alerts configured for your application and a button to create a new one:
When creating a new alert, a list of all the container types of your application is displayed. When selecting the web container type, a list of 7 metrics to monitor is available:
- CPU, RAM and swap: percentage of the allocated resource being consumed
- Response time: 95th percentile of the requests' response time
- 5xx errors: number of HTTP errors (status codes from 500 to 599)
- RPM and RPM per container: requests per minute (RPM) received by your application. If your application is scaled to multiple containers, the RPM per container divides the application's RPM by the number of containers.
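The RPM-per-container figure is simply the application's total RPM divided by its container count. A quick sketch (the helper name is hypothetical):

```python
def rpm_per_container(total_rpm: float, containers: int) -> float:
    """Requests per minute handled by each container, on average."""
    return total_rpm / containers

# An app receiving 1200 RPM scaled to 4 web containers:
rpm_per_container(1200, 4)  # 300.0 RPM per container
```

This per-container view is the one to watch when scaling horizontally: the application-wide RPM can grow while the load on each container stays constant.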
Finally, you need to give a threshold above/below which the alert is triggered.
On the next step, you select which of your application's notifiers will be used when the alert is triggered:
When done, the alert is configured and ready to notify you when your application requires your attention!
Alert-related events
Every time an alert is triggered, an event is created. This event appears on the application's timeline and records the user responsible for the operation.
If notifiers are defined for this alert, the event is also forwarded to all the notifiers. On a Slack channel, the notifications look like this:
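Slack notifications of this kind are typically delivered through an incoming webhook that accepts a JSON payload. A hedged sketch of what building such a payload might look like (the field names and message format are illustrative, not Scalingo's actual format):

```python
import json

def alert_payload(app: str, metric: str, value: float, limit: float) -> dict:
    """Build an illustrative Slack incoming-webhook payload for an alert."""
    return {"text": f"[{app}] alert: {metric} at {value:.0%} crossed the {limit:.0%} limit"}

payload = alert_payload("my-app", "RAM", 0.95, 0.80)
body = json.dumps(payload)  # POST this to the webhook URL as application/json
```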
An event is also generated when an alert is created or deleted:
If an important metric is missing for your app, feel free to reach out to our support and ask for it.
Finally, the release of the Alerter service is the last piece of software infrastructure needed for the long-awaited auto-scaling feature. Stay tuned.