
Andres Cidel · 3 min read

Breaking applications down from a monolithic architecture into decoupled microservices is a common practice in the industry. Some engineering teams find the monolithic approach more suitable for their needs and context, but we are not discussing which is better here.

Instead, I would like to walk through a checklist for starting the process of containerizing an existing application.

Identify relevant environment variables

Few applications get by without environment variables; in most cases they play a central role in configuring the application's behavior.

First, identify the variables that handle sensitive information, a.k.a. secrets. Depending on the level of security required by your company's rules, you can keep consuming those secrets via environment variables, or you can retrieve them programmatically with a secretless approach, to mention one example. But this is a conversation you need to have within your team.

Next, identify the variables that won't change across environments (dev, stg, acc, and prod); those can be set as defaults in the Dockerfile to keep your configuration DRY.
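
As an illustration, here is a minimal Dockerfile sketch for a hypothetical Node.js service (the image, port, and variable names are assumptions) that bakes in the environment-independent defaults and leaves the secrets to be injected at runtime:

```dockerfile
# Sketch for a hypothetical Node.js service; all names are illustrative.
FROM node:18.19

WORKDIR /app
COPY . .
RUN npm ci --omit=dev

# Defaults that are identical in every environment can live here.
ENV APP_PORT=8080 \
    LOG_FORMAT=json

# Secrets such as DB_PASSWORD are deliberately NOT set in the image;
# inject them at runtime, e.g. docker run -e DB_PASSWORD=... my-app
CMD ["node", "server.js"]
```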

Identify the volumes

This is especially important if the container will run with a read-only filesystem. Most of the official images already add a directive to declare the required volumes. For example, the Dockerfile of the mysql image indicates that the directory /var/lib/mysql should be treated as a volume. If the application you are wrapping needs to perform I/O operations on a file or directory, that path should most likely be declared as a volume (see the sketch after the list below).

Common cases are:

  • Configuration files that change across environments
  • Assets
  • Database files
  • PID files
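
In a Dockerfile this is a single directive. A minimal sketch with hypothetical paths, modeled on the mysql example above:

```dockerfile
FROM debian:12

# Declare the directories the application writes to as volumes so they
# remain writable even when the container is started with --read-only.
VOLUME ["/var/lib/myapp/data", "/var/run/myapp"]
```

When the container is started with `docker run --read-only`, Docker still mounts writable anonymous volumes at exactly those paths.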

Document dependencies

Does your application make use of a database, an API, or other services? Then create an inventory to document the required credentials, URLs, and any network-related information. The most important point to consider here is that any dependency managed by you or your team should run in a different container or host. As a rule of thumb, never run two applications in the same container; it is considered an antipattern and can lead to unpredictable shutdowns.
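
To make the one-process-per-container rule concrete, here is a hypothetical Compose sketch (service names and credentials are illustrative) that keeps the application and its database in separate containers while documenting the connection details in one place:

```yaml
# docker-compose.yml sketch; everything here is illustrative.
services:
  app:
    image: my-app:1.0
    environment:
      DB_HOST: db        # the dependency is reached over the network,
      DB_PORT: "5432"    # never bundled into the same container
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # in practice, pull this from a secret store
```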

Choose the right base image

As a starting point, I recommend running the very same version of the OS with the very same version of the runtime. Once the application is up and running, you can make improvements step by step. This approach will make it easier to troubleshoot any issue related to version incompatibilities, missing packages, and so on.

The final image should use a minimal OS or a distroless base and be as light as possible. There are further recommendations for improving the security and size of your images that you can consider.
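
As a sketch of that progression, assuming a Go application that currently runs on Debian 12 with Go 1.22 (all versions are illustrative): the first stage mirrors today's environment, and the final stage ships only the binary on a distroless base.

```dockerfile
# Stage 1: mirror the OS and runtime versions the application uses today.
FROM golang:1.22-bookworm AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

# Stage 2: once everything works, switch the final image to a minimal base.
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```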

Do you have automatic tests?

If the answer is yes, you are already ahead of the game. Tests are very important when transitioning to cloud-native environments, especially when you plan to implement CI/CD. A set of tests will help guarantee that the application works as expected. If you don't have tests at this moment, talk with the developer(s) to come up with an initial set of tests and start automating this process. If you are the developer, I think you already know what to do.
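
Once a test suite exists, wiring it into CI is straightforward. A hypothetical GitHub Actions sketch that builds the image and runs a basic smoke test (the /health endpoint is an assumption about the application):

```yaml
# Hypothetical CI workflow; adapt the image name and endpoint to your app.
name: smoke-test
on: [push]
jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-app:ci .
      - run: |
          docker run -d --name app -p 8080:8080 my-app:ci
          sleep 5
          curl --fail http://localhost:8080/health  # assumed health endpoint
```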

Have a happy containerization process!

Every application has its own challenges and complexities, but these topics come up again and again when teams start the transition from a monolithic to a microservices architecture.

Andres Cidel · 2 min read

Alertmanager offers the possibility to configure receivers that forward grouped alerts to multiple notification integrations.

I recently needed to send notifications to a webhook receiver. The required configuration is well documented.
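
For reference, a minimal webhook receiver in the Alertmanager configuration looks roughly like this (the URL is illustrative):

```yaml
route:
  receiver: my-webhook
receivers:
  - name: my-webhook
    webhook_configs:
      - url: https://example.internal/alerts
```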

The OpenShift monitoring stack offers the possibility to add custom configuration to the Prometheus and Alertmanager components. In this case, Alertmanager needed to be configured, and OpenShift supports that; for more details, check the documentation.
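
In OpenShift 4, this custom configuration is applied through the alertmanager-main Secret in the openshift-monitoring namespace; a sketch (check the documentation for your cluster version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-main
  namespace: openshift-monitoring
stringData:
  alertmanager.yaml: |
    route:
      receiver: my-webhook
    receivers:
      - name: my-webhook
        webhook_configs:
          - url: https://example.internal/alerts
```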

But the problem I had was that the endpoint that would receive the alerts had some strict requirements:

  • A TLS client certificate.
  • A custom HTTP header needed by the gateway.

The first issue was that Alertmanager is controlled, configured, and managed by the OpenShift Monitoring Operator, which does not allow adding volumes to the Alertmanager StatefulSet: the controller forces its state back. This is a problem because the TLS configuration requires the location of the certificate and the key on disk; since I could not mount a Secret as a volume in the pod's filesystem, I could not provide the credentials.
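
To make the problem concrete: the client TLS settings for a webhook receiver reference files on disk, which is exactly what cannot be provided when no volume can be mounted (paths are illustrative):

```yaml
receivers:
  - name: my-webhook
    webhook_configs:
      - url: https://example.internal/alerts
        http_config:
          tls_config:
            # These paths must exist inside the pod's filesystem, which
            # would require mounting a Secret as a volume.
            cert_file: /etc/alertmanager/tls/client.crt
            key_file: /etc/alertmanager/tls/client.key
```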

The second problem was the HTTP header. Even with full control over the Alertmanager configuration, I could not add HTTP headers, or even URL parameters, to the URL of the host that receives the notifications.

The only solution I found was to put an intermediary proxy in between: it receives the payload sent by Alertmanager and re-sends the request to the desired receiver with the custom TLS configuration and the custom header. On the Alertmanager side, I added a webhook receiver pointing at the Service of the proxy, which dispatches the notifications and forwards them to the special receiver, as sketched below.
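
With the proxy in place, the Alertmanager side only needs a plain webhook receiver pointing at the proxy's in-cluster Service (names are illustrative); the proxy holds the client certificate and injects the custom header before forwarding:

```yaml
receivers:
  - name: forwarding-proxy
    webhook_configs:
      # Plain HTTP to the intermediary proxy inside the cluster; the proxy
      # adds the TLS client certificate and the custom HTTP header.
      - url: http://alert-proxy.my-namespace.svc:8080/forward
```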

This pattern is more or less what a service mesh does with sidecar proxies, since they intercept HTTP requests and rework them if necessary.