Two infrastructure layers for distributed systems

It looks like the separation between two infrastructure layers is increasing.

The first layer is pure classical infrastructure. You receive bare-metal servers and install the basics there, such as the operating system and networking. Automation is typically done with Ansible playbooks. Alternatively, you receive basic infrastructure directly in the cloud, where you create virtual machines using, for example, Terraform scripts. In the past, you deployed your applications directly on this layer. Conflicts between libraries and tools, thousands of mounts and symlinks, contradictory application properties, and so on were a part of life.
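As an illustration of first-layer automation (the host group name and package list here are assumptions, not a prescription), a minimal Ansible playbook for preparing bare-metal hosts might look like this:

```yaml
# Hypothetical first-layer playbook: prepare bare-metal hosts.
# The "bare_metal" host group and the package names are illustrative assumptions.
- hosts: bare_metal
  become: true
  tasks:
    - name: Install base packages (time sync and a container runtime)
      ansible.builtin.apt:
        name:
          - chrony
          - containerd
        state: present
        update_cache: true

    - name: Enable and start the container runtime
      ansible.builtin.systemd:
        name: containerd
        state: started
        enabled: true
```

The point is that the first layer stops at "machine is reachable, patched, and can run containers"; everything application-specific belongs to the second layer.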

The second layer of infrastructure makes life easier, but only if applied correctly. It consists of lightweight virtualization with Docker and orchestration with Kubernetes.

The second infrastructure layer should be completely separated from the first. Then your applications become portable: it does not matter what is behind the scenes, a cloud or bare-metal servers.

To achieve this you need declarative configuration. This is a very powerful feature that became popular with Kubernetes (k8s). Need an installation? Write a Dockerfile and a k8s deployment YAML and put them in Git. Need application properties? Do not experiment with mounts; simply add the property files to Git and create a k8s ConfigMap.
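As a sketch of the declarative approach (the application name, labels, and image `registry.example.com/demo-app:1.0` are assumptions), the deployment manifest stored in Git might look like this:

```yaml
# Hypothetical Deployment manifest, kept in Git next to the Dockerfile.
# The image reference is an assumed example, not a real registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0
          ports:
            - containerPort: 8080
```

Rolling it out is then a single declarative step, `kubectl apply -f deployment.yaml`, and the same file works against any conformant cluster, whether it runs in a cloud or on bare metal.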

Moreover, you should avoid physical mounts in containers. I see that it is difficult to switch to this new way of thinking. Every time you have files, you try to put them outside of containers, but this overcomplicates the setup, especially when you have more than one server or use a hybrid cloud model. Every time, you need to make sure that these mounts are available on all of your hosts.
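A sketch of the difference (the names, paths, and image are assumptions): instead of a hostPath mount, which ties the pod to files that someone must provision on every node, the same file can be declared as a ConfigMap and mounted into the container:

```yaml
# Anti-pattern (shown commented out): a hostPath volume that
# requires /opt/app/config to exist on every host the pod may land on.
#
# volumes:
#   - name: app-config
#     hostPath:
#       path: /opt/app/config

# Portable alternative: the file lives in Git as a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    greeting=hello
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/demo-app:1.0  # assumed example image
      volumeMounts:
        - name: app-config
          mountPath: /etc/app   # file appears as /etc/app/application.properties
  volumes:
    - name: app-config
      configMap:
        name: app-config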

Two separate teams should handle these two infrastructure layers. The first team's responsibility is to provide the basic infrastructure with an installed k8s cluster. The second team's responsibility is to configure what runs inside the containers and how the application is deployed.