DevOps in practice
DevOps engineers: who are they, and what do they do in a project team? Is it just a fancy synonym for "developer"? The term "DevOps" has become a buzzword. That's why I decided to explain what it actually means and how we put this paradigm into practice.
The way we deliver software has changed. In the traditional approach, the process consisted of two stages. First, the development team (Dev) selected and pre-configured tools and applications. Then the developers prepared installation and configuration manuals and handed the project over to the administrator team (Ops). The manuals were a contract defining the division of responsibilities between developers and administrators.
Under such conditions, Dev and Ops teams usually worked separately, although obviously they had to communicate with each other. Developers perceived administrators as an obstacle on the path to launching an application. Administrators, in turn, saw developers as a source of endless changes and new requirements, threatening the availability, security, and stability of the system.
The situation was difficult but manageable, at least in the waterfall model, and as long as applications depended on relatively few components and new versions of the application and its elements were released only every few months.
And then agile methods rocked the foundations of this paradigm. Agile brought a quicker pace of change and more frequent releases, with new application versions debuting not every few months, but every week or two. Each new release could mean that the requirements applicable to other components were changing as well.
Software is now produced faster than before, so the bottleneck of the process has moved to its "last mile": delivering the solution to the production environment. That is where the installation manuals, or at least the instructions for updating the application and its components to the next version, had to be followed. With the traditional separation of roles between developers and administrators, delivering applications was time-consuming and prone to human error.
How can we ensure that software is deployed as fast as it is developed? By extending the agile approach to the Ops area. In other words, we need agile management of requirements and infrastructure changes.
No other solution could be adopted. For software to be delivered to production in a cycle of frequent, agile releases, Ops specialists had to become part of the Dev team. Alternatively, some developers had to leave their favorite IDEs and become familiar with the Ops domain: clusters, systems, high-availability techniques, firewalls, and security rules.
This new approach came to be known as DevOps.
What does DevOps mean for us?
For us, DevOps means two things. On the one hand, it is infrastructure awareness and knowledge within the team of programmers (declarative infrastructure-as-code). On the other hand, it means that administrators supply an environment capable of efficiently deploying not only the application itself, but also all the components it requires, preferably in an automated manner, with little or no human involvement.
The entire life cycle of an application, with its key areas overlapping, is illustrated by the classic drawing resembling the infinity symbol.
DevOps in practice: Docker, Kubernetes and OpenShift
To take full advantage of the DevOps method, we need technologies that serve as a space for cooperation and enable the developers to shape the environments in which the application will run, from developer workstations all the way to production.
After analyzing and testing the available solutions, we decided to implement the following technology stack at e-point.
The first technology required is containers (Docker). Containers alone are not a full-scale DevOps operation, but using them lets us approach installation procedures in an agile way. Instead of an installer and an installation manual, DevOps specialists create complete images, from which containers are launched, defining the needs of the application. These images not only contain all the binaries, but also define the following:
- What resources the container requires
- Which configuration values need attention
- Which other tools and containers are required for operation
- What network services the application exposes
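Such an image definition can be sketched as a Dockerfile. This is a minimal, illustrative example; the base image, file paths, port, and configuration variables are assumptions for this sketch, not part of any specific e-point setup:

```dockerfile
# Illustrative image definition for a Java service.
# Base image, paths, port, and variables are assumptions for this example.
FROM eclipse-temurin:17-jre

# The binaries ship inside the image: no separate installation manual needed.
COPY target/app.jar /opt/app/app.jar

# Configuration values the operator should pay attention to,
# surfaced as environment variables with overridable defaults.
ENV DB_URL=jdbc:postgresql://db:5432/app \
    JAVA_OPTS="-Xmx512m"

# The network service the application exposes.
EXPOSE 8080

ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /opt/app/app.jar"]
```

Built once, this same image can then be run unchanged on a workstation, in CI, and in production, which is what replaces the old installation manual as the contract between Dev and Ops.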
Containers make it possible to run the same images on a workstation, in the CI and QA environments, and finally in production. In this way they minimize the risk of installation errors and shorten the lead time for preparing a new version.
A container runs on a single server instance, while a production deployment comes with requirements of high availability and scalability, which means that numerous containers, communicating with each other, need to run on various servers. In other words, a method for defining and running applications in clusters is required. This level of management is addressed by Kubernetes.
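Such a cluster deployment is itself defined declaratively. The sketch below shows the idea with a standard Kubernetes Deployment and Service; the names, image tag, replica count, and resource figures are illustrative assumptions:

```yaml
# Illustrative manifest: three replicas of the same image spread across
# cluster nodes, reachable through one stable Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3              # high availability: multiple instances on various servers
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.4.2
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
    - port: 80
      targetPort: 8080
```

Scaling up then becomes a one-line change to `replicas` rather than a manual installation on another server.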
However, that's not all. We needed a technology that lets us run applications safely, managing access rights to application repositories and infrastructure resources. All of that is made possible by OpenShift: a tool for advanced management of Docker containers and Kubernetes resources, both as images and as numerous image instances running simultaneously.
In the new model, administrators manage the infrastructure, resources, and security. They no longer have to spend time installing each separate tool according to the procedure prescribed by that specific solution; the container definition contract is the only method that applies. DevOps specialists, in turn, when launching applications and their subordinate components, consume only the specific pool of resources available to their project.
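The "pool of resources available to a project" can be expressed with a standard ResourceQuota object, which applies to an OpenShift project (a Kubernetes namespace). The quota figures and names below are assumptions chosen for illustration:

```yaml
# Illustrative quota: the project can only consume what the
# administrators have granted it, no matter what its teams deploy.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU the project's pods may request
    requests.memory: 8Gi     # total memory the project's pods may request
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"               # cap on simultaneously running pods
```

With such a quota in place, administrators stay in control of capacity and security while teams deploy freely within their allocation.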
To sum up: first, DevOps thinking starts with the code itself and with defining a proper software production process, one that not only supports but even enforces an automated build-and-test mechanism.
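A minimal sketch of such a build-and-test mechanism, here expressed as a GitLab CI configuration, purely for illustration; the choice of CI system, the stage names, and the image names are all assumptions, since the process itself is tool-agnostic:

```yaml
# Illustrative pipeline: every commit is built, tested, and packaged
# into a container image automatically, with no manual steps.
stages:
  - build
  - test
  - package

build:
  stage: build
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B package -DskipTests

test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn -B test

package:
  stage: package
  image: docker:24
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHORT_SHA
```

Because the pipeline runs on every commit, a version that fails to build or test simply never reaches an environment.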
Second, the process of assigning infrastructure resources has to be automated. It is very difficult to pursue the DevOps approach efficiently without an intermediary layer between the infrastructure and the containers, whether in the form of a private or public cloud or, directly, OpenShift.
When is it worth deploying this methodology?
By introducing the DevOps approach, we increase the speed at which new versions are released, minimize the number and duration of technical downtimes related to launching new versions, and reduce the number of errors caused by differences between individual environments. DevOps also makes it possible for the entire project team to better understand the functional requirements, and allows optimal use of both the developers' time and the infrastructure resources (when DevOps involves the use of containers).
DevOps is worth adopting whenever you work with an agile methodology. If the application is an entire system, taking the form of a set of applications or microservices, I recommend relying on containers combined with a cluster orchestration technology. But nothing is worth doing just for art's sake, and the same applies to implementing DevOps merely because the approach has recently become trendy.