Success Stories

Instantiating DevOps

2 January 2019


By Jorge Pais

When reading about DevOps, we often see phrases like “DevOps is not a tool but a culture to adopt…”. While this is largely true, it is also important to keep in mind that, to succeed in adopting this culture, the tools we end up using will be decisive.

This is a rather complex decision to make, particularly because of the overwhelming variety of tools available for each of the different DevOps processes. In many cases, this complexity undermines our efforts to adopt the culture in our companies.

In this article we aim to show some industry-tested tools that can act as accelerators in the adoption of DevOps. Each category summarizes what we believe are the top three tools, according to our experience and usage across different projects. This is by no means a complete list of tools; we suggest using it as a trigger for curiosity.


Every Agile project needs a good set of collaboration and communication tools. There are more than enough of these, so it’s important to find one that provides just the right mix of functionalities and complexity that the team needs and feels comfortable with.


In our experience, the combination of Jira & Confluence provides a rich set of tools for agile project management, requirements specification, knowledge management and communications, that integrates seamlessly with several other tools that support DevOps processes.

Slack is a messaging platform on steroids. It is very attractive because of the large number of supported plugins, which allow it to integrate, for example, with CI/CD pipelines and monitoring systems to receive real-time notifications.
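For instance, Slack's incoming webhooks accept a simple JSON payload with a "text" field. As a sketch, a build notification could be assembled like this (the job name and URL are made up, and actually delivering it would be a separate HTTP POST to your webhook URL):

```python
import json

def build_notification(job: str, status: str, url: str) -> str:
    """Build a JSON payload for a Slack incoming webhook."""
    emoji = ":white_check_mark:" if status == "success" else ":x:"
    payload = {"text": f"{emoji} Build *{job}* finished with status `{status}` ({url})"}
    return json.dumps(payload)

# Hypothetical CI job and build URL.
print(build_notification("api-service #42", "success", "https://ci.example.com/builds/42"))
```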

Trello is a project management tool based on Kanban. If Jira’s feature set becomes overwhelming, Trello is a nice and complete alternative.



This choice is mostly subjective and depends on the preferences and habits of each team member. In fact, in our team we use all three suggested IDEs interchangeably. They are multi-platform (macOS/Linux/Windows) and have lots of plugins that integrate with other tools and adapt them to the particularities of each language (syntax highlighting, CLIs, snippets, etc.).


Atom and Visual Studio Code are free and open source, while Sublime Text offers a full-featured trial period (after which a license must be purchased).



It is essential on this path to choose a tool that allows us to manage our source code (SCM). For a development team this is pretty obvious, and it’s done from day one. For those of us who come from infrastructure, it can be a huge paradigm shift.


Git, a distributed version control system, has become the de facto standard in the industry; therefore, all the tools we propose are based on it. They all have free and paid versions, as well as the option to use them as SaaS or on-premises.

The DevOps team’s choice must consider the tool’s capacity to integrate with the others used during the application lifecycle. For example, it is important to have traceability between the code we write and the requirement that originated the change. The SCM should also be able to notify CI tools of events that modify the code, triggering automated build tasks (for example, by using webhooks).
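As a sketch of that webhook handshake, a CI endpoint can verify a GitHub-style HMAC signature before trusting a push event. The secret and payload below are hypothetical; GitHub sends the signature in the X-Hub-Signature-256 header:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub-style 'sha256=<hex>' webhook signature before acting on the event."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"shared-ci-secret"          # hypothetical secret configured in the SCM
body = b'{"ref": "refs/heads/main"}'  # raw request body of the push event
sig = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))  # True for a genuine event
```

Using `hmac.compare_digest` instead of `==` avoids timing attacks when comparing the signatures.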




One of the many concepts introduced by the DevOps culture is that server configuration should be managed as code (CaC – Configuration as Code). That is, any change to the base OS configuration must be done through idempotent scripts that not only set the desired state but also prevent deviations from it.
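To illustrate the idempotency idea, here is a minimal Python sketch that ensures a configuration line is present in a file. Running it again once the state is reached makes no further change; the file name and setting are hypothetical:

```python
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Idempotently ensure `line` is present in `path`; return True if a change was made."""
    lines = path.read_text().splitlines() if path.exists() else []
    if line in lines:
        return False          # already in the desired state: do nothing
    lines.append(line)
    path.write_text("\n".join(lines) + "\n")
    return True

cfg = Path("sshd_config.sample")               # hypothetical config file
print(ensure_line(cfg, "PermitRootLogin no"))  # True on a fresh run (state changed)
print(ensure_line(cfg, "PermitRootLogin no"))  # False afterwards (already converged)
```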


The proposed tools fit this definition, but they differ in how behaviors are implemented. A key concept to consider is how managed servers obtain their configuration. One option is that a local agent running on each client pulls configurations from a master repository; another is that a controller server pushes configurations to the target servers. Each approach has advantages and disadvantages, which should be examined beforehand. A side-by-side feature comparison of the tools will also shed more light on CaC concepts.



Cloud computing platforms, besides providing a huge set of interesting tools and features, can act as accelerators for developing IT solutions. Not just for coding efforts, but also for designing complex networking and server architectures. When our on-premises sites are short on computing resources, moving to the cloud for prototyping allows us to bypass hardware availability bottlenecks.


Cloud providers also expose very expressive management APIs to control and monitor resources. This aligns with the DevOps concept of coding infrastructure resources (IaC – Infrastructure as Code).
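The core of IaC can be sketched as a diff between the state declared in code and the state reported by the provider’s API. This toy `plan` function (resource names and attributes are hypothetical) mimics, in miniature, the kind of plan IaC tools compute:

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute what an IaC tool would do: create, update, or delete resources
    so the real infrastructure converges to the declared state."""
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "delete": sorted(actual.keys() - desired.keys()),
        "update": sorted(k for k in desired.keys() & actual.keys() if desired[k] != actual[k]),
    }

desired = {"web": {"size": "t3.small"}, "db": {"size": "t3.medium"}}  # declared in code
actual = {"web": {"size": "t3.micro"}}                                # reported by the cloud API
print(plan(desired, actual))  # {'create': ['db'], 'delete': [], 'update': ['web']}
```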

When choosing a cloud provider, the first thing to evaluate is each one’s product offering. Though very similar (and constantly changing), it may be that only one provides the service you’re looking for.
Also check compliance and governance regarding data usage, especially if your application may be affected by PCI, SOX, or other regulations.

Last but not least, check the prices. They vary widely depending on the provider, the service, and even the geographical region.




Current agile development methodologies demand maximizing the release cadence of new features, reducing time-to-market, and minimizing downtime. DevOps tackles this by automating code builds and artifact deployment.
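Conceptually, a CI server runs an ordered set of build stages and stops at the first failure. A minimal sketch, with hypothetical stage commands standing in for real compilers and test suites:

```python
import subprocess
import sys

def run_pipeline(stages: dict) -> str:
    """Run build stages in order; stop at the first failing command, as a CI server would."""
    for name, cmd in stages.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return f"failed at {name}"
    return "success"

# Hypothetical stages; each value is the command to run.
stages = {
    "build": [sys.executable, "-c", "print('compiling...')"],
    "test": [sys.executable, "-c", "assert 1 + 1 == 2"],
}
print(run_pipeline(stages))  # success
```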


Jenkins is a highly customizable, open-source CI/CD system. It’s widely used and has many contributors creating plugins that integrate it with other systems. There’s also a paid enterprise version.

Microsoft’s Visual Studio Team Services is a SaaS version of Team Foundation Server, a well-known code management solution for .NET. It underwent a major overhaul and became a highly adaptable, multi-language system not just for CI/CD but also for project management. It has a free tier with full functionality for small projects.

JetBrains’ TeamCity is a very powerful CI/CD system. It integrates nicely with several SCMs, communicates natively with cloud providers (allowing direct deployment), and has a built-in code quality tracker that can be used as part of the build process. Its free Professional edition is great for starting out and is mainly limited by the number of build configurations.



Containers provide a self-contained execution environment that has proved to greatly simplify the application development cycle, particularly code integration and deployment.

An important piece of software when using containers in production is an orchestrator: a component that manages a highly available cluster of container runners.
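At its heart, an orchestrator runs a control loop that reconciles the declared replica count with what is actually running. A toy sketch of one reconciliation pass (container names are hypothetical):

```python
def reconcile(desired_replicas: int, running: list) -> list:
    """One pass of an orchestrator's control loop: start or stop containers
    until the running set matches the declared replica count."""
    running = list(running)
    while len(running) < desired_replicas:
        running.append(f"web-{len(running)}")  # schedule a new container (hypothetical naming)
    while len(running) > desired_replicas:
        running.pop()                          # scale down
    return running

print(reconcile(3, ["web-0"]))                # ['web-0', 'web-1', 'web-2']
print(reconcile(3, reconcile(3, ["web-0"])))  # stable: reconciling again changes nothing
```

Real orchestrators run this loop continuously, so the cluster self-heals when a container dies.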


Each of these options defines a particular paradigm for managing Docker containers. Do your research before choosing one, as the choice will definitely impact how you define your project’s other DevOps processes. It is also worth reading panel discussions among real-world Docker users before committing.



Ok, so now your app is running and there’s a nice delivery pipeline. Is that all? Of course not. After going live, we need metrics to monitor the state of the application and act fast when problems arise. Moreover, we want to know about problems before they happen, and for that we need enough data to help us predict future failures.


Application Performance Monitors (APMs) are great at providing deep, real-time information about our application’s state. To gain better insight, look for a tool that integrates with the language and framework used for development.
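As a toy illustration of threshold-based alerting (real APMs do far more), a sliding-window error-rate check might look like this; the window size and threshold below are arbitrary:

```python
from collections import deque

class ErrorRateMonitor:
    """Track request outcomes in a sliding window and flag an alert when the
    error rate crosses a threshold: the basic idea behind APM alerting rules."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one request outcome; return True if the window is now in an alert state."""
        self.samples.append(ok)
        errors = self.samples.count(False)
        return errors / len(self.samples) > self.threshold

monitor = ErrorRateMonitor(window=10, threshold=0.15)
alerts = [monitor.record(ok) for ok in [True] * 8 + [False] * 2]
print(alerts[-1])  # True: 2 errors out of 10 exceeds the 15% threshold
```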

We hope that this summary of tools helps you and your team on the DevOps journey!
