The defining characteristic of software is that it’s soft. For example, contrast the flip phone with the smartphone. If you wanted to change the color of a flip-phone key, which is a physical piece of molded plastic, you would need to change the manufacturing process. From idea to market implementation, it would take weeks—if not months. A smartphone, however, displays its keys using software, and a change of that scope is just one line in a configuration file. From idea to market implementation, the change would take hours or even minutes.
So why should a data center professional care?
Because in 2017, every business is a software business, and the people who use your data center desire speed above all else. To them, speed means agile software methods and rapid iterations, and the most efficient way to find the best ideas is to release software as often as possible. Doing so increases the chance that they’ll find more hits per year than their competition, translating into more company revenue.
And that’s why DevOps and the cloud are important together: to give them the speed they crave.
A Day in the Life of a Developer
Asking developers to create a trouble ticket to launch a virtual machine is an invitation for them to go swipe their corporate AmEx at AWS. If you want to actually employ that shiny, well-managed hardware that fills up your data center, you have to make it easy for customers—namely developers—to consume.
The life of a developer typically revolves around two-week sprints that focus on implementing a specific set of functions or fixing bugs from a prioritized list. The list of items to be completed is maintained and organized by someone called a “scrum master,” and each developer on a team takes an issue and completes it before moving on to the next one.
The term “completes” deserves more detail, though. It involves setting up an environment that sufficiently resembles production to be viable for the task at hand, and then writing automated tests for the new feature. When those tests pass, the developer knows the work is complete. This approach is called “test-driven development.” With the environment created and the tests written, the developer gets down to the business of writing the code that implements the new function, typically by breaking the problem into smaller and smaller pieces, coding each piece, and deploying the pieces to the development environment.
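The test-first rhythm described above can be sketched in a few lines. This is a minimal illustration, not any particular team's code: the feature (formatting a phone number) and the function name are hypothetical, and a real project would use a test runner such as pytest or unittest rather than calling the test by hand.

```python
# A minimal test-driven development sketch. In practice the test
# below is written FIRST and fails until the function exists;
# the feature and names here are hypothetical examples.

def test_formats_ten_digits():
    assert format_number("4155551212") == "(415) 555-1212"

def format_number(digits: str) -> str:
    """Format a 10-digit string as (XXX) XXX-XXXX."""
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

# Run the test by hand; a runner like pytest would normally
# collect and execute it automatically.
test_formats_ten_digits()
```

The passing test is the developer's signal that this small piece of the feature is done, which is exactly the cycle the next paragraph describes.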
At first, all the tests will fail. But as more of these cycles of coding individual pieces are completed, more of the tests pass; eventually they all pass, indicating the work is complete. The code is then checked into a source-code-control system such as Git, where automation will pull the new code, deploy it in a staging environment (and perhaps create an entirely new staging environment) and execute the tests not just for this new function but for all prior pieces as well. If all those tests pass, the code may get batched up as part of a manual release. Alternatively, other automation will immediately deploy it to production, depending upon how the team operates.
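The check-in automation just described amounts to a simple gate: pull the new commit, run the entire suite, and release only if everything is green. The sketch below shows that control flow only; every function is a hypothetical stand-in for real CI tooling (Jenkins, GitLab CI, and so on), not an actual API.

```python
# A sketch of the CI gate described above. The pull/run_tests/deploy
# callables are hypothetical stand-ins for real CI tooling.

def run_pipeline(pull, run_tests, deploy):
    """Run one CI cycle; deploy only if the whole suite passes."""
    commit = pull()              # fetch the newly checked-in code
    results = run_tests(commit)  # run the new tests AND all prior ones
    if all(results):
        deploy(commit)           # continuous-deployment path
        return "deployed"
    return "blocked"             # any failing test stops the release

# Usage with stubbed-out steps:
status = run_pipeline(
    pull=lambda: "abc123",
    run_tests=lambda commit: [True, True, True],
    deploy=lambda commit: None,
)
```

Teams that batch releases manually would replace the `deploy` step with a step that queues the commit for a release manager instead.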
Many Cycles, Minimal Waiting
This cyclical process is designed to build small pieces of code into a full-fledged feature, and any wait time that’s injected into the cycle is detrimental to efficiency and developer morale. Imagine taking responsibility for a new feature and trying to create a development environment for your code only to have to wait an entire day while a ticketing process provisions the virtual machines for that environment. That’s lost productivity that slows down the cycle.
Now instead imagine a process whereby a new environment can be created in minutes with virtual machines or in seconds with containers. That scenario enables developers to get to the core of their job more quickly: writing the code. By minimizing that wait time, their efficiency and morale increase. When they cannot get that minimal waiting time from their own data center, they turn to public-cloud alternatives.
What DevOps Success Looks Like
DevOps, then, is the automation of setting up the environments developers need during their development and deployment cycles, minimizing their wait time and allowing more iterations over the code base. Given that these environments go up and down all the time, they’re natural allies of cloud-based consumption, but if you press developers on their preference for public versus private cloud, they’ll likely tell you that speed matters more than specifics.
With that in mind, a successful DevOps implementation makes use of the cloud to instantly bring up and down the resources needed to support the various environments involved in the development and deployment processes. Integrating security, monitoring and other aspects of the environments that the data center operations staff cares deeply about is critical, but not at the expense of speed. Failure to automate those admittedly important aspects of managing virtual machines leaves developers with little choice but to seek outside resources that provide the speed their management demands.
For years, developers and operations staff clashed, pointing fingers at each other during production escalations when the real culprit was the processes that built walls between them. In years past, IT operations staff had a monopoly on hosting options for the software that developers were creating, but the public cloud changed all that, ushering in an era of automatic environment creation that became the new normal for developers. Data center operations teams can still play that same game by injecting themselves into development processes through DevOps automation, though. Doing so is not only possible but mandatory if they want the attention of the development teams so closely tied to company revenue.
About the Author
A tech-industry veteran of more than 20 years, Pete Johnson is the Technical Solutions Architect for Cloud in the Global Partner Organization at Cisco Systems. He is on Twitter at @nerdguru.