Data-center managers face the twin challenges of optimizing value to the business while minimizing the costs and risks associated with technology ownership. This challenge is becoming more intense as business performance becomes increasingly contingent on IT excellence. Responses to this challenge have included everything from virtualization and converged systems to agile development and DevOps.
Most data-center managers have not been especially aggressive about including their mainframes in their strategic responses to the value/cost challenge. That is partly because vendor innovation has focused almost exclusively on the distributed/cloud world, and partly because IT organizations tend to view their mainframes as “legacy” environments best left undisturbed so they can run core systems of record with maximum reliability and minimal headaches.
This lack of aggressive innovation is no longer tenable. IT organizations generally—and data-center managers specifically—will be unable to fully achieve their value/cost objectives unless they move quickly and aggressively to transform their approach to mainframe management.
There are three primary reasons IT needs to focus on mainframe innovation now:
- The mainframe isn’t going anywhere. People have mistakenly predicted the demise of the mainframe for years, but the mainframe is here to stay. Market research consistently indicates that companies running core apps on their mainframes plan to continue doing so for years to come. Replatforming makes no economic sense, and the platform itself offers unmatched reliability, security, scalability, performance and cost efficiency. So IT leaders cannot keep subjecting the mainframe environment to neglect that is anything but benign. It’s simply too important to the future of the enterprise.
- Application architectures profoundly link the mainframe to the rest of the IT environment. Today’s developers tend to repurpose existing resources such as databases, components of other applications and business-intelligence (BI) systems. So as they build out mobile apps, big-data analytics and other new IT capabilities, they are inevitably driving new and growing workloads to the mainframe on the back end. Time to benefit, long-term value/cost outcomes and the ability to rapidly iterate application improvements into production thus all depend to a substantial degree on enhanced mainframe management.
- A generational shift is taking place in mainframe stewardship. IT’s most experienced mainframe operators and developers are nearing retirement. The new generation of IT professionals who are taking their place are far less familiar with the underlying technical complexities of the platform. They also have a significantly different work culture. IT’s ability to get optimum economic value from the mainframe is therefore contingent on properly equipping this new generation of IT pros to exercise effective platform stewardship.
Simply put, IT leaders cannot ignore the need to overhaul mainframe management. And they need to get started on this overhaul quickly, because the generational shift is close at hand—and, at most organizations, mainframe modernization has already been ignored for far too long.
The Mainframe Innovation Checklist
So where exactly should IT focus its mainframe modernization efforts? What specific steps can it take to ensure that its mainframe environment can keep pace with the rest of the enterprise?
Some important initiatives to consider are the following:
- Millennial-friendly tool kits. Few Millennials have spent much time rooting around under the hood of IBM’s z/OS operating system or its state-of-the-art z13 hardware. Nor is it reasonable to expect them to develop the same facility with “green screen” management tools as their predecessors, who spent decades refining their mainframe skills. Instead, next-generation mainframe operators must be given intuitive management tools that empower them to discover potential ops issues quickly and easily. This effort requires much more than grafting a Windows GUI onto a decades-old utility application. It requires a full tool-kit refresh that gives platform rookies the ability to rapidly assess platform status, and that may even be intelligent enough to suggest possible remedies.
- Enhanced MSU intelligence. Mainframes offer far lower incremental operating costs than distributed environments, so workload growth on the platform is not inherently problematic. But the pricing structure for IBM mainframe software is based on peak utilization, measured in millions of service units (MSUs). Smart MSU management is thus essential for diligent mainframe cost control. There are basically two ways to minimize MSU consumption. One is to understand how applications running on and off the mainframe drive MSU workloads, and to remediate any inefficiencies in that code. The other is to time-shift workloads wherever practical to “flatten” avoidable utilization peaks. The right tools can help mainframe operators rigorously apply these two disciplines to the data-center environment.
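The peak-flattening discipline described above can be illustrated with a small calculation. IBM's sub-capacity monthly license charges are driven by the peak rolling four-hour average (R4HA) of MSU consumption, so moving a batch job out of the daytime peak can lower the software bill even though total MSU consumption is unchanged. The sketch below uses entirely hypothetical MSU numbers and a simplified hourly granularity; it is meant only to show the shape of the analysis, not any real pricing tool.

```python
# Hedged sketch with synthetic data: how time-shifting a batch job
# can flatten the peak rolling four-hour average (R4HA) of MSU usage.

def rolling_peak(hourly_msus, window=4):
    """Return the peak of the rolling `window`-hour average MSU consumption."""
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# Hypothetical 24-hour online MSU profile: quiet overnight, peaking midday.
online = [200] * 6 \
    + [400, 600, 800, 900, 950, 900, 850, 800, 700, 600, 500, 400] \
    + [300] * 6

batch_msus = 300  # a 300-MSU batch job, illustrative only

# Scenario 1: batch runs at 13:00, stacking on top of the online peak.
before = online[:]
before[13] += batch_msus

# Scenario 2: the same job time-shifted to 02:00, when the system is idle.
after = online[:]
after[2] += batch_msus

print(rolling_peak(before))  # 950.0 -> peak R4HA with batch in the peak
print(rolling_peak(after))   # 900.0 -> flattened peak after time-shifting
```

Total MSU consumption is identical in both scenarios; only the timing changes, yet the billable peak drops. Real-world analysis would work from SMF utilization records at finer granularity, but the underlying logic is the same.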
- Linux and Java virtualization. Most IT organizations have focused on virtualization using commodity servers running VMware and/or Microsoft Hyper-V. And there are certainly workloads that can benefit from this architecture. But the mainframe has always been virtualized. So now that IBM offers the ability to run Linux and Java workloads in its z Systems environment, data-center managers should explore the technical feasibility of doing so. Given the management, security and compliance challenges associated with VM sprawl in the cloud, the economics and low risk of mainframe-based virtualization can be very compelling.
These are just a few of the areas where mainframe-related innovation can advance the goals of data-center managers. It is worth noting that they all relate to optimization of how application code runs in production in terms of cost, performance and risk. In other words, these innovations all support DevOps best practices. That’s a critical consideration for data-center managers who have a mandate to fulfill the “Ops” half of the DevOps equation.
Three Keys to Successful Mainframe Innovation Leadership
The above list suggests only some of the technical innovations data-center managers should consider for their organization’s mainframes. Managers do more, however, than just pick technologies off a list. They also must lead.
Leadership requires ideas and vision. So here are three areas where data-center managers need to have clarity of vision in order to effectively lead the kind of mainframe innovation essential for success in the coming years:
Mainframe innovation leadership principle #1
Clear understanding of the platform
Data-center managers who think of and speak of the mainframe as a “legacy” platform will probably have trouble gaining consensus and support for genuine mainframe transformation. No one (except maybe professional antiquarians) wants to invest much time or treasure in a relic of the past. Data-center managers therefore need to be clear that the mainframe is not “legacy.” It is the most powerful, secure and efficient converged platform in the enterprise. Distributed and cloud platforms cannot match it, despite the billions that have been spent trying. So the mainframe will remain central to the enterprise for years and even decades to come.
Mainframe innovation leadership principle #2
Clear understanding of the platform in the context of hybrid cloud
For years, the mainframe has been perceived as an island or silo of IT capability. Even as the rest of the IT environment—including both on-premises and off-premises Windows and Linux infrastructure—is increasingly understood as a single software-defined “hybrid cloud,” the mainframe is being left out of the equation. Given how applications and data are used in the enterprise, this siloed view is no longer tenable. The mainframe is as much a part of the data-center resource pool as any VM running in Amazon or Rackspace. It just happens to offer greater scalability, better economics and superior security.
Mainframe innovation leadership principle #3
Clear understanding of mainframe people and culture—and the need for transformation thereof
One of the main consequences of allowing the mainframe to operate as a silo over the years has been the inculcation of a highly change- and risk-averse culture. To some degree, this risk aversion has been positive, because it has allowed mainframe teams to rigorously protect core systems of record. But rapid change has become a core element of the IT value proposition, and it is certainly an important aspect of the broader IT culture. The mainframe coterie can no longer be insulated from the need for agility and safe, frequent promotion of code from dev/test to production. Driving this acceleration requires leading a change in culture as well as embracing innovative new tools.
Conversely, distributed/cloud ops teams must be cured of their cultural prejudices against the mainframe. As noted above, their ability to support, scale and secure new application workloads such as mobile transactions and big-data analytics depends on the transformation of the mainframe. They must therefore be enlisted as allies in that transformation effort.
Indeed, it is probably safe to say that operations staffs across all platforms in the hybrid cloud—from the mainframe to the private cloud to the entire public as-a-service resource portfolio—will ultimately have to function as a single, unified organization if IT is to achieve its challenging value/cost objectives.
What’s My Motivation?
With all the other infrastructure/operations initiatives competing for data-center managers’ limited time and resources, the transformation of mainframe ops has to offer some pretty compelling benefits to become a priority item on the strategic to-do list.
And it does. In fact, the benefits of mainframe transformation have been shown to include the following:
Better IT economics. The more efficient you make your mainframe—and the more effectively you employ its superior economics to provision new workloads—the more aggressively you can reduce both IT capex and IT opex. That frees money for additional high-value technology projects.
Faster time to market. IT may be able to move workloads onto the distributed/cloud environment pretty quickly—but it will still experience time-to-market bottlenecks if the mainframe workload processes aren’t equally adaptive.
A competitive digital customer experience. The IT agility of “green field” cloud startups often allows them to upset market incumbents by delivering a superior digital customer experience. By bringing greater agility to the mainframe, incumbents can better maintain competitive parity—and even achieve competitive advantage—in this critical area.
Reduced business risk. In the rush to embrace commodity computing, most companies have exposed themselves to excessive risk in the form of near-epidemic security breaches and downtime. Judicious use of the mainframe enables companies to significantly mitigate these risks—especially in regard to their most sensitive data and most critical applications.
De-commodified human capital. Commodity computing ops skills are themselves rapidly becoming a commodity; as a result, many organizations are outsourcing operations to as-a-service providers and/or MSPs. There is, in marked contrast, a severe mainframe skills shortage. Organizations that successfully attract and develop mainframe talent thus build human capital where it is genuinely scarce and valuable.
Of course, the primary benefit of mainframe transformation is protection of the precious intellectual capital it hosts in the form of core application code. This core code is the digital DNA of the business, so it must be protected at all costs. Replatforming it entails high risk and little, if any, financial payoff.
That’s why data-center managers must focus now on moving the platform forward and navigating the impending generational shift in platform stewardship. Further neglect of this imperative will only hurt IT’s ability to fulfill its responsibilities to the business.
Leading article image courtesy of Richard Hilber
About the Author
Dennis O’Flynn is VP of Technology for Compuware and is responsible for product management and software engineering. He has over 30 years of IT experience with roles in product management, engineering and strategy. These roles have spanned multiple platforms including mainframe, open-systems and mobile. He is passionate about business agility, aligning IT with business objectives to improve the customer experience.