The term cloud means something different to everyone. For this article, we’ll define it in relation to infrastructure: the cloud is the ability to automatically provision a subset of available compute, network and storage to meet a specific business need through virtualization (IaaS).
As for applications, the cloud means browser-based access to software (SaaS) and, importantly, a utility-based consumption model in which you pay only for the services you use, a model that has disrupted traditional technology businesses.
This shift is the latest turn in a familiar evolution: the mainframe morphed into minicomputing, which gave rise to the client-server model, and client-server in turn is giving way to cloud computing and providers such as Amazon Web Services (AWS). The ubiquity of the cloud is the next phase in IT’s evolution, one in which applications, data and services move out of the enterprise data center toward the edge.
A CIO wanting to reduce IT spending and mitigate risk has many options:
- Move budget and functions directly to the business (shadow IT) and empower the use of the public cloud
- Move to a managed service (a private cloud for the skittish)
- Create a private cloud with the ability to burst to a public cloud (i.e., hybrid cloud)
- Move entirely to a public-cloud provider managed by a smaller IT department
Each of these options has pros and cons. Given all the database options, determining which is the best solution for an enterprise can be difficult.
The three issues most central to an organization’s database needs are performance, security and compliance. So what are the best database-management practices for each deployment option to manage those priorities?
Let’s briefly examine five deployment options for your enterprise database strategy: on-premises/private cloud, hybrid cloud, public cloud, appliance-based and virtualized.
On-Premises/Private Cloud
One of the main pros of this type of database deployment is that an enterprise will have control over its own environment, which can be customized to its business needs and use cases. This approach boosts trust in the security of the solution, as IT and CIOs own and control it.
Another consideration is a user’s location relative to the data, which can affect legacy applications. Latency can be an issue when users on the other side of the globe from the company access data, particularly through mobile devices, resulting in an overall poor user experience.
Another con is capex. Traditionally, the break-even point for an on-premises deployment (hardware, software and all required components) is about 24 to 36 months, which can be too long for some organizations. Storage costs can also get expensive.
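To make the math concrete, here is a minimal break-even sketch in Python; every dollar figure is invented for illustration, and a real analysis would also cover staffing, facilities and hardware-refresh cycles:

```python
# Hypothetical break-even sketch: on-premises capex plus monthly opex
# versus an equivalent pay-as-you-go cloud subscription. Every figure
# below is invented for illustration only.

onprem_capex = 500_000        # hardware, software, licenses (one-time)
onprem_opex_monthly = 8_000   # power, cooling, support staff
cloud_monthly = 25_000        # comparable cloud subscription

def break_even_month(capex, opex, cloud, horizon=60):
    """First month at which cumulative on-prem cost drops below
    cumulative cloud cost, or None within the horizon."""
    for month in range(1, horizon + 1):
        if capex + opex * month < cloud * month:
            return month
    return None

print(break_even_month(onprem_capex, onprem_opex_monthly, cloud_monthly))
# With these numbers the answer is 30, inside the 24-to-36-month
# window described above.
```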
A feature that could be a pro or a con, depending on how you look at it, is that IT will be more heavily involved, which can sometimes slow an enterprise’s time to market.
Before moving to an on-premises/private-cloud database, examine the expected ROI: if your acceptable payback horizon is more than two or three years, this option is justifiable, but that horizon may not suit every organization.
Perceived security and compliance are other considerations. Some industries, such as financial services and health care, have security regulations that require strict compliance. Countries such as Canada, Germany and Russia are drafting stricter data-residency and data-sovereignty laws that require data to remain in the country to protect citizens’ personal information. Doing business in those countries while housing data in another would violate those laws.
Security measures and disaster recovery both must be architected into a solution as well.
Hybrid Cloud
A hybrid cloud is flexible and customizable, allowing managers to pick and choose elements of a public or private cloud. The biggest advantage of the hybrid approach is “cloud bursting.” A business running an application on premises may see a spike in data volume at certain times of the month or year; with a hybrid cloud, it can “burst” to the public cloud for extra capacity when needed, without purchasing capacity that would otherwise sit unused.
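The decision logic behind bursting is simple in principle. The sketch below is a minimal illustration; the capacity figure, the 80 percent threshold and the two routing helpers are assumptions for demonstration, not any provider’s real API:

```python
# Minimal cloud-bursting sketch. The capacity figure, the 80 percent
# threshold and the two routing helpers are illustrative assumptions,
# not any provider's real API.

PRIVATE_CAPACITY = 1000   # requests/sec the private cloud can absorb
BURST_THRESHOLD = 0.8     # overflow to the public cloud above 80% load

def send_to_private_cloud(request):
    return f"private:{request}"   # stand-in for real dispatch logic

def send_to_public_cloud(request):
    return f"public:{request}"    # stand-in for real dispatch logic

def route(current_load, request):
    """Keep work on premises until utilization nears capacity,
    then 'burst' the overflow to the public cloud."""
    if current_load / PRIVATE_CAPACITY < BURST_THRESHOLD:
        return send_to_private_cloud(request)
    return send_to_public_cloud(request)

print(route(400, "report-job"))   # -> private:report-job
print(route(950, "report-job"))   # -> public:report-job
```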
A hybrid cloud lets an enterprise self-manage an environment without relying too much on IT, and it provides the flexibility to deploy workloads depending on business demands.
More importantly, disaster recovery is built into a hybrid solution, removing a major concern. A hybrid cloud can also mitigate some constraints of data-sovereignty and security laws: some data can stay local while the rest goes into the cloud.
One con of a hybrid cloud is that integration is complicated; grafting an on-premises environment onto a public cloud adds complexity that can lead to security issues. A hybrid cloud can also lead to sprawl, in which the computing resources underlying IT services grow unchecked and exceed what users actually need.
Although the hybrid approach offers the flexibility to combine the current data center environment with best-in-class SaaS offerings, it’s important to have a way to govern and manage sprawl. Equally important is architecting a data-migration strategy into the hybrid cloud; doing so reduces complexity while enhancing security.
Public Cloud
The main advantage of the public cloud is its almost infinite scalability. Its pay-as-you-go cost model is another. It offers faster go-to-market capability and favors newer applications; running legacy applications in the cloud can be challenging.
As in a hybrid cloud, sprawl can be a problem in the public cloud. Without a strategy to manage and control a public-cloud platform, costs can spiral and negate the expected savings and efficiency. Keep in mind, too, that the public cloud may open the door to shadow IT, creating a security issue.
Data visibility is another downside: once data goes into a cloud, it can be hard to determine where it actually resides, and sovereignty laws can come into play for global enterprises. Trust in the public cloud remains an issue for CIOs and decision makers, which is why the hybrid model, the best of both worlds, is so popular.
Public clouds also tend to be homogeneous by nature; they are meant to satisfy the needs of many different enterprises (versus an on-premises environment, which is designed for just one company). Customization can therefore be a challenge.
Although a public cloud is opex friendly, it can get expensive after the first 36 months. Keep total cost of ownership in mind when deploying a workload: its lifecycle, its overall cost benefit and how the true cost of the application will be tracked.
Latency issues can occur depending on how an enterprise has architected its public cloud and deployed its applications or infrastructure, greatly affecting the user experience. To improve performance, distribute apps and data close to the user base rather than taking the traditional approach of putting everything in one data zone.
Disaster recovery is built in, so there is no need for an enterprise to architect it on its own. Security in a public cloud is always a challenge, but it can be mitigated through proper measures such as at-rest encryption and well-thought-out access-management tools and processes.
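As an illustration of at-rest encryption, the sketch below uses the open-source Python cryptography package; key handling is deliberately simplified, and in production the key would live in a key-management service rather than in application memory:

```python
# Minimal at-rest encryption sketch using the open-source `cryptography`
# package (pip install cryptography). In production the key would come
# from a key-management service, not be generated in application memory.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # symmetric key, base64-encoded
cipher = Fernet(key)

record = b"customer_id=42,region=EU"
token = cipher.encrypt(record)       # ciphertext safe to write to cloud storage
assert cipher.decrypt(token) == record   # round-trips on read
```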
Appliance-Based
Traditionally, appliance databases are an on-premises solution, either managed by a vendor or run in an enterprise’s own data center. Many popular vendors offer this option, and having one vendor control the complete solution can yield performance and support gains.
But this approach can also be a disadvantage, because it locks an enterprise into a single vendor, and appliance-based databases tend to be a niche, use-case-specific option. Vendor selection is essential to ensure the partnership will work both now and in the future.
Appliance databases, because of their specialized, task-specific nature, are expensive up front, though they can be cost-effective over time if deployed properly.
Virtualized
One advantage of virtualization is the ability to consolidate multiple applications onto a given piece of hardware, leading to lower costs and more efficient use of resources. The ability to scale is built into a virtualized environment, and administration is simple, with a number of existing tools for administering the environment.
With virtualization, patching can sometimes be an issue: each guest OS sits on top of the hypervisor, and IT may have to patch each VM separately on each piece of hardware.
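For illustration, patching every guest individually might be scripted as below; the hostnames and package commands are placeholders, and real environments would typically drive this from an inventory with configuration-management tooling:

```python
# Sketch of patching each VM individually. The hostnames and the
# apt-get command are placeholders; real environments would typically
# use configuration-management tooling (Ansible and the like) instead.

import subprocess

VM_HOSTS = ["db-vm-01", "db-vm-02", "app-vm-01"]  # hypothetical inventory

for host in VM_HOSTS:
    # Unlike containers sharing one kernel, every guest OS is patched on its own.
    result = subprocess.run(
        ["ssh", host, "sudo apt-get update && sudo apt-get -y upgrade"],
        capture_output=True, text=True,
    )
    print(f"{host}: {'ok' if result.returncode == 0 else 'FAILED'}")
```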
It’s best to plan for a higher initial capex, because the cost of installing the database platform must be accounted for. An enterprise can opt for an open-source hypervisor such as KVM, but that choice often brings additional setup expenses.
A con is that the underlying hardware becomes a single point of failure: if a host fails, its VMs go down with it. Fault-tolerant disaster recovery is a major concern and must be well architected.
Network-traffic issues can arise because multiple applications share the same network card. The server an enterprise employs must be purpose built for the virtualized environment.
Virtualization is ideal for repurposing older hardware, to an extent, because IT can consolidate many applications onto machines that might otherwise have been written off. It’s also well suited to clustering; being able to cluster multiple VMs across multiple servers is a critical benefit for disaster recovery.
Virtualization comes with an initial capex, but opex decreases over time because of consolidation and automation, so lower operational expenses lead to a quicker return and a lower total cost of ownership. Licensing costs, however, can get expensive.
An enterprise can also achieve better data center resource utilization: the smaller footprint saves on the cost of running servers and allows multiple virtual databases to be hosted on the same physical machine while maintaining complete isolation at the operating-system layer.
Selecting the Right Database
Selecting a deployment option isn’t a trivial matter, so how can a CIO mitigate the risk of choosing one over another? Cost can’t be the only driver.
Just as the mainframe eventually gave way to the cloud, enterprises can succeed by enabling a simple path from legacy on-premises databases to a private cloud with APIs into the public cloud. The public cloud can then connect a legacy architecture to mobile, IoT and artificial intelligence, serving as a launching pad for a hybrid-cloud architecture built on best-in-class public-cloud services: storage, applications and so on.
Every enterprise has its own challenges, goals and needs, and there is no one-size-fits-all recommendation when selecting a database. Carefully examine your own infrastructure as well as ROI expectations, long-term business goals, sovereignty laws, IT capabilities and resource allocation to determine which of these deployment options is right for your enterprise, both now and years down the line.
About the Author
Franco Rizzo is a Senior Solution Architect with TmaxSoft, a leading global technology innovator focusing on the cloud, infrastructure and legacy modernization. He has more than 20 years of experience delivering best-in-class solutions in such dynamic segments as the cloud, virtualization and service-oriented architecture (SOA). He has launched strategic initiatives that have defined such future states as hybrid cloud, workload deployment and software-defined data centers. Franco has worked for Oracle and Kraft Foods (now Mondelez), and his client list includes Pfizer, The Northern Trust and Nike. Franco holds a Bachelor of Science in Engineering from the Wentworth Institute of Technology (Boston, MA) and has completed coursework toward his Master of Science in Engineering at Tufts University.