Today’s colocation providers are dealing with continuously evolving technology requirements, emerging trends and increasing competition that represent both a great opportunity and a major challenge for the future. In response, providers are changing their models to best address these developments and ensure they’re prepared to meet their customers’ needs.
On top of the industry shifts that are producing these emerging models, navigating the colocation industry is simply difficult. Providers often appear to deliver the same capabilities and quality of service, making it challenging for customers to differentiate one from another and, ultimately, decide which best meets their infrastructure needs.
Customers should keep the broader industry backdrop in mind when assessing colocation providers to better understand what they’re dealing with. Here’s a look at some of the sweeping trends that are affecting the colocation industry.
- Shifting technology requirements. Providers are seeing some of their clients demand less electrical and mechanical redundancy (2N going to N+1 or sometimes even N) as IT and application development have taken on resiliency and redundancy themselves. We’ve discussed higher rack density for years, but the industry average is still approximately 6–8kW per rack. Now we’re seeing a push toward even higher density to support more-compute-intensive workloads and applications brought on by the advent of the cloud, big data and IoT, as well as the promise of 5G. To meet client needs while keeping the flexibility to handle shifting technology requirements, colocation providers are adopting “flexible yet repeatable” builds (prefab, flexible-modular power, white-space containment pods). This development is great for colocation customers, who can now find a provider that meets their current needs and can adapt as requirements shift.
- Spread of the Internet giants. Large cloud providers, including Internet giants such as Google, Microsoft and Amazon, are consuming vast amounts of space and driving even more demand for colocation services. Although this trend is good for colocation providers, as it absorbs available capacity and drives future builds, it’s also changing some requirements. A high-level example is how an Internet search engine or content provider may have different physical-infrastructure-redundancy requirements than a financial institution. This supply-and-demand dynamic is driving the industry back toward more “speculative” builds.
- Increasing competition. Many providers are increasingly facing cost pressures, which can often lead them to undercut current offerings to edge out the competition. More-progressive providers are instead finding ways to further optimize their data center business while delivering an outstanding result and customer experience. Matching a provider to your business and IT needs, risk appetite, and requirements is easier than ever, and many providers are willing to collaborate and establish a right-size approach to your outsourcing needs. Increased competition drives good things for both supplier and consumer, as the service better fits the need. Although customers face these challenges as they try to find the right colocation provider, providers are simultaneously trying to balance operational change with innovation.
What to Ask Before You Sign: Best Practices
As a prospective colocation customer, you must look beyond the basic variables of price and footprint. Instead, develop a strong understanding of the current state of a provider’s business and operations. Probing under the facility’s hood will help you choose the provider that best meets your needs and avoid any surprises after you’ve signed SLAs.
- Transparency. When evaluating a prospective colocation provider, transparency is critical. Customers need as much visibility as possible into a provider’s operational environment. For example, does the provider have a formal change-management process? How about an incident-response team? Does it conduct mock emergency drills? How often does the provider perform maintenance on critical infrastructure, and what does it entail? Are maintenance reports available on request? If the answer to these questions is no, or the provider is unable or unwilling to share this information, it should raise a red flag. As a customer, you want a provider that’s open about its operations and has the documentation to prove it doesn’t just talk the talk, but also walks the walk. Transparency also strengthens the partnership, as the provider and customer are working toward mutual goals.
- Built-in controls. Another factor to focus on is capacity management. Whether it’s a utility transformer, a generator plant, a fuel supply or cooling plants, understanding how the provider manages capacity is critical. Two questions matter: how many customers share a given resource, and how many of them could affect me? Oversubscribing power and cooling resources (on the basis of actual usage) can be a dangerous strategy and sometimes remains an unknown threat until something happens. This situation puts a customer’s IT infrastructure at risk, so operators should have built-in controls for managing and reporting power, cooling and even space capacity.
- Managing latency. Additionally, connectivity and latency are critical to the business application you’re looking to outsource, so location matters. The ability to spin up a disaster-recovery site is also a crucial factor to consider: is it the business (meaning compute, network and storage) that has to move, or is it people, or both? Because network latency strongly affects some applications, customers should know which applications they want to move to colocation and what their latency requirements are to ensure the provider can fulfill those needs.
- Network availability. Work closely with your network team to clearly understand network requirements, and work closely with your IT team to understand your organization’s cloud strategy. You can then reference PeeringDB and similar resources to see which networks are available and built into data center locations. PeeringDB will also identify available peering exchanges that can meet network-traffic flow requirements, and it will identify which sites have direct connectivity to large public-cloud environments.
- Customer portals. One common afterthought that can shine a light on several of these areas is the provider’s customer portal. A well-designed customer dashboard offers an aggregate service view, performance monitoring and measurement, and capacity statistics that track or monitor the work being done. As such, customers should request a demo to determine the assets, statistics and operational information they’ll have access to.
- Sustainability. Sustainability goals are increasingly common among corporations, and for many of them the data center will be a major contributor to overall power consumption. The power mix available to your data center provider will directly affect your organization’s ability to achieve its corporate sustainability goals.
- Amenities. Computers aren’t the only data center denizens. It’s easy to overlook the importance of easy access to office space, secure storage locations, and staging and burn-in areas, as well as the proximity of lodging and restaurants. Your IT staff may spend considerable time at the site or possibly even require a permanent presence. Ensuring that they have an adequate work environment will improve the site’s efficiency.
- Customer support. As is true in many industries, if a data center sales organization appears slow to respond and hard to work with, you can be sure the experience is unlikely to improve once you’re a customer. A mature organization that has developed a culture of customer support will demonstrate it in all of your interactions.
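To make the capacity-management point above concrete, here is a minimal sketch (using hypothetical numbers, not any provider’s actual tooling) of how oversubscription of a shared power plant can be quantified. A ratio above 1.0 means the provider is betting that customers never draw their full contracted load simultaneously.

```python
# Illustrative only: quantifying oversubscription of a shared resource
# such as a power or cooling plant.

def oversubscription_ratio(contracted_kw, plant_capacity_kw):
    """Ratio of total contracted power to the plant's usable capacity.

    Above 1.0, the provider is relying on customers not drawing their
    full contracted load at the same time.
    """
    return sum(contracted_kw) / plant_capacity_kw

# Hypothetical example: ten customers contracted at 100 kW each
# sharing an 800 kW plant.
ratio = oversubscription_ratio([100] * 10, 800)
print(f"oversubscription: {ratio:.2f}x")  # 1.25x
```

Asking a prospective provider for exactly these numbers, per shared resource, is a quick way to test how transparent its capacity reporting really is.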
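The latency point above can be sanity-checked with simple arithmetic: light in fiber travels roughly 200 km per millisecond, so route distance alone sets a floor on round-trip time. This sketch (with a hypothetical route distance) computes that floor; real latency adds routing, queuing and equipment delays on top.

```python
# Back-of-the-envelope propagation delay between users and a
# candidate colocation site. Treat the result as a lower bound.

FIBER_KM_PER_MS = 200.0  # light covers ~200 km per millisecond in fiber

def min_round_trip_ms(route_km):
    """Best-case round-trip time for a given fiber route distance."""
    return 2 * route_km / FIBER_KM_PER_MS

# Hypothetical 400 km fiber route: at least 4 ms RTT before any
# other overhead.
print(f"minimum RTT: {min_round_trip_ms(400):.1f} ms")
```

If an application’s latency budget is tighter than this floor for a given site, no amount of network engineering will make that site work.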
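The PeeringDB lookups described above can also be scripted against PeeringDB’s public REST API. The sketch below builds a query URL for the facility endpoint and pulls names out of a response payload; the endpoint and `city` filter follow the documented API, but verify parameters against the current PeeringDB docs, and note the sample payload here is fabricated for illustration.

```python
# Sketch of querying PeeringDB's public REST API for facilities in a city.
from urllib.parse import urlencode

PEERINGDB_API = "https://www.peeringdb.com/api"

def facility_query_url(city):
    """URL listing PeeringDB facility records for a given city."""
    return f"{PEERINGDB_API}/fac?{urlencode({'city': city})}"

def facility_names(response_json):
    """Extract facility names from a PeeringDB-style response payload."""
    return [rec["name"] for rec in response_json.get("data", [])]

# Offline example using the general response shape (a "data" list of
# records); these facility names are made up.
sample = {"data": [{"name": "Example Facility One"},
                   {"name": "Example Facility Two"}]}
print(facility_query_url("Dallas"))
print(facility_names(sample))
```

A few lines like this let a network team compare candidate sites by available networks and exchanges before ever talking to a salesperson.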
Choosing a colocation provider is a crucial decision for many organizations and will either help or hurt them in achieving their business and operational goals. The right provider will be the one that listens and understands your business goals and that works to help you get there.
About the Authors
Casey Vanderbeek is Director of Customer Technical Solutions for Infomart. Casey joined Infomart Data Centers in 2016 to build out its design and implementation disciplines. He’s an expert in understanding customer requirements and identifying a solution to meet their needs. Casey has a 20-year history in supporting, planning and managing IT initiatives. Immediately before joining Infomart, Casey spent 10 years with ViaWest, where he was instrumental in building the Oregon market and was the lead solution engineer for the Pacific Northwest and Northern California. His experiences in managing IT teams and corporate data center initiatives have enabled him to address the real-world concerns of Infomart’s customers. Casey has a BA in business management from the University of New Mexico.
Joe Reele is Vice President of Datacenter Solution Architects for Schneider Electric. Joe is responsible for bringing together the full suite of Schneider Electric products and services to provide unparalleled right-size, innovative and complete solutions to customers. He influences and inspires a large team of highly talented professionals to provide cutting-edge designs, innovative solutions and operational models for mission-critical facilities/operations. Joe has more than 10 years of hands-on experience in strategic planning, business-unit development and growth, project development, operations management, and systems-engineering strategies. He provides strong technical and leadership skills and a proven ability to analyze an organization’s critical operational business requirements, identify deficiencies and potential opportunities, and develop innovative and cost-effective solutions to meet customers’ business objectives.