Drawing the PCI Demarcation Line in the Sand

July 18, 2012

The Payment Card Industry (PCI) Data Security Standard (PCI-DSS) is perhaps the most powerful force influencing IT department budgetary spending since the outbreak of the Nimda worm in 2001. Established by the PCI Security Standards Council (PCI-SSC), the PCI-DSS was created by five founding global payment brands—American Express, Discover Financial Services, JCB International, MasterCard Worldwide and Visa Inc.—as the basis for the technical requirements of each of their respective data-security compliance programs. In the simplest of terms, PCI-DSS provides prescriptive guidance on implementing technical and nontechnical controls to add some measure of security for organizations responsible for processing, storing or transmitting payment-card account data.

The standard has drawn criticism from both businesses and security professionals since its initial release. Organizations claim oppressive regulatory oversight that forces spending on protective technological and administrative countermeasures that offer little to no return on investment. Security professionals counter that although the standard is prescriptive in nature, it does not provide adequate (or even relevant) selection or implementation guidance, and that its bar is set far too low to effectively thwart attacks. For all of its critics, few can deny that the standard has given security practitioners justification and budget for security software purchases previously labeled as “too expensive” or “not a priority.” It has also provided organizations with standardized, peer-vetted and reproducible guidance on how to implement security measures within their own walls. One of the biggest challenges to achieving a PCI-compliant state, however, is overlaying traditional security controls atop shared infrastructures in data centers—especially in public, private, community and hybrid clouds—and knowing where the infrastructure provider’s responsibility ends and the company’s begins.

[Figure: Typical cloud provider segmentation of responsibilities.]

The division of PCI responsibility differs among infrastructure providers and their respective service models. In a traditional outsourced data center, the physical servers are typically your responsibility, but the connective tissue (i.e., the network connectivity) is often shared between your organization and the provider’s other customers. The provider can, and does, logically and physically segment customer servers using routers, switches, firewalls and other common infrastructure equipment. In the case of a shared server environment (i.e., virtual hosting), the provider often employs logical segmentation techniques and tools to segregate customer servers from one another. Compliance with PCI-DSS is usually easy to attain in data center environments because the model changes little from that of an on-premises customer data center. As long as the customer can receive assurances from the provider about its implemented administrative controls, customers can often work with their Qualified Security Assessors (QSAs) to address any technological control shortcomings.
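The logical segmentation described above can be reduced to a simple deny-by-default rule: traffic crosses the shared fabric only between endpoints belonging to the same customer. A minimal sketch (the customer names and VLAN IDs are hypothetical, not any provider’s actual configuration):

```python
# Hypothetical per-customer VLAN assignments in a shared hosting environment
# (illustrative names and IDs only; not a real provider configuration).
CUSTOMER_VLANS = {
    "customer-a": 101,
    "customer-b": 102,
}

def traffic_permitted(src_customer: str, dst_customer: str) -> bool:
    """Deny by default: allow traffic only when both endpoints sit in the
    same customer's VLAN."""
    src = CUSTOMER_VLANS.get(src_customer)
    dst = CUSTOMER_VLANS.get(dst_customer)
    return src is not None and src == dst

# A customer's servers may reach one another, but never a neighbor's.
print(traffic_permitted("customer-a", "customer-a"))  # True
print(traffic_permitted("customer-a", "customer-b"))  # False
```

In practice the same policy is enforced with VLAN tagging and firewall rules rather than application code, but the logic the QSA must verify is exactly this isolation check.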

Where the demarcation line resides quickly becomes blurry in cloud architectures. Most software-as-a-service (SaaS) providers take the brunt of the responsibility from the hardware layer all the way up to the presentation layer. In a SaaS environment, customers must rely on the compliant state of their provider and have little to no control over how that state is achieved and monitored. Platform-as-a-service (PaaS) providers, on the other hand, tend to leave the compliance of the application and presentation layers up to the customer. In a PaaS architecture, the provider will often claim responsibility for certifying the solution stack, virtual machine, hypervisor, compute and storage, network and physical facility tiers. The generated data, application configurations and methods for presenting the data are the sole responsibility of the PaaS customer. Infrastructure-as-a-service (IaaS) provider environments give you the closest architecture to that of an on-premises virtualized server infrastructure. Customers are responsible for everything from the virtual machine up to the presentation layer, whereas the provider is responsible for the hypervisor to facility tiers. The IaaS model forces customers and providers to work closely to provide mutual adherence to the tenets of the PCI-DSS.
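The tier-by-tier split above can be sketched as a responsibility matrix. The layer names and the customer/provider assignments below follow the article’s description and are illustrative assumptions, not an authoritative mapping:

```python
# Illustrative responsibility matrix for the tiers named above.
LAYERS = [
    "facility", "network", "compute_storage", "hypervisor",
    "virtual_machine", "solution_stack", "application", "presentation",
]

_PROVIDER_ALL = {layer: "provider" for layer in LAYERS}

RESPONSIBILITY = {
    # SaaS: provider covers the hardware through presentation layers.
    "saas": dict(_PROVIDER_ALL),
    # PaaS: customer owns the application and presentation tiers.
    "paas": {**_PROVIDER_ALL,
             "application": "customer", "presentation": "customer"},
    # IaaS: customer owns everything from the virtual machine upward.
    "iaas": {**_PROVIDER_ALL,
             **{layer: "customer" for layer in
                ("virtual_machine", "solution_stack",
                 "application", "presentation")}},
}

def responsible_party(model: str, layer: str) -> str:
    """Return which party certifies a given tier under a given service model."""
    return RESPONSIBILITY[model][layer]
```

For example, `responsible_party("iaas", "hypervisor")` returns `"provider"` while `responsible_party("iaas", "virtual_machine")` returns `"customer"`, which is precisely the line the customer and the QSA must agree on before an assessment begins.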

Providers have been working hard over the past few years to label themselves as “PCI compliant,” but what, if anything, does that actually mean? A primary customer concern with running servers in a shared environment is the potential for third-party access to customer data by another organization—or the provider itself. A provider that claims its infrastructure is PCI compliant has undergone independent testing and assessment to verify that the infrastructure is built and operated in a manner that adheres to the tenets of the PCI-DSS. So what does this attestation of compliance mean for end customers’ own PCI compliance? Some say not a lot. “A PCI-compliant [infrastructure] is like a rental car,” says Chris Nickerson, well-known penetration tester and founder of Lares Consulting. “The car isn’t yours, but you’re still responsible for driving it.” Put another way, the provider may be compliant, but that compliance in no way cascades to encompass the customer’s servers. Essentially, a compliant provider has ensured, to the best of its ability, that its infrastructure will not introduce anything that might jeopardize a customer’s own PCI compliance aspirations. In a nutshell, that is what a “PCI-compliant infrastructure” means. It should be noted, however, that without provider certification, the QSAs working for each tenant would be charged with certifying the environment themselves. A tenant placing servers in such an environment would therefore be taking a chance that the environment might not be eligible for certification. Furthermore, if PCI-DSS compliance is not a goal of the provider, security standards can quickly degrade after the QSA blesses the environment.

[Figure: A PCI-compliant architecture is a double-edged sword of sorts.]

An architecture anointed as PCI compliant is, however, a double-edged sword of sorts. Customers have the ability to select the controls that best meet their organizational goals—without having specific tools pushed down on them by the providers. On the other hand, many customers are left confused as to which controls satisfy the requirements for hosted infrastructure environments. As Wendy Nather, Enterprise Security Research Director at 451 Research, puts it, “Many organizations want security, and at the same time, chafe under its implementation.” Since security is not a one-size-fits-all model, providers cannot be expected to expand their own compliant state to encompass their customers—just as Internet service providers cannot be responsible for the network-based security of their business and personal users. “Providers can only provide as much security as the customer will allow,” says Nather. “Security is fundamentally not only about the use of security tools, but also about operational discipline: configurations must be tightly managed and monitored, and this means that users will inevitably be restricted in some way in what they can do.”

If you’re looking to your provider to take the responsibility of regulatory compliance completely off your task list, you’ll be in for a long wait—or, more likely in the short term, a shocking surprise when it comes time for your assessment. A “compliant infrastructure” does, however, represent some measure of progress in provider efforts to offer a trusted architecture on which to deploy PCI in-scope servers. Providers exist to facilitate the easy hosting of servers and applications in a way that reduces roadblocks to onboarding. Security, as it was when we were building on-premises data centers, continues to be an unfortunate afterthought. If a provider can offer, or at least support, third-party solutions that secure its customers’ data in accordance with PCI, it may find itself in a better position to entice customers to adopt its hosting architecture.

Customers also tend to like choice—including the option of choosing their own tools to accomplish specific tasks. The last thing customers want is to be locked into a particular suite of tools that neither they nor their QSA understands. If providers want to entice customers to move PCI servers and applications into their architectures, they must work to expand their partner integration programs as well as QSA-specific education programs that help assessors understand their architecture. Hopefully, the forthcoming guidance from the PCI-DSS Cloud Special Interest Group (SIG), due out this fall, will clarify the respective responsibilities of cloud customers and cloud providers for maintaining and validating PCI-DSS requirements, and explain how those requirements can be applied to cloud technologies to address the identified risks and challenges. Only when providers can offer a complete end-to-end security stack comprising third-party supported tools and their own technical and administrative controls will they be able to convince customers that PCI compliance can be achieved beyond the walls of their own on-premises infrastructure.

About the Author

Andrew Hay is the Chief Evangelist at CloudPassage, Inc., where he serves as the public face of the company and lead advocate for its SaaS server security product portfolio. Before joining CloudPassage, Andrew served as a Senior Security Analyst for industry analyst firm 451 Research and provided technology vendors, private equity firms, venture capitalists and end users with strategic advisory services. He is a veteran strategist with more than a decade of experience related to endpoint, network and security management technologies. Before joining 451 Research, Andrew served within the Information Security Office (ISO) of the University of Lethbridge and, before that, at a privately held bank in Bermuda. Andrew also served as a product, program and engineering manager at Q1 Labs (now IBM) and was responsible for the entire portfolio of third-party technology partner integrations.
