As in previous years, the first quarter is the time for prognostications. Publications and social media are flooded with articles discussing 2017 security predictions, trends and priorities, ranging from the obvious to the obscure. Although I find these articles interesting and entertaining, I would love to see year-to-year scorecards on who got what right!
My domain is data security and privacy, so rather than focus on 2017 predictions, I will focus on what can help achieve data security in 2017. Rather than predicting what may or may not happen, let’s look at what organizations can consider the key performance indicators (KPIs) of their data-security efforts. My suggested KPIs reflect current challenges, upcoming legislative requirements and recommendations to help organizations protect their legacy and their transformative cloud and big data initiatives.
So here we go. The three KPIs that could help most organizations create a more secure, breach-resilient and lower-data-risk infrastructure are the following.
- Sensitive data location and risk. It may seem obvious that organizations should have a current and accurate inventory of sensitive data. In a 2016 study conducted by Ponemon Institute, Scale Ventures and Informatica, however, only 12 percent of organizations said they knew where all their sensitive data existed across the enterprise. So, the first data-security KPI for 2017 is understanding, continuously, where sensitive data exists, to improve the prioritization and effectiveness of security programs and investments.
Most organizations have a long way to go. In the survey mentioned above, only 12 percent reported conducting at least monthly assessments of sensitive-data location and risk, and 54 percent reported having no set schedule for assessing sensitive-data risk. How fast is your data growing? If we accept that data doubles every 18 months, then data grows approximately 4 percent each month. An organization with one million sensitive records can therefore expect roughly 40,000 new sensitive records per month, compounding thereafter. Most organizations have far higher sensitive-record counts.
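The growth arithmetic above is easy to verify. The sketch below assumes only what the text states, that data doubles every 18 months, and projects record counts from that; the starting count of one million is the article's illustrative figure.

```python
# Back-of-the-envelope projection of sensitive-record growth,
# assuming data doubles every 18 months (the figure cited above).
DOUBLING_MONTHS = 18
monthly_rate = 2 ** (1 / DOUBLING_MONTHS) - 1  # ~3.9% per month

def project_records(initial: int, months: int) -> int:
    """Compound the monthly growth rate over `months` months."""
    return round(initial * (1 + monthly_rate) ** months)

records = 1_000_000
print(f"Monthly growth rate: {monthly_rate:.1%}")
print(f"New sensitive records in month 1: {project_records(records, 1) - records:,}")
print(f"Records after 18 months: {project_records(records, 18):,}")
```

The exact monthly figure is closer to 39,000 than 40,000, but the article's round number makes the point: without a set assessment schedule, the inventory gap widens every month.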
- General Data Protection Regulation (GDPR) risk. This may be the year for GDPR compliance: with the May 2018 deadline approaching, many organizations are working to ensure they meet the regulation’s requirements, but much more is needed to understand potential gaps. Number two on our data-security KPI list is to evaluate GDPR risk using factors that help prioritize GDPR efforts and actions. Risk factors include location, protection, cost, user access and activity, data movement, and data volume. Risk scoring should be tuned to organizational GDPR policies; the key is automating the data-risk scoring process for a continuous and accurate view of your GDPR risk scores.
An alternative focus is HIPAA-regulated data. In 2016, the U.S. government issued several HIPAA fines exceeding two million dollars, and enforcement, along with the severity of fines, is likely to grow in 2017 and beyond. The Office for Civil Rights (OCR) publishes details of its enforcement activities, including information on cases, settlements and fines.
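To make the risk-scoring idea concrete, here is a minimal sketch of a weighted score over the factors the text lists. Every name, weight and factor value here is hypothetical and illustrative, not a real product API; a real deployment would tune the weights to organizational GDPR policy, as noted above.

```python
# Hypothetical weighted GDPR data-risk score over the factors named in
# the text (location, protection, user access, data movement, volume).
# All weights and values are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class DataStoreRisk:
    cross_border: float  # 0..1: data residing outside approved locations
    unprotected: float   # 0..1: share of records without encryption/masking
    broad_access: float  # 0..1: users with access beyond need-to-know
    movement: float      # 0..1: frequency of replication or export
    volume: float        # 0..1: normalized sensitive-record count

# Weights would be tuned to organizational GDPR policy (per the text).
WEIGHTS = {"cross_border": 0.30, "unprotected": 0.25, "broad_access": 0.20,
           "movement": 0.15, "volume": 0.10}

def risk_score(store: DataStoreRisk) -> float:
    """Weighted sum of normalized factors, scaled to 0..100."""
    return 100 * sum(w * getattr(store, f) for f, w in WEIGHTS.items())

# Example: an HR database replicated outside the EU with weak protection.
hr_db = DataStoreRisk(cross_border=1.0, unprotected=0.6, broad_access=0.4,
                      movement=0.2, volume=0.8)
print(f"Risk score: {risk_score(hr_db):.0f}/100")
```

The value of such a score is less the number itself than that it is computed automatically and continuously, so rankings stay current as data moves.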
- Detect and protect. To help improve breach resistance and recovery, organizations should strive to automate the detection of high-risk data access or movement, and the orchestration of remediation. In other words, continuously assess sensitive-data location and risk, access activity, movement, and user behavior, and couple that assessment with automatic remediation. KPI number three for 2017, therefore, is to create foundational capabilities in automating detection and protection for sensitive data. Although related to KPIs one and two, “detect and protect” differs in that it defines an overall strategy and the tactics that could help organizations automate data security. The key is to obtain or develop a system that automates and consolidates various manual processes and leverages existing security infrastructure:
- Confirm what you know about your sensitive data: global visibility of sensitive data with data classification, discovery, proliferation analysis, user access, and activity correlation and visualization for management and practitioners. This function should be automatic, global and actionable. Various views of sensitive-data location and risk are crucial, including the ability to visualize sensitive data by classification, geography, function, users and risk.
- Monitor risk continuously: Track sensitive-data risk and remediation with risk scoring. This approach should consider multiple factors that identify top risk areas on the basis of organizational requirements. This function should be part of the discovery process, providing risk scoring that is fine-tuned to an organization’s data-use policies and operating environment (e.g., what regulations apply).
- Uncover the unexpected: Detect suspicious or unauthorized data access by continuously correlating, baselining, detecting and alerting on high-risk conditions and anomalous behavior that threaten sensitive data.
- Remediate risk: Orchestrate data-security controls to protect data at rest, prevent unauthorized access, and encrypt as well as anonymize sensitive data.
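The “uncover the unexpected” step above hinges on baselining. A minimal, vendor-neutral sketch, assuming only that we can count a user’s daily sensitive-data accesses, flags a day that deviates sharply from that user’s own history:

```python
# Minimal baseline-and-alert sketch for anomalous data access.
# Assumes nothing about any vendor tooling: we simply compare today's
# count of sensitive-data accesses to the user's own historical baseline.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag `today` if it lies more than `z_threshold` standard
    deviations above the user's historical daily access count."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu  # flat baseline: any increase stands out
    return (today - mu) / sigma > z_threshold

history = [12, 9, 14, 11, 10, 13, 12]  # typical daily accesses
print(is_anomalous(history, 13))       # an ordinary day: no alert
print(is_anomalous(history, 250))      # a mass export: alert
```

A production system would correlate many more signals, such as time of day, geography and data movement, but the shape is the same: baseline, score the deviation, then hand anomalies to the remediation step.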
These three KPIs provide reasonable responses to challenges that nearly all organizations face.
- Data growth, driven by the cloud, big data, social media and data proliferation, challenges traditional security measures. Identifying and defining the organization’s perimeter is crucial, and I would propose that data itself should be treated as part of the endpoint. Other critical factors include data leaving the organization, data shared between global departments and functions, and insider threats.
- Compliance, as highlighted by the looming GDPR and growing enforcement of existing legislation, is a concern. Organizations must understand the target of these regulations: sensitive data in the form of privacy, health and credit information. The ability to comply and pass audits requires continuous knowledge of sensitive data and its risk. Yesterday’s manual efforts to gather information about sensitive data don’t scale to today’s challenges.
- Traditional security, as indicated by the never-ending news of data breaches, requires better protection of the data itself, guided and prioritized by risk. Implementing a “detect and protect” strategy may improve breach resiliency. By targeting data protection at high-risk sensitive data, organizations can make their data more secure. Information such as health and credit data should be tightly controlled through a combination of encryption, data masking and access controls. This approach can help limit the damage in the event of the inevitable outside breach or insider misuse.
Traditional security remains critical to thwarting most attacks. But the sheer volume and sophistication of attacks and insider threats means that eventually an attack will succeed. When that happens, data security, driven by automation and intelligence, becomes the new “data perimeter.” This perimeter complements the traditional security perimeter of endpoint and network protection, but focuses on the goal of most breaches: the data itself.
About the Author
Robert Shields is Director of Product Marketing, Data Security and Privacy, for Informatica.