The proliferation of information technology has allowed organizations to extend their reach further than ever before via remote communication, and to integrate distributed systems in a manner that would have seemed impossible not so long ago. These advances have been accompanied, however, by new challenges in servicing and securing high-tech facilities as (relatively) simple as ATMs and as complex as unmanned data centers. Organizations can now spread themselves as far across the globe as they like, but if they don’t keep close control over the remote outposts they operate, those outposts won’t do them much good—and could, in fact, do them substantial harm.
The call-a-repairman approach, long the go-to solution for IT managers, can be problematic. The delays occasioned by a physical inspection are often crippling, as managers can wait for hours or even days for an IT service worker to arrive, identify the problem and find a fix. And that’s to say nothing of the fact that an IT administrator based in Baltimore might not happen to have in her Rolodex the name of a trustworthy repair company in Dusseldorf to quickly put out a fire (figuratively, hopefully) in a data center in Germany.
When it comes to critical infrastructure, companies often do not have the luxury of waiting days for service (as you or I might for, say, the cable repairman). The data center may be the poster child for both the value and the downfall of the new, interconnected world that technology has made possible. On the one hand, it’s that very interconnectedness that makes the data center both possible and a powerhouse of the networked world. But that interconnectedness comes at the price of total dependence, as everything in an organization may rely on a functioning data center to stay afloat. If and when a critical part of that networked infrastructure goes down, an enterprise may find all of its operations inconvenienced or, quite possibly, worse.
Reducing response time is even more vital when it comes to addressing security threats. In the case of breaches, such as credit-card skimming, a response is required within minutes—or even seconds—to minimize the scope of the damage. Hours-long delays are simply not acceptable.
As organizations grow more dispersed, their need to find efficient ways to maintain these facilities and defend them against attacks only becomes more urgent. The stereotype of a cyber-security threat conjures up an image of a lone wolf operating out of his bedroom, but in actuality, 29 percent of breaches involve some sort of physical component, according to a 2011 study by Verizon. The isolated nature of remote facilities makes them particularly attractive to cyber criminals, as does the fact that these facilities can often serve as points of entry to the wider network.
Moreover, every piece of remote infrastructure is at risk of malfunction or security breach, no matter how well maintained or tightly secured the facility may be. All the care, foresight and money in the world will not cure every vulnerability of a data center. Even if routine malfunctions and security risks have been minimized, data centers remain exposed to acts of nature or accident that conspire to take down the power, damage hardware and generally interfere with the day-to-day operation of critical infrastructure.
What’s the best way to monitor and respond to issues that crop up at these remote facilities, whether they be overheating mainframes, break-ins or an errant feathered visitor (an “angry bird,” if you will)? Increasingly, industry professionals are finding answers in the same types of high-tech advances that enabled organizations to spread their networks so widely in the first place. Remote site management systems, as they’re known in the trade, let organizations integrate sophisticated maintenance and security programs into their facilities, allowing them to watch from afar and reducing their reliance on on-site technicians.
Remote site management systems can monitor the physical environment—conveying information about temperature regulation, the presence of moisture and the state of power systems, among other things. They can also immediately detect security breaches, monitoring access to the site, providing an audit trail and alerting managers within seconds to signs of physical tampering.
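The environmental-monitoring idea can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of how a gateway might compare sensor readings against alert thresholds; the sensor names and threshold values are assumptions for illustration, not any vendor’s actual API.

```python
# Hypothetical sketch: evaluate environmental sensor readings against
# acceptable ranges and produce alerts for anything out of bounds.
# Sensor names and thresholds are illustrative assumptions only.

ALERT_THRESHOLDS = {
    "temperature_c": (10.0, 35.0),       # acceptable range, degrees Celsius
    "humidity_pct": (20.0, 80.0),        # relative humidity, percent
    "supply_voltage_v": (210.0, 250.0),  # mains power window, volts
}

def check_readings(readings):
    """Return a list of alert strings for any reading outside its range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = ALERT_THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
        if value < low:
            alerts.append(f"{sensor} LOW: {value} (min {low})")
        elif value > high:
            alerts.append(f"{sensor} HIGH: {value} (max {high})")
    return alerts

# Example: an overheating rack with otherwise normal readings
alerts = check_readings({"temperature_c": 41.2,
                         "humidity_pct": 45.0,
                         "supply_voltage_v": 230.0})
```

In a real deployment the readings would come from SNMP or dedicated sensor probes, and the alerts would be pushed to staff by email, SMS or a management console rather than collected in a list.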
Just as crucially, these systems can respond to problems, functioning as virtual hands in giving operators nearly the same level of control they would have were they physically present. They can perform simple fixes, such as restarting failed equipment, and can also be programmed to take remedial actions via a series of automatically triggered, scriptable responses.
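The “scriptable response” concept amounts to a table of rules mapping detected events to pre-programmed actions. Here is a minimal sketch under assumed names; the events, outlet numbers and helper functions are hypothetical stand-ins for what a real system would wire up to its hardware.

```python
# Illustrative sketch of automatically triggered, scriptable responses:
# each detected event maps to a pre-programmed remedial action.
# Event names, outlet numbers and helpers are hypothetical.

action_log = []  # stand-in for the gateway's audit trail

def power_cycle_outlet(outlet):
    """Pretend to power-cycle a switched PDU outlet (e.g. a hung router)."""
    action_log.append(f"power-cycled outlet {outlet}")

def notify_admin(message):
    """Pretend to page the on-call administrator."""
    action_log.append(f"paged admin: {message}")

# The rule table: condition -> automatic response.
RESPONSE_RULES = {
    "router_unresponsive": lambda: power_cycle_outlet(3),
    "door_opened_after_hours": lambda: notify_admin("possible intrusion"),
}

def handle_event(event):
    action = RESPONSE_RULES.get(event)
    if action:
        action()
    else:
        # Unknown conditions still escalate to a human.
        notify_admin(f"unhandled event: {event}")

# Example: a failed router is restarted with no human in the loop.
handle_event("router_unresponsive")
```

The design point is that simple, well-understood failures get an immediate automatic fix, while anything outside the rule table falls through to a human, which mirrors the article’s claim that these systems reduce, rather than replace, reliance on technicians.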
As these systems have become more effective, they’ve also grown more reliable. Take the evolution of out-of-band solutions as an example. As recently as 10 years ago, most remote site management systems relied on dialup as a backup when in-band service failed. Compare that with the situation today, when out-of-band management systems often feature redundant power supplies and multiple wired or cellular interfaces, ensuring an exceedingly low rate of failure. In the past, an issue such as a power outage or network failure would have cut off communication with the data center entirely; today, redundant systems not only alert IT management that there is a problem but also keep the management platform itself operational and online, so IT staff can still diagnose the issue remotely.
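The redundancy described above boils down to failover across communication paths in priority order. The sketch below illustrates that idea under assumed interface names; a real gateway would probe link state from its hardware rather than from a dictionary.

```python
# Sketch of the redundant-path idea behind modern out-of-band management:
# try each interface in priority order and fall back to the next when one
# is down. Interface names and the link-status input are hypothetical.

INTERFACES = ["primary_ethernet", "secondary_ethernet", "cellular_4g"]

def choose_path(link_status):
    """Return the first available interface, or None if all are down."""
    for iface in INTERFACES:
        if link_status.get(iface, False):
            return iface
    return None

# Example: the in-band network is down, so management traffic
# fails over to the cellular interface.
path = choose_path({"primary_ethernet": False,
                    "secondary_ethernet": False,
                    "cellular_4g": True})
```

Because the cellular path is independent of the site’s own network and power, the management system can stay reachable even while the very infrastructure it monitors is down, which is the crux of the out-of-band argument.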
So it’s no surprise that many IT managers have come to rely on these systems as much as, if not more than, they do on help from on-site human technicians. The systems are able to detect and respond to problems almost immediately, and they also tend to be quite cost-effective. (A remote site management gateway will generally pay for itself in the first avoided service visit from a technician.)
Remote site management systems will not completely take the place of human intervention in high-tech facilities—at least not in the foreseeable future. There are still critical functions that only technicians are capable of fulfilling, certain kinds of maintenance that are best undertaken with manual finesse and issues that will continue to require a human touch.
IT managers should, however, feel some relief in knowing that as the systems they oversee stretch to span an ever-greater portion of the globe, their ability to control those systems from afar is advancing apace.
Leading article photo courtesy of NASA Goddard Photo and Video
About the Author
Rick Stevenson is CEO of Opengear, a company that produces a range of remote site management systems.