Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.
This week, Industry Outlook asks Sid Mair, Senior VP of Federal Systems for Penguin Computing, about AI’s increasing role in the public and private sectors, as well as its potential consequences. Sid joined Penguin Computing in 2015 as the Federal Systems Senior Vice President. He has over 30 years of experience serving as president, CEO, VP of sales and marketing, director of business development, and regional/district sales manager, as well as in direct-selling roles. He is a visionary, high-performance and transformational leader across a broad range of industries. Sid has worked with the federal government and federally funded organizations for over 20 years. His experience includes leadership across all aspects of the federal market, including the Department of Defense, the Department of Homeland Security and civilian agencies in both classified and unclassified areas, as well as the executive and legislative branches of government. He is a proven innovative and motivational leader with a reputation for achieving results. Sid has a master’s degree in leadership and a bachelor’s degree in engineering math and computer systems. He began his career at NASA and has led organizations in the computing industry across a wide range of hardware and software technologies, primarily in the high-performance-computing market.
Industry Outlook: How does AI positively affect the public sector? How does it affect the private sector?
Sid Mair: A core purpose of AI is to provide deeper and more-significant insights in areas where it can bring the greatest value. In the public sector, AI’s advanced algorithms can spot trends quicker than humans can. By using machines that rapidly process and analyze massive amounts of data, the government can streamline tasks that involve data-driven calculation or are repetitive. A few examples of the many AI applications in the public sector include improving data processing to identify cybersecurity and national-security threats, “listening” to social media for quick notifications of emergency situations, predicting traffic congestion and preventing car accidents, supporting military agencies, and improving the health-care system.
In the private sector, AI also has the potential to affect businesses, both large and small, in areas where humans were previously needed to complete routine tasks. One area where it could have a big impact is sales prospecting. According to a 2015 BrightTalk survey, 53% of marketers spend 50% or more of their budget on lead generation. AI can help sales teams reduce the time needed to find relevant leads by more quickly analyzing sources and creating lists of appropriate prospects. A few other tangible ways AI can have an impact are by delivering dynamic activity reports to IT departments, helping to schedule meetings and speeding up HR paperwork.
IO: What are the potential challenges for industries affected by AI’s adoption? How can challenges be avoided?
SM: The early adopters of AI are typically researchers and organizations that deal with massive amounts of data. They are comfortable using technology to aid their research, but they haven’t been trained to design complex AI systems. The cutting-edge nature of AI and the intricacies of running sophisticated AI systems mean researchers often struggle to integrate the most advanced technology. Organizations are having difficulty finding sufficient technical expertise to ensure complex AI systems operate at their full potential. The obvious solution is to work with an experienced AI-system partner that can not only navigate the initial infrastructure build but also run the system at peak performance.
IO: What do you think is the appropriate role of government with regard to the widespread adoption of AI technology?
SM: The current AI iteration presents an opportunity for the government to become an AI role model, as well as a technology enabler to nurture AI’s advancement. To do so, it should partner with technology leaders that understand the factors that are essential to developing and deploying AI systems.
Although there is no one-size-fits-all AI solution, the government must stay on top of AI adoption and work alongside top technology leaders. When it works with top AI experts, the integration of AI into government IT will be far smoother.
IO: What are some of AI’s negative implications in the public sector? What about the private sector?
SM: The headlines trumpet many doomsday scenarios (e.g., loss of jobs, loss of industries, HAL-like computers committing malicious acts against humanity, etc.). More practically, security is a critical issue both now and in the foreseeable future, because AI is powered by data (much of which is often confidential and sensitive). Ensuring this data is only available to authorized parties will be an ongoing challenge.
Private-sector businesses must be hypercritical and aware of their own operations to ensure they are using AI effectively. This includes constant vigilance and anticipation of major challenges that could threaten performance. Relying on AI alone to solve all operational and logistical problems is a naive approach and will likely cause numerous issues.
IO: Can AI increase government’s efficiency? If so, how?
SM: Yes, AI has great potential to create efficiencies in how the government operates, from balancing the federal budget to managing the complexities of various federal systems and identifying national-security threats. A current example is government customer-service chatbots that use AI to streamline operations. In state and local governments, these chatbots converse with constituents and answer questions through websites. AI is also being used to quickly parse through large numbers of photos and videos to recognize and report objects of concern, such as guns. AI systems also excel at analyzing different languages or tones in emails and handling various service requests. AI can anticipate water-infrastructure failures. It’s also undergoing tests to predict crime and suggest optimal police-patrol presence, as well as identify fraudulent benefits claims. Through ongoing improvement, AI has the potential to create a more efficient and beneficial use of government employees’ time.
IO: What should the federal sector now do to successfully implement AI without negative consequences?
SM: The federal sector can evaluate existing IT infrastructure to identify technology gaps, assess its internal AI expertise to determine where and how AI is currently being implemented, and determine how stringently to regulate AI across sectors in the interest of the well-being of U.S. citizens. Expertise is essential. The correct people and technology must be in place as AI is deployed and continues to advance over time. The federal sector should continue to partner with technology leaders to prevent harmful consequences as the technology advances.
IO: What types of AI regulations are likely, and when will they arrive?
SM: When looking at AI from a machine-learning perspective, extreme regulation is unnecessary because machine-learning applications are typically designed to solve specific problems. But government should be prepared to mandate stricter regulations when AI becomes more autonomous. Regulatory policy should advance current policies to reflect the changing technology. For example, just as we have driving laws to keep everyone on the road safe, government should create appropriate regulations governing self-driving vehicles.
A number of bills addressing certain AI applications, including autonomous-driving bills, emerged at the end of 2017. Organizations such as OpenAI and the Partnership on AI are fostering serious conversations about AI’s potential and unintended consequences. Across the public and private sectors, certain areas such as critical infrastructure (nuclear power plants, missile defense, etc.), health care and consumer privacy have a critical need for policies and regulation.
IO: What AI trends are likely at the federal level?
SM: AI has the potential to more accurately and quickly identify cybersecurity and national-security threats, “listen” to social media for emergencies, predict traffic congestion and car accidents, support military agencies, improve the health-care system, and perform advanced climate modeling and weather forecasting.
For example, last year President Trump signed the Weather Research and Forecasting Innovation Act (H.R. 353). NOAA recently announced it had upgraded its supercomputing system to process eight quadrillion calculations per second, adding 2.8 petaflops at each of its data centers and increasing its total operational computing capacity to 8.4 petaflops. NOAA is reportedly in a race to exceed Europe’s weather-forecasting capabilities. AI is advancing to a point of greater practicality to help achieve this goal.
Another example is that the DoD has earmarked an overall $18 billion increase for science and technology spending in its 2019 budget proposal. These funds reportedly include money for a new federal AI roadmap, including developing a workforce that understands AI, employing it to improve the military’s command and control systems, conducting intelligence analysis, and finding ways for humans to work with machines in pursuit of a given mission.
IO: Do data centers have different AI needs for their massive data sets? And if so, why and in what ways?
SM: AI, whether in the data center or not, requires massive data sets. Some less commonly considered factors to assess when approaching these data sets in the data center include the following:
- Getting a realistic assessment of how much data must be actively processed versus how much can be archived.
- Balancing power for efficient data center load and deploying density to match data center capabilities.
- Taking into account what workloads various users need to address. One question to ask: Would all users need access to the production environment while only a subset need access to the research and development environment? What would be the percentage of data used for training versus inference?
- Understanding the workflow and data types required between projects and data scientists who are working on the infrastructure. Does it make sense to deploy a platform or internal “data science as a service” to streamline access and infrastructure needs?
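To make these assessment questions concrete, here is a toy capacity-planning sketch. All function names and figures below are hypothetical illustrations, not Penguin Computing recommendations; real splits depend on the workloads and users described above.

```python
# Toy sketch: splitting a raw data estimate into the tiers discussed above.
# Every number here is a hypothetical placeholder for illustration only.

def plan_storage(total_tb: float, hot_fraction: float, train_fraction: float) -> dict:
    """Split total data into a hot (actively processed) tier and an archive
    tier, then split the hot tier into training vs. inference working sets."""
    hot = total_tb * hot_fraction        # data that must stay on fast storage
    archive = total_tb - hot             # data that can move to cheaper, colder tiers
    training = hot * train_fraction      # working set for model training
    inference = hot - training           # working set served to production users
    return {"hot_tb": hot, "archive_tb": archive,
            "training_tb": training, "inference_tb": inference}

# Example: 500 TB of raw data, 20% actively processed, 70% of that for training.
plan = plan_storage(total_tb=500.0, hot_fraction=0.2, train_fraction=0.7)
print(plan)
```

Even a rough split like this forces the conversation about which users need the production environment versus the R&D environment, and how much fast storage the training working set actually justifies.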
IO: What’s the role of storage in the data center with AI?
SM: Storage plays a critical role in a data center running AI, but it’s rarely a main focus. Adding storage after a compute cluster has been finished will result in a redesign or a failure to achieve the AI system’s research goals. Dozens of nontrivial storage-related issues must be addressed when considering the overall system design. Storage for AI should be optimized for data ingest, workflow and modeling to accommodate the huge volumes of data required to help build more-accurate models.
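One way to see why ingest-optimized storage must be designed in from the start is a back-of-the-envelope bandwidth calculation. The figures below are hypothetical, but the arithmetic shows how quickly a training workload can outrun a storage tier that was sized as an afterthought.

```python
# Back-of-the-envelope sketch: sustained read bandwidth needed so storage,
# not compute, avoids becoming the training bottleneck.
# Dataset size and epoch time are hypothetical placeholders.

def required_read_gbps(dataset_tb: float, epoch_minutes: float) -> float:
    """GB/s needed to stream the full dataset once per training epoch."""
    dataset_gb = dataset_tb * 1000.0
    return dataset_gb / (epoch_minutes * 60.0)

# A 50 TB training set consumed once every 30 minutes:
print(round(required_read_gbps(50.0, 30.0), 1))  # ~27.8 GB/s sustained
```

A sustained rate in that range is far beyond what a general-purpose file server delivers, which is why retrofitting storage after the compute cluster is built so often forces a redesign.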
In addition to selecting high-performance storage, another aspect to consider is flexibility. AI is still a nascent technology, and needs are difficult to predict. The optimal storage system is one that delivers the most flexibility to accommodate inevitable change.
Finally, when implementing optimized storage for a high-investment project involving AI, it’s important to look for data center experts who have previously supported AI deployments and have worked with compute-component designers to build a truly integrated system. Doing so ultimately reduces costs and increases system efficiency.