Technologies that are Reshaping the Design and Operation of Datacenter Capacity


Sanjay Motwani, Regional Director, Asia Pacific Region, Raritan | Friday, 24 November 2017, 10:37 IST


Over the past 30 years, two of the biggest challenges in datacenter technology have been addressed: operating at optimal availability, and achieving infrastructural energy efficiency close to theoretical design targets. However, the pace of change in the datacenter industry shows no signs of slowing and is likely to accelerate over the next decade, driven by new technologies and increasing demand for digital services. The combination of business and technology drivers is likely to lead to the emergence of new classes of datacenter facilities.

The datacenter market is poised for a major shift over the next 12 months, with worldwide datacenter space expected to grow to 1.94 billion square feet. According to the International Data Corporation (IDC), more than one-third of companies worldwide will overhaul their datacenter planning and governance processes to speed digital transformation efforts by next year. In fact, by the end of 2017, over 70 percent of the Global 500 will have dedicated digital transformation/innovation teams, and worldwide spending on digital transformation initiatives will reach $2.2 trillion by 2019, almost 60 percent higher than in 2016.

The broader forces of change in the datacenter industry are closely coupled with a number of specific technologies and trends that are reshaping the design and operation of new datacenter capacity.

Hyperscale cloud: Datacenter managers at enterprise and colocation facilities are under increasing pressure to match the efficiency and cost-optimization of hyperscale operators such as Amazon, Facebook, Google and Microsoft. These operators maximize economies of scale and exploit advances in IT infrastructure to drive up virtualization and utilization, while testing innovations in datacenter architectures. Enterprise and colocation operators will have to consider whether to invest in IT infrastructure to compete, or instead to pursue long-term partnership strategies.

PFM designs: Prefabricated modular (PFM) datacenters are built from one or more structural building blocks that are assembled and tested in a factory environment and then shipped for final on-site integration. Operators stand to gain many advantages from PFM designs: standardization, compressed timelines, tighter budget controls, lower risk, and better alignment with business goals by adding capacity in a more modular fashion.

Software- and data-driven: A growing proportion of datacenters are regulated through software in order to improve utilization, availability, resiliency and agility. Despite early concerns surrounding datacenter infrastructure management (DCIM) software implementation and its return on investment, DCIM is beginning to be recognized as an integral component of software-defined infrastructure. The recent development of cloud-based datacenter management as a service (DMaaS) promises to increase the value of DCIM data as it is aggregated and analyzed at scale. This could eventually allow for data-driven, real-time, autonomic management of datacenters using large data sets.
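To make the DCIM idea concrete, the sketch below shows the kind of aggregation such software performs over rack-level power readings to surface utilization and stranded capacity. It is a minimal illustration: the `RackReading` fields, rack IDs and numbers are all hypothetical and not tied to any specific DCIM product.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical reading from a rack-level intelligent PDU; the field
# names are illustrative, not any particular vendor's schema.
@dataclass
class RackReading:
    rack_id: str
    power_kw: float      # measured IT load on the rack
    capacity_kw: float   # provisioned power budget for the rack

def utilization_report(readings):
    """Aggregate per-rack power data into the kind of utilization
    summary a DCIM tool would surface for capacity planning."""
    by_rack = {r.rack_id: r.power_kw / r.capacity_kw for r in readings}
    return {
        "per_rack_utilization": by_rack,
        "fleet_average": mean(by_rack.values()),
        # Provisioned-but-unused power: a common capacity-planning metric.
        "stranded_kw": sum(r.capacity_kw - r.power_kw for r in readings),
    }

readings = [
    RackReading("A01", power_kw=2.0, capacity_kw=8.0),
    RackReading("A02", power_kw=6.0, capacity_kw=8.0),
]
report = utilization_report(readings)
print(report["fleet_average"])  # 0.5
print(report["stranded_kw"])    # 8.0
```

At DMaaS scale the same computation would run over telemetry streamed from many sites, which is where the aggregate analysis described above becomes possible.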

Smart and transactive energy: Datacenters have made considerable advances in improving energy efficiency and power usage effectiveness (PUE) ratios. The next stage, however, is to link energy use to demand and to take more control over energy supply. This could involve smart buying and selling of energy, and managing IT power consumption through greater use of power management and power capping.
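The two metrics mentioned above are simple to state. PUE is total facility power divided by IT power (1.0 is the theoretical ideal), and power capping constrains aggregate draw to a budget. The sketch below illustrates both with a naive proportional capping policy; real capping is typically enforced per server in firmware, and the numbers here are invented for illustration.

```python
def pue(total_facility_kw, it_load_kw):
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_load_kw

def apply_power_cap(server_draws_kw, site_cap_kw):
    """Naive proportional power capping: if aggregate draw exceeds the
    cap, scale every server's budget down by the same factor."""
    total = sum(server_draws_kw)
    if total <= site_cap_kw:
        return list(server_draws_kw)
    scale = site_cap_kw / total
    return [d * scale for d in server_draws_kw]

print(pue(1500.0, 1000.0))               # 1.5
print(apply_power_cap([4.0, 6.0], 5.0))  # [2.0, 3.0]
```

Linking energy use to demand would mean driving `site_cap_kw` from an external signal such as a utility price feed, which is the "transactive" part of the trend.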

Connectivity: The rise of public cloud is putting increased pressure on enterprise and MTDC providers, but it is also opening up new opportunities to connect and integrate public cloud with private cloud and non-cloud services. As a result, interconnectivity is becoming an increasingly critical service. Application and data resiliency, including disaster recovery, will be achieved by replicating data and processes across a network of facilities spread within and between regions.

Open architectures: The Open Compute Project (OCP) hasn’t made a noticeable impact yet (outside of hyperscale), but it could be significant in the long term as open ecosystems continue to evolve and grow. Open-sourced hardware and software promise to bring hyperscale-inspired designs and efficiency to the enterprise and CoLo markets, and are disrupting traditional facility architectures including distributed UPS, alternative rack designs, distributed connectivity and DC power distribution.

Edge datacenters: Edge computing, which can be defined as the distribution of compute and storage capabilities to the very edge of the network near the point of data generation and data use, covers a range of workload types and use cases. Demand at the edge is expected to be a significant driver for new datacenter types, including micro-datacenters (small form-factor sites including prefabricated micro-modular sites), but also new centralized facilities.

Due to rapidly changing data usage patterns, the datacenter will become a strategic asset for enterprises worldwide rather than being viewed as back-office support. The IDC FutureScape report also predicts that 65 percent of new datacenter infrastructure investments will be focused on client-facing and analytical workloads rather than on maintaining existing systems of record. The total number of datacenters worldwide will peak at 8.6 million in 2017 and then slowly decline as businesses move their datacenter operations from on-premise facilities to mega-datacenters run by service providers.

Today, datacenter operators are incentivized to keep existing sites available, productive, highly utilized and efficient for as long as possible. Specific technologies such as upgradable, intelligent (three-phase) power distribution units provide scalability and adaptability to changing load requirements at existing sites, while environmental sensors help datacenter managers identify hotspots so that equipment can be cooled effectively and efficiently. Data-driven insights from DCIM can be used to closely monitor and manage existing sites, leading to new efficiencies, better capacity forecasting and improved business agility. This is a sure way of future-proofing your datacenters.
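The hotspot detection described above can be sketched in a few lines: compare each sensor's inlet temperature against a threshold and rank the offenders. The sensor names and readings below are invented; 27 °C is used as the ceiling because it is the upper end of a commonly cited recommended inlet range, but the right threshold is site-specific.

```python
def find_hotspots(sensor_temps_c, threshold_c=27.0):
    """Return sensor locations whose inlet temperature exceeds the
    threshold, hottest first, so cooling can be targeted at them."""
    return sorted(
        (loc for loc, t in sensor_temps_c.items() if t > threshold_c),
        key=lambda loc: -sensor_temps_c[loc],
    )

# Hypothetical readings from rack-mounted environmental sensors.
temps = {"rack-A01-top": 29.5, "rack-A02-mid": 24.0, "rack-B07-top": 31.2}
print(find_hotspots(temps))  # ['rack-B07-top', 'rack-A01-top']
```

A DCIM tool would run this kind of check continuously against live sensor feeds and raise alerts, rather than on a static snapshot as shown here.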
