Modern data centers are synonymous with massive high-speed computational capabilities, data storage at scale, automation, virtualisation, high-end security and cloud computing. Earlier, a simple network of racks with storage units, and a set of management tools to manage them individually, was enough. The architecture was simple to understand, and only local resources were consumed in its operation.
However, as organisations became increasingly internet-dependent, data volumes exploded, fed by social media and a fast-growing array of sensing devices. Remote access to this data over the Web emerged as the trend. The fragmented local tools used in a traditional data center could handle neither the volumes nor the complexities, which in effect demanded a much larger infrastructure. Scaling up was a challenge when companies expanded, and performance dipped when peak loads had to be handled. This led to the evolution of hyperscaling as a solution.
Hyperscale is based on the concept of distributed systems and on-demand provisioning of IT resources. Unlike a traditional data center, a hyperscale data center brings together a large number of servers working in concert at high speed. This gives the data center the capacity to scale both horizontally and vertically.
Horizontal scaling involves provisioning more machines from the network on demand, while vertical scaling adds power to existing machines to increase their computing capacity. Typically, hyperscale data centers have lower load times and higher uptimes, even in demanding situations such as high-volume data processing.
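As a rough illustration of the two strategies described above (a toy capacity model for this article, with purely hypothetical numbers), scaling out adds machines while scaling up enlarges each machine:

```python
def horizontal_scale(servers: int, cores_per_server: int, extra_servers: int) -> int:
    """Scale out: provision more machines of the same size; total capacity grows."""
    return (servers + extra_servers) * cores_per_server

def vertical_scale(servers: int, cores_per_server: int, extra_cores_each: int) -> int:
    """Scale up: add power (here, CPU cores) to each existing machine."""
    return servers * (cores_per_server + extra_cores_each)

# Illustrative starting point: 10 servers with 16 cores each (160 cores).
scaled_out = horizontal_scale(10, 16, extra_servers=5)    # 15 servers x 16 cores = 240
scaled_up = vertical_scale(10, 16, extra_cores_each=8)    # 10 servers x 24 cores = 240
print(scaled_out, scaled_up)
```

Both paths reach the same headline capacity here, but a hyperscale facility favours the first: adding commodity servers to the pool avoids the hardware ceiling that limits how far any single machine can be upgraded.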
Today, more than 400 hyperscale data centers operate around the world, with the United States alone accounting for 44 per cent of global data center sites. Synergy Research Group predicts the count will reach 500 by 2020. Other countries with a leading hyperscale data center footprint are Australia, Brazil, Canada, Germany, India and Singapore.
Can do more with less
A traditional data center typically has a Storage Area Network (SAN), mostly from a single vendor. The machines within the data center run on Windows or Linux, and multiple servers are connected through commodity switches. Each server in the network has its own local management software installed, and each piece of equipment connected to it has its own switch to activate the connection. In short, each component in a traditional data center works in isolation.
In contrast, a hyperscale data center employs a clustered structure with multiple nodes housed in a single rack space. Hyperscaling pools the storage capacity within the servers into a shared resource, eliminating the need for a SAN. Hyperconvergence also makes it easier to upgrade systems and to obtain support through a single-vendor solution for the whole infrastructure. Instead of managing individual arrays and management interfaces, hyperscaling integrates all capacities, such as storage, management, networking and data, which are managed from a single interface.
Installing, managing and maintaining a large infrastructure of huge data centers would have been impossible for emerging companies or start-ups with limited capital and resources. With hyperconvergence, however, even micro-enterprises, SMEs and early-stage start-ups can access a large pool of resources that is cost-effective, flexible and highly scalable, using data center services at a much lower cost with the added benefit of scaling on demand.
Unified system
A hyperscale data center typically has more than 5,000 servers linked through a high-speed fibre-optic network. A company can start small, with only a few servers configured for use, and at any later point automatically provision additional storage from any of the servers in the network as its business scales up. Demand for additional infrastructure is estimated from how workloads are growing, so capacity can be scaled up proactively to meet the increasing need for resources.
Unlike traditional data centers that work in isolation, hyperscaled infrastructures depend on the idea of making all servers work in tandem, creating a unified system of storage and computing.
When implementing hyperscale infrastructure, the supplier can play a significant role by delivering next-generation technologies that require heavy R&D investment. According to a McKinsey report, the top five companies using hyperconverged infrastructure invested over $50 billion of capital in 2017, and these investments are growing at 20 per cent annually.
By leveraging hyperscale data centers, businesses can achieve superior performance and deliver more at a lower cost and in a fraction of the time than before. This gives them the flexibility to scale up on demand and the ability to continue operations without interruption.
The writer is CTO, DC Colocation and Delivery Services, Sify Technologies