What is the data center of the future? What should its capabilities be and what scenarios should be managed? An overview.
New demands on data processing workloads, such as IoT, smart devices, and data security and regulation are creating new challenges and opportunities. This article takes a closer look at some essential features of tomorrow’s data centers.
By migrating their on-premises data centers to the cloud, companies have eased many complex aspects of data center maintenance, as the cloud gives them access to computing, storage, and networking delivered as core services. However, this shift has posed other challenges. Companies sometimes use multiple cloud providers while continuing to maintain or deploy on-premises solutions to host their existing applications, or for niche use cases such as edge computing or high-security requirements.
The fundamental model of service delivery by the data center, and even the very definition of the data center, are changing rapidly, as are the expectations of the developers building the applications of tomorrow. So what does the future hold? What is the data center of the future? What should its capabilities be and what scenarios should it manage?
New data centers will need to be extremely flexible to accommodate many different environments. Public cloud providers are set to become central players in this future, but first we’ll look at on-premise environments, which will continue to play an important role.
In some use cases, on-premises infrastructure can be cheaper than its public cloud counterpart. Gartner has noted that cloud services may initially be more expensive than running an operational on-premises data center, with an overall negative return on investment. This obviously assumes that the complexity and risk of operating on-premises do not outweigh the cost savings.
The most sensible on-premises use cases are:
Network and Storage Intensive Big Data Workloads: Networking and storage costs can be surprisingly high on the public cloud, making on-premises infrastructure cheaper.
Highly secure environments: governments, financial institutions, and healthcare providers are examples of sectors where it is important to keep operations on-premises. Although cloud service providers can meet many security requirements, compliance with certain government standards or certifications may not be possible.
Flexibility should also apply to the use of public cloud providers. For example, companies may need to use multiple vendors at the same time, or switch between them for reasons of cost, platform stability, feature choice, or, as recent events have shown, geopolitics.
Another feature of sophisticated data centers is their ability to offer diversified geographic distribution. Three main reasons justify choosing a distributed infrastructure.
The first is to limit outages and data loss by avoiding single points of failure. The list of potential incidents that can hit a data center is long: power outages, fires, bankruptcies, system failures, human error, and network outages are just some of the many risks looming over them. By distributing infrastructure across geographically distant data centers, companies mitigate these risks.
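The idea of tolerating the loss of any single site can be sketched in a few lines: a write is applied to several geographically distant sites and succeeds once a quorum acknowledges it. This is a minimal illustration under assumed interfaces, not a production replication protocol; the site callables and quorum threshold are hypothetical.

```python
# Minimal sketch: replicate a write across distant sites and require a
# quorum of acknowledgements, so that losing one site is not fatal.
def replicate(write, sites, quorum):
    """Apply `write` to every site; succeed once `quorum` sites acknowledge."""
    acks = 0
    for site in sites:
        try:
            site(write)       # each site is a callable that stores the write
            acks += 1
        except ConnectionError:
            continue          # one unreachable site must not block the write
    return acks >= quorum

# With three sites and a quorum of two, any single site can fail
# without interrupting service or losing acknowledged data.
```

The same majority-quorum reasoning is why distributed systems are typically deployed across three or more locations rather than two.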
The second reason is proximity. The hyperconnectivity brought by 5G is pushing computing to the edge. As data is increasingly consumed and produced at the periphery, the data centers of the future will have to serve a growing number of edge devices: data cabinets in retail chains and factories, street furniture in smart cities, parking sensors, video surveillance, and self-driving cars. Proximity offers a number of benefits, such as reduced latency and lower data transport costs. Some even argue that proximity is the real reason the cloud computing era is reaching its limits today.
The final reason is that companies are increasingly required to control the location of their data to ensure regulatory compliance, data sovereignty, and data protection. The need to comply with jurisdictional and customer requirements under the General Data Protection Regulation (GDPR) and some 120 other data protection laws will force data centers to operate seamlessly with hosting infrastructures around the world. Given the current hyperconcentration of major cloud providers in a limited number of countries, this will be essential.
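A residency requirement like this ultimately becomes a placement check: before data lands in a region, the system verifies that the region is allowed for that dataset. The sketch below illustrates the idea; the dataset names, region codes, and rules are invented for illustration and do not represent any real compliance engine.

```python
# Sketch of a data-residency check: each dataset declares the regions it
# may legally reside in, and placement outside them is refused.
ALLOWED_REGIONS = {
    "eu_customer_records": {"eu-west", "eu-central"},            # keep in the EU
    "public_telemetry":    {"eu-west", "us-east", "ap-south"},   # no restriction
}

def can_place(dataset: str, region: str) -> bool:
    """Return True only if `region` is an allowed location for `dataset`."""
    return region in ALLOWED_REGIONS.get(dataset, set())
```

Enforcing the rule at placement time, rather than auditing after the fact, is what lets a globally distributed infrastructure serve customers in many jurisdictions from one control plane.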
Vendor independence
The data center of the future will need to be vendor independent. Regardless of the underlying hardware, virtual machine, or container technology, operational and administrative capabilities must be transparent. This flexibility allows companies to streamline deployment and maintenance processes and avoid vendor lock-in.
Furthermore, since no cloud provider is present everywhere in the world, the ideal data center must be able to operate in any environment to meet the deployment requirements described above. For this reason, new data centers will largely consist of open source components in order to achieve that level of interoperability.
An optimal user experience
Distribution and flexibility shouldn’t come at the expense of ease of use. Data centers should offer seamless cloud-native capabilities, such as on-demand scaling of compute and storage resources and API access for integrations. While this is the norm for containers and virtual machines on servers, the same features should apply to every environment, even remote hardware such as IoT devices and edge servers.
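On-demand scaling boils down to a simple control decision: given the current load, compute how many replicas keep per-replica load under a target, within fixed bounds. The sketch below shows that calculation only; the function name, thresholds, and units are assumptions for illustration, and real platforms expose this behavior through their own autoscaling APIs.

```python
# Sketch of an on-demand scaling decision: size the replica count so
# that per-replica load falls back under the target, clamped to bounds.
import math

def desired_replicas(current, load_per_replica, target_per_replica,
                     lo=1, hi=100):
    """Replica count needed to bring per-replica load down to the target."""
    need = math.ceil(current * load_per_replica / target_per_replica)
    return max(lo, min(hi, need))

# 4 replicas each handling 90 req/s against a 60 req/s target -> scale to 6.
```

The same formula works in reverse: when load drops, the computed count shrinks and idle replicas can be released, which is what makes the capacity genuinely on-demand.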
The challenge for software service providers
The data center of the future has many similarities with today’s multicloud or hybrid cloud. However, while two-thirds of CIOs want to use multiple providers, only 29% of them actually do, and 95% of their cloud budget goes to a single cloud provider. In other words, there is a significant need that is not yet being met.