The Evolving Software-Defined Data Centre

The majority of modern data centres are extremely complex. 

Very few organisations have the luxury of starting with a clean slate, so data centres typically contain many generations of technology, often managed in separate silos and by different teams. 

Each new “next-generation” technology comes along with the promise of replacing or fixing everything that came before. 

However, things have rarely turned out this way in reality.  In the 1990s, client/server computing did not fully replace the mainframe; web applications did not then replace client/server applications; and cloud is not expected to replace all of our on-premise infrastructure any time soon.  For many companies, the adoption of newer technologies simply adds another layer to be integrated, secured and managed, and ultimately increases the complexity within the data centre.  With increased complexity come a higher risk of system failure, less flexibility and much higher ongoing costs.

Much of this complexity has been exposed by the growth in popularity of server virtualisation.  Widespread virtualisation of computing resources has given organisations a glimpse of what is possible when applications are no longer tethered to individual servers.  Server virtualisation introduced the world to a software-defined computing model in which virtual machines are managed like files: they can be moved around on demand, copied, backed up and restored with ease, with little care for the computing hardware underneath.  However, this flexibility comes at a price.  A complex storage area network (SAN), composed of storage arrays and storage networking switches, is required to enable many of server virtualisation's best features, such as virtual machine migration and high availability.  This combination of compute, storage and storage networking hardware came to be known as three-tier architecture and is prevalent in almost all modern corporate data centres.

While the compute layer is software-defined, the storage-related layers are not.  The compute layer can be scaled easily by adding more servers and balancing the virtual workloads across them.  A SAN cannot be scaled so easily.  The initial purchase of a SAN typically requires a complex sizing exercise and a good deal of foresight to make sure that it can accommodate both the business' capacity and performance requirements for years to come.  When either of these limits is reached, an expensive fork-lift upgrade is required, and the cycle starts again.  As new applications come on board, these limits can easily be reached well ahead of the SAN's expected life.  Specialist skills are also required to implement and maintain a SAN, and these skills often sit within a separate IT management silo.

Public cloud providers such as Amazon Web Services, Microsoft Azure and Google Cloud Platform learned long ago that they could not build their services on this traditional, complex three-tier data centre architecture.  These hyper-scale cloud companies do not rely on storage area networks at all.  Instead, they build their infrastructure from small x86 servers with locally attached disks.  These small servers contribute their compute, memory and storage resources to an overall pool, which is then consumed by the applications running on top.  Individual server nodes can and do fail regularly, but the entire software-defined system is designed to expect failures and is automated to work around them without affecting the availability of applications or the quality of service to customers.  Failed or obsolete nodes can easily be replaced with new hardware, and over time the entire system can evolve and grow in small increments without disruption.
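To make the pooled, failure-tolerant model described above more concrete, here is a minimal illustrative sketch in Python.  It is not any cloud provider's actual implementation; the node names, resource figures and pool logic are all hypothetical.

# Minimal sketch of a scale-out resource pool built from commodity nodes.
# Capacity is simply the sum of the healthy nodes, failures shrink the pool
# without taking it offline, and growth means adding another small server.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cpu_cores: int
    ram_gb: int
    storage_tb: float
    healthy: bool = True


class ResourcePool:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        # Growing the pool is just racking another commodity server.
        self.nodes.append(node)

    def mark_failed(self, name):
        # The system expects failures: a failed node is excluded from the
        # pool and its workloads are restarted on the remaining nodes.
        for node in self.nodes:
            if node.name == name:
                node.healthy = False

    def capacity(self):
        healthy = [n for n in self.nodes if n.healthy]
        return {
            "cpu_cores": sum(n.cpu_cores for n in healthy),
            "ram_gb": sum(n.ram_gb for n in healthy),
            "storage_tb": sum(n.storage_tb for n in healthy),
        }


if __name__ == "__main__":
    pool = ResourcePool(Node(f"node-{i}", 32, 256, 10.0) for i in range(1, 5))
    print(pool.capacity())                         # four healthy nodes
    pool.mark_failed("node-3")                     # a node fails...
    print(pool.capacity())                         # ...the pool keeps serving
    pool.add_node(Node("node-5", 48, 384, 15.0))   # replace and expand later
    print(pool.capacity())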

The obvious way for businesses to benefit from the simplicity and efficiencies that these cloud giants have achieved is to subscribe to their services and move their applications onto one of these platforms.  Many organisations are already doing this, but are finding that not all of their infrastructure can be migrated to public cloud.  In some cases this is for technical or compliance reasons; in others it is a commercial decision.  While the rental model of cloud is great for applications that need to scale up or down rapidly, for more predictable workloads it will be more expensive in the long run.  Just as one would rent a car for a short visit to a foreign country but buy one when moving there for several years, it makes sense to own the infrastructure that runs steady, long-lived workloads.  Thankfully, there is now a solution for businesses aspiring to reach similar levels of simplicity and flexibility as the cloud providers within their own data centres.

Hyperconverged Infrastructure (HCI) is that solution.  It came about several years ago when software engineers from the likes of Google and Amazon realised that the techniques they were using to build the infrastructure for their cloud platforms could be adapted to corporate data centres.  They developed a fully software-defined solution which runs on commodity x86 hardware, supports any common hypervisor and creates cloud-like infrastructure for use in on-premise data centres.  By starting with as few as three server nodes, businesses can create a scale-out virtualisation platform which has no complex SAN to manage and includes automation and self-healing features that revolutionise how the data centre is architected.  Instead of having to predict their future infrastructure needs, organisations can start small and grow in small increments, adding a little more compute, storage or both as needed.
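The contrast with up-front SAN sizing can be shown with a short, hypothetical calculation: rather than forecasting several years of demand in advance, the platform simply gains a node whenever utilisation crosses a chosen threshold.  The capacities, demand figures and 80% threshold below are invented purely for illustration.

# Hypothetical illustration of demand-driven, incremental growth: add one
# node whenever the pool is more than 80% utilised, instead of sizing the
# whole platform years ahead of time.

GROWTH_THRESHOLD = 0.8     # grow once the pool is 80% used
NODE_STORAGE_TB = 10.0     # usable storage contributed by each new node


def nodes_needed(demand_tb, start_nodes=3):
    """Return the cluster size after growing to meet the given demand."""
    nodes = start_nodes
    while demand_tb > nodes * NODE_STORAGE_TB * GROWTH_THRESHOLD:
        nodes += 1         # buy one more commodity server, not a new SAN
    return nodes


if __name__ == "__main__":
    for demand_tb in (15, 40, 75, 120):    # storage demand over successive years
        print(f"{demand_tb} TB of demand -> {nodes_needed(demand_tb)} nodes")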

While combining compute and storage like this using HCI forms the foundation for running a simple and highly scalable private cloud on-premise, the real benefits, both now and into the future, come from the fact that it is fully software-defined.  New capabilities come from software updates and do not rely on specialised hardware.  At the same time, as hardware evolves, new nodes bring those innovations with them as they are added, and older nodes can be decommissioned or re-purposed.  No more expensive fork-lift upgrades are required.

The next evolution of this technology moves beyond the convergence of compute and storage in the data centre to the convergence of on-premise private cloud with any number of public clouds, creating a hybrid environment which is managed as a single entity: a true Enterprise Cloud.  As I mentioned previously, organisations need to be able to run their applications wherever makes the most sense at a particular time, whether for technical, compliance or commercial reasons.  Adopting public cloud while keeping a legacy three-tier architecture on-premise only adds to the overall complexity of IT.

Each public cloud provider also has its own strengths, and one may be better suited to a particular use case than another.  Enterprise Cloud is an extension of the software-defined capabilities of HCI that will allow applications to migrate between private hyperconverged data centres and any public cloud as business requirements dictate.  Machine-learning-based predictive analytics services will monitor the overall hybrid ecosystem and, based on commercial, governance or security rules, make sure that applications are running in the right place at the right time.
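As a toy example of the kind of placement decision such a service might automate, the sketch below applies a few simple governance and cost rules to decide where a workload should run.  The rules, thresholds and workload names are hypothetical, and far simpler than anything a real machine-learning-driven service would use.

# Toy placement logic: governance rules first, then a crude cost heuristic.
# Every rule, threshold and workload here is invented for illustration.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    data_must_stay_onprem: bool   # compliance or data-sovereignty constraint
    bursty: bool                  # does demand spike unpredictably?
    monthly_hours: int            # how continuously the workload runs


def place(workload: Workload) -> str:
    if workload.data_must_stay_onprem:
        return "private HCI cluster"   # governance rules always win
    if workload.bursty:
        return "public cloud"          # elastic capacity suits sudden spikes
    if workload.monthly_hours > 500:
        return "private HCI cluster"   # steady workloads are cheaper to own
    return "public cloud"


if __name__ == "__main__":
    for w in (Workload("payroll", True, False, 720),
              Workload("web-campaign", False, True, 120),
              Workload("analytics", False, False, 720)):
        print(f"{w.name}: run on {place(w)}")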

None of this would be possible, of course, if IT did not evolve away from the hardware-centric approach of the past towards a fully software-defined, hyperconverged Enterprise Cloud.

DataSolutions Marketing