Organisations are moving big data workloads to the cloud in increasing numbers. The hyperscale trend is best understood as part of a broader push towards OPEX and usage-based consumption of IT infrastructure, software and services. Of critical importance will be how companies manage these environments to be as lean as possible.
The common perception is that Public Cloud is cheaper because it is usage based. In reality, the main benefit lies in the sheer scale of the hyperscale providers’ data centres: by aggregating the requirements of many customers, they can offer a far wider range of services than any single organisation could justify on its own. Having said that, Enterprise IT operations teams are sceptical, and rightly so, about moving business-critical internal applications to nameless, faceless global organisations with no telephone number for assistance and support, and with no transparency on data security or underlying hardware performance.
The efficiency hyperscale providers offer gives enterprises direct access to specific, unusual and expensive hardware, in portions only as big as they need, which may otherwise be unavailable. Consumers of these services will be offered a great deal of choice in which provider to use. Clearly this is beneficial for businesses. The presence of hyperscale providers alone will catalyse the local market for these services, a market that has not yet seen the traction achieved globally, largely due to the previous lack of local access.
Complementary businesses offering architecture and management services will also flourish, while non-hyperscale players, such as local infrastructure-as-a-service providers, will similarly benefit from Cloud becoming more mainstream through wider acceptance. However, businesses will need to stay well informed of new developments to understand the true value of the services on offer. Hyperscale providers are constantly releasing new technologies, often targeted at Big Data, including serverless computing. In addition, the ability to monitor a deployment and assess whether its scale matches what is needed and stays within budget is a key skill for managing big data and analytics in the Public Cloud. Another important concern with Cloud is bill shock: unexpected usage charges at the end of the month. Routed keeps pricing simple and fixed where requested, so that customers can easily understand their costs.
Hyperscale providers are riding this wave into the data centre market, particularly in South Africa and elsewhere on the continent where penetration remains low. Where they can offer services that are difficult to deliver elsewhere, such as those facilitating big data analytics, they will be successful. This is unlikely to happen at the expense of other Cloud providers, at least in the near term, while businesses have yet to transform their applications.
Ultimately, hyperscale providers have their place, as do local providers and even owned infrastructure on premise. The vast number of technologies and services on offer, and how to navigate them for best results, will be both a challenge and an advantage.
Andrew Cruise is the Managing Director at Routed.