Why this bare metal is the new cloud rock

To the discerning CIO, it gives undiluted power to run critical workloads with guaranteed performance even under the most unpredictable conditions

Deepak Kumar
The CIO of a rapidly growing e-commerce company, one that had the ambition to be among the Top 3 players in its home market, was faced with a peculiar business IT problem. She had to support her CEO in meeting the company’s aggressive plan to grow the site’s traffic three-fold in the next six months. The company had put a digital marketing plan in place to help generate that growth in traffic and had also expanded its line of offerings to match the likely demand.

A three-day online blitz, which was to coincide with the next New Year holidays, had been planned as a mega promotional event in which discounts and freebies would flow lavishly. But therein also lay a business problem that was bothering the CIO. She could very well see that while traffic on the site would rise exponentially during those three days, it would settle back to a much lower level once the blitz was over.
 
So while the CIO’s mandate was to ensure that IT scaled up to make the mega event a success, she could ill afford a captive IT deployment that would remain underutilized for months after the event. Of course, she didn’t have the budget to deploy that much redundancy and, even if she had, it would have amounted to a gross underutilization of IT assets, something she would have been loath to accept.

But wasn’t this a clear use case for public cloud-sourcing? Why, then, was she struggling to take the obvious IT decision?
To be fair to her, she had good reason to be sceptical. Her past experience had shown that a public cloud solution worked impressively in general but could show performance lags during the New Year season, when every e-business would be rushing to the cloud with its own agenda. This scramble often increased the workloads on shared cloud assets, which in turn could compromise the user experience.

Moreover, most public clouds offered stock operating system environments, but her company was using a custom OS that supported some key applications designed to differentiate the company.

She had to weigh all the options very carefully before committing to a decision she would have to live with, good or bad.

There was only one event to live, almost

And she must ensure that she made a success of it, she told herself. The question remained, though: how?
She started screening her options. First, any captive deployment was straightaway ruled out, and so was the private cloud option, for the simple reason that a private cloud would be built by virtualizing internal IT, which didn’t have the necessary scale in the first place.

The second option, to go with one of the public clouds commonly used by her peers at other user organizations, was where the dilemma still lay. What if she was overestimating the odds that oversharing of a public cloud could lead to performance bottlenecks for users of her site during the holiday peak loads? More importantly, was there a better option?

She set out to build a wish list of what she wanted the cloud to do.

Building a dream cloud

Ideally, the public cloud should guarantee a dedicated pool of resources for the entire duration of the three-day mega event.

Given that the event was expected to generate a voluminous rise in traffic to the e-commerce site, not only would multiple web servers need to be deployed, but load balancers would also be needed to ensure that no single server was overloaded by client requests. The e-commerce nature of the website also meant that features like Secure Sockets Layer (SSL) encryption were a hygiene factor, not a good-to-have. She wanted the cloud provider to explicitly assure her that the resources would not be shared with other user organizations during the entire period of the event.
Equally importantly, she wanted to load her own OS instances onto the public cloud’s servers, so as to extend to all users the same differentiated experience around which her company had strategically built its USP.

Other resources were to include compute power, storage, e-commerce application servers as well as load balancers and content delivery gateways.
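
To make the load-balancing requirement on this wish list concrete, here is a minimal, illustrative Python sketch of round-robin distribution with simple health tracking. The server names and methods are hypothetical and not tied to any cloud provider's actual API; the point is only that requests rotate across a pool so that no single web server absorbs them all.

# Illustrative only: a toy round-robin balancer. In practice the provider's
# load balancer would also handle SSL termination and automated health checks.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)                # pool of web-server addresses
        self.healthy = set(self.servers)            # servers currently in rotation
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Take a failed server out of rotation, e.g. after a failed health check."""
        self.healthy.discard(server)

    def mark_up(self, server):
        """Return a recovered server to rotation."""
        if server in self.servers:
            self.healthy.add(server)

    def next_server(self):
        """Return the next healthy server, skipping any marked down."""
        for _ in range(len(self.servers)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy web servers available")

# Usage: spread six hypothetical requests across three hypothetical web servers.
lb = RoundRobinBalancer(["web-01", "web-02", "web-03"])
for _ in range(6):
    print(lb.next_server())          # web-01, web-02, web-03, web-01, ...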

The deciding factors

Drawing up the wish list had helped. Some of the leading cloud providers offered only a best-effort availability of resources at peak times, which effectively made performance unpredictable. While the risk could be mitigated by overprovisioning, that would defeat the very purpose of optimizing costs by way of cloud-sourcing.

The other ask, that she should be able to port her own OS instances onto the cloud’s servers, looked like too tall an order...well, almost. While negotiating with the executives of a relatively new cloud offering in the India market, she was pleasantly surprised to learn that it was possible. (As a veteran CIO who had always pushed the boundaries of IT to meet business goals, she had been convinced it would be.)

This new offering, from a trusted IT services brand, she was told, provided bare-metal functionality that allowed users to host instances of their own customized OS on the cloud’s servers. In effect, this amounted to having functionality and controls otherwise possible only on a private cloud.

Moreover, while the database servers could be paired with top-end CPU resources and high-speed storage on bare metal, the web servers could be placed in a virtualized environment for rapid page loads. This mixed approach would also keep the costs well optimized.
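
As a rough illustration of such a mixed deployment, the sketch below expresses one hypothetical plan in plain Python: a small bare-metal tier for the databases and a larger virtualized tier for the web servers, scaled up only for the event. Every name, count and rate in it is invented for the example; an actual order would go through the provider's own provisioning tools.

# Hypothetical deployment plan; field names, sizes and rates are invented.
deployment_plan = {
    "db_tier": {                          # latency-sensitive databases on bare metal
        "type": "bare_metal",
        "count": 2,
        "cpu": "dual 8-core",
        "storage": "high-speed SSD",
        "os_image": "custom-company-os",  # the CIO's own customized OS image
    },
    "web_tier": {                         # stateless web servers on virtual instances
        "type": "virtual",
        "count": 12,                      # scaled up only for the three-day event
        "cpu": "4 vCPU",
        "behind": ["load_balancer", "cdn"],
    },
}

RATES_PER_NODE_HOUR = {"bare_metal": 2.0, "virtual": 0.25}   # made-up dollar figures

def estimated_event_cost(plan, hours=72):
    """Rough, illustrative cost of running the plan for a three-day (72-hour) event."""
    return sum(tier["count"] * RATES_PER_NODE_HOUR[tier["type"]] * hours
               for tier in plan.values())

print(f"Illustrative 3-day cost: ${estimated_event_cost(deployment_plan):,.2f}")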

To connect to the cloud provider’s network, all one needed was a link to the nearest network point of presence (PoP), which could be established securely through a network-carrier partner. The PoP would then connect to the cloud provider’s nearest data center, where the customer’s servers would be hosted. And if end-customers were expected to reach the website from multiple geographies, the servers could be replicated across multiple data centers to optimize throughput by shortening transit times.
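
The "serve each user from the nearest replica" idea behind that replication can be sketched in a few lines of Python. The regions, data centers and latency figures below are invented purely for illustration; in practice the provider's DNS or global load-balancing service would make this choice automatically.

# Invented latency estimates (ms) from each user region to each replica data center.
LATENCY_MS = {
    "north_india":    {"dc_chennai": 42, "dc_singapore": 78},
    "south_india":    {"dc_chennai": 12, "dc_singapore": 55},
    "southeast_asia": {"dc_chennai": 60, "dc_singapore": 18},
}

def nearest_datacenter(user_region):
    """Pick the replica data center with the lowest estimated round-trip time."""
    candidates = LATENCY_MS[user_region]
    return min(candidates, key=candidates.get)

for region in LATENCY_MS:
    print(region, "->", nearest_datacenter(region))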

Within the cloud environment, the data could be moved across the servers over private networks at multi-gigabit speeds, at no additional cost. Load balancers and CDN-optimized networks would ensure that no one server got overloaded during peak traffic periods, thus ensuring that user experience was not compromised for any target geography.

The CIO decided to sign up with IBM SoftLayer (yes, that’s the cloud provider’s name) and, with all the service-level agreements in place, she is now confident that the forthcoming mega online blitz will be an astounding IT success. The cherry on top is that the overall cost is comparable with other public cloud offerings.

First Published: Nov 25 2014 | 12:19 PM IST
