There is no question that our industry is growing fast. There are articles everywhere on the growing demand for new data centre capacity… the latest technologies and applications driving the boom in data generation and consumption… and the increasing number of non-traditional players with fresh capital entering the market. And not just in our industry press but in the mainstream media too, where stories about 5G, autonomous vehicles, drone deliveries and kids tweeting from the family’s Internet-enabled refrigerator are now standard fare.
But for all the bullish talk about data industry growth, what we hear far less about is how this new capital is going to be spent; the challenges and complexities of building new data centre facilities that can seamlessly expand as the market continues to grow; and ultimately whether the significant sums being invested in new facilities will meet the goals of the operators concerned.
Data centre operators – especially colocation providers building up new data centre capacity – are under immense pressure to deliver at the highest possible levels. Downtime is completely unacceptable. Finding the sweet spot of size, power density and efficiency that will future-proof the operation and keep service pricing both competitive and profitable is very hard. And when multiple suppliers and contractors are involved in a complex new project, important questions can go unasked and design teams can end up working without input from the customers they will eventually serve. Clearly, this is not ideal.
Critical questions need to be asked
For a colocation provider to set itself up to deliver the services its customers want efficiently, a number of critical questions need to be asked and answered before the facility designers begin their work. What standards and certifications will the design, build and operations all need to conform to? Who are our customers, and what do we know of the services they have used in the past? What are their expectations now? Has a customer previously used a particular brand or technology they are familiar with, and were they happy or unhappy with it? How quickly will they want to deploy new infrastructure? Will customers be satisfied with simply managing service levels, or will they also care about the actual infrastructure being deployed? How efficiently will each customer use their contracted capacity? Will there be potential to oversell a certain percentage of capacity with minimal risk? What service levels are to be contracted? And under what circumstances will we achieve the operational efficiencies that the business case is built upon?
These are just some of the issues that need to be reviewed and unless the colocation provider has an anchor tenant (or two) which has clearly articulated its requirements, the result is often a cacophony of uncoordinated demands from internal stakeholders and external consultants that makes it impossible for the design team to articulate a clear vision that everyone can be satisfied with. The end result is a solution that is compromised both theoretically and practically before it has even been built, as well as one that has a significantly increased risk of not being optimised to meet the operator’s or customers’ business objectives.
And if things weren’t already uncertain enough, throw into the mix unpredictable shifts in the nature of customer demand and the introduction of new technologies over the life span of the data centre, and the risk increases to the point of jeopardising the viability of the data centre itself.
The scenario painted above is somewhat bleak, but all is not lost. These pitfalls can be avoided by adopting a few simple and basic guidelines.
First and foremost, “Think Customer”. The 20-30 per cent of customers expected to take up 70-80 per cent of capacity are a good place to start. Customers of this type almost certainly have pre-existing deployments from which the design team can extract key learnings. Where and with whom are their current facilities? What worked and what didn’t? Are they moving out of their current facility and, if so, why? What design innovations are they looking for or interested in? What is their demand forecast, and how can that demand be translated into a capacity roadmap that then becomes part of the design and build plan?
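To make the last of those questions concrete, the translation from demand forecast to capacity roadmap can be sketched as simple arithmetic. The sketch below is purely illustrative: the customer names, kilowatt figures, 80 per cent utilisation assumption and 500 kW phase size are hypothetical, not figures from any real project.

```python
# Hypothetical sketch: turning anchor-customer demand forecasts into a
# phased capacity roadmap. All names and numbers are illustrative assumptions.

def capacity_roadmap(forecasts, utilisation=0.8, phase_size_kw=500):
    """Aggregate per-customer forecasts (contracted kW per year) and size
    the number of modular build phases needed in each year.

    forecasts: dict mapping customer name -> list of contracted kW per year.
    utilisation: assumed fraction of contracted capacity actually drawn.
    phase_size_kw: deployable capacity added by each build phase.
    """
    years = max(len(f) for f in forecasts.values())
    roadmap = []
    for year in range(years):
        # If a customer's forecast is shorter, carry its last value forward.
        contracted = sum(f[year] if year < len(f) else f[-1]
                         for f in forecasts.values())
        expected_load = contracted * utilisation
        phases_needed = -(-int(expected_load) // phase_size_kw)  # ceiling
        roadmap.append((year + 1, contracted, expected_load, phases_needed))
    return roadmap

forecasts = {
    "anchor_tenant_a": [400, 800, 1200],  # contracted kW, years 1-3
    "anchor_tenant_b": [200, 400, 400],
}
for year, contracted, load, phases in capacity_roadmap(forecasts):
    print(f"Year {year}: {contracted} kW contracted, "
          f"~{load:.0f} kW expected load, {phases} phase(s) needed")
```

The point of even a toy model like this is that it forces the design team and the anchor tenants to agree on explicit numbers for utilisation and phase size before the build plan is fixed.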
Further, when the design team engages with customer contacts it’s important to ensure that both the customer’s project management and operations management teams are represented. They are highly likely to have very different priorities with respect to the set-up of the new facility, so it’s crucial that both are fully satisfied with the final design.
Secondly, “Think Modular”. Adopting a modular approach allows an operator to initiate a data centre deployment in a way that minimises the capital that needs to be invested up front and allows the build to happen over phases that make sense for the business case. In their most basic form, these phases may address customer expansion in white space and/or power densities, but they may also incorporate future power or cooling technologies to secure further operational efficiencies.
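The capital-deferral argument behind phased builds can be illustrated with a short calculation. The figures below are assumptions invented for the example (a 2 MW full build, a hypothetical cost per 500 kW phase, and a made-up demand curve), not vendor pricing.

```python
# Illustrative comparison (all figures are assumptions, not real costs) of
# capital committed up front in a single full build versus a modular build
# that adds 500 kW phases only as demand materialises.

FULL_BUILD_CAPEX = 10_000_000      # build the entire 2 MW facility on day one
PHASE_CAPEX = 2_800_000            # assumed cost per 500 kW modular phase
PHASE_KW = 500
demand_kw_by_year = [400, 900, 1400, 1900]

phases_deployed = 0
for year, demand in enumerate(demand_kw_by_year, start=1):
    # Deploy additional phases only once forecast demand requires them.
    while phases_deployed * PHASE_KW < demand:
        phases_deployed += 1
    committed = phases_deployed * PHASE_CAPEX
    print(f"Year {year}: demand {demand} kW, {phases_deployed} phase(s), "
          f"capital committed ${committed:,} vs ${FULL_BUILD_CAPEX:,} up front")
```

Note the trade-off the numbers expose: in this sketch the fully phased build eventually costs more in total than the single build, but the operator commits far less capital in the early years, when demand is least certain.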
The term modular is typically associated with containerised and prefabricated form factors, in some cases combined with a brick and mortar element to the build, thus creating a hybrid solution. That said, not all modular facilities are created equal, with containerised options being severely limited in terms of both initial configuration and longer-term flexibility. A far preferable solution would be to use a prefabricated system such as Flexenclosure’s eCentre. This combines off-site construction, complete system pre-integration, unlimited open white space configuration options and, compared with most alternatives, relatively risk-free future expansion capability.
Prefabricated form factors also allow for design iterations to be implemented quickly and easily in order to accommodate the evolution of customer requirements and changes in technology, while at the same time maintaining a predictable build quality and a promise of a faster overall deployment. This is particularly important when an operator has customers that are demanding multiple additional megawatts in a very short space of time.
Finally, speed to market can be further enhanced with a prefabricated solution when an operator starts to “Think Standardisation”. A prefabricated solution allows for the design of repeatable standard data centre halls for specific customers which can be rolled out of the factory and deployed on site in a matter of months. Further, for larger scale customers that have a standardised model, a colocation provider using prefabricated facilities can easily customise the operating environment and operations model specific to that particular customer in specific data halls – a level of customisation at a cost and speed that would be very difficult to achieve with a brick and mortar approach.
Ultimately, there is no doubt that the number of colocation data centres required around the world is set to grow exponentially. There is also no doubt that building any new facility is an undertaking fraught with complexity and risk. However, adopting a prefabricated solution from a supplier experienced in thinking “customer”, “modular” and “standardisation” can significantly decrease project risk while increasing future service and facility flexibility versus more traditional brick and mortar or containerised facilities.