New video streaming services, live networked gaming applications, the ongoing expansion of the Internet of Things, autonomous vehicle development, the rollout of 5G and the general global increase in digitalisation are all creating significant new demands on the Internet, and in particular on the availability of data centre capacity to store and process large amounts of data much closer to the end users (“at the Edge”).
For Internet infrastructure, this means a shift in focus from using relatively few hyperscale data centres to using many more Edge data centres of smaller size. Of course, the term Edge will mean different things to different people. For a large international colocation company, Edge can mean entering new markets with data centres of 2-5MW in size. Meanwhile, Edge for a mobile operator can mean placing small 100kW micro data centres together with 5G telecom base stations. Independent of definition though, the smaller scale and larger number of Edge data centres being deployed around the world will bring significant new challenges.
Capital and operational cost hurdles
First, the capital cost to build data centres (per MW) will always be higher for smaller scale builds than for larger facilities, thus increasing the challenge to achieve a positive business case. This is important as companies operating data centres typically get paid by clients based on the megawatts of IT hosted. Capital costs can potentially be managed in part by reducing technical redundancy, with less power and cooling equipment installed. However, there are obvious risks associated with this approach, which is why many data centre operators are now looking at deploying Edge facilities using custom-designed mass-produced prefabricated data centres such as Flexenclosure’s eCentre.
At the same time, the increased number of data centres being deployed will undoubtedly lead to increased operational costs, especially when one considers the operational cost per MW. For Edge data centres to really take off, it’s therefore clear that operational cost reduction must be prioritised, and the good news is that there are several ways of achieving this extremely effectively.
The rise of the “smart” data centre
People are always an enormous operational cost at a data centre, so by making each facility as “smart” as possible in its own right, while simultaneously reducing the number of on-site personnel to a minimum and minimising power consumption and service costs, operational costs can be held in check. For this to happen, operations must essentially change from a local model to a centralised one, where data centres are operated from a single central network operations centre (NOC), much like other distributed infrastructure is already operated today, such as telecom and electrical power networks.
The following five areas are key to keeping down the operational costs of Edge data centres:
Centralised Equipment Monitoring
Similar to the way Telecom Network Management Systems and SCADA systems used in electrical power networks operate today, the NOC will need to be able to keep track of multiple data centres from a centralised location; monitor and report on business KPIs and alarms; and combine a top-level network view of all Edge data centres with the ability to easily drill down into a specific data centre as needed.
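The kind of top-level network view with per-site drill-down described above can be sketched as a simple alarm-aggregation loop. Site names, metric names and thresholds below are purely illustrative assumptions, not taken from any real NOC product:

```python
# Minimal sketch of centralised alarm aggregation across Edge sites.
# Site names, metrics and thresholds are illustrative, not from any real system.
from dataclasses import dataclass, field

@dataclass
class SiteStatus:
    name: str
    metrics: dict                       # latest sensor readings from this site
    alarms: list = field(default_factory=list)

THRESHOLDS = {"inlet_temp_c": 27.0, "ups_load_pct": 90.0}  # illustrative limits

def evaluate(site: SiteStatus) -> SiteStatus:
    """Raise an alarm for every metric breaching its threshold."""
    site.alarms = [f"{m} = {v} exceeds {THRESHOLDS[m]}"
                   for m, v in site.metrics.items()
                   if m in THRESHOLDS and v > THRESHOLDS[m]]
    return site

def network_view(sites):
    """Top-level view: one OK/ALARM line per site; drill into .alarms as needed."""
    return {s.name: ("ALARM" if evaluate(s).alarms else "OK") for s in sites}

sites = [
    SiteStatus("edge-01", {"inlet_temp_c": 24.5, "ups_load_pct": 61.0}),
    SiteStatus("edge-02", {"inlet_temp_c": 29.1, "ups_load_pct": 93.0}),
]
print(network_view(sites))  # edge-02 breaches both illustrative thresholds
```

In a real deployment the metrics would arrive over a monitoring protocol rather than being constructed in code, but the aggregation principle is the same.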
Remote Site Access Management
Site security staff can be a major operational cost item, especially for smaller and more remote Edge data centres. Here remote access control and CCTV systems are needed that can be operated from the NOC with no need for on-site staff. Such systems must be flexible enough to handle ad hoc situations, such as personnel who have lost or broken their access cards, or subcontractors needing temporary access. One approach, in this case taken by Etix Everywhere, is to enable local printing of QR codes to allow access.
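One generic way to implement NOC-issued temporary access is a signed, time-limited token that can be rendered as a QR code and checked offline by the door controller. The sketch below is an illustration of that concept only; it is not Etix Everywhere's actual mechanism, and the key handling is deliberately simplified:

```python
# Sketch: time-limited, HMAC-signed access token a NOC could issue and render
# as a QR code. Illustrative only; not any vendor's actual implementation.
import base64
import hashlib
import hmac
import time

SECRET = b"per-site-secret"  # hypothetical key shared by NOC and door controller

def issue_token(visitor: str, site: str, valid_secs: int, now=None) -> str:
    """Create a token granting `visitor` access to `site` for `valid_secs`."""
    expiry = int(now if now is not None else time.time()) + valid_secs
    payload = f"{visitor}|{site}|{expiry}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    return base64.urlsafe_b64encode(payload + b"|" + sig.encode()).decode()

def verify_token(token: str, site: str, now=None) -> bool:
    """Door-controller side: check signature, site binding and expiry."""
    visitor, tok_site, expiry, sig = base64.urlsafe_b64decode(token).decode().split("|")
    payload = f"{visitor}|{tok_site}|{expiry}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    return (hmac.compare_digest(sig, expected)
            and tok_site == site
            and int(expiry) > (now if now is not None else time.time()))
```

Because verification needs only the shared key, the door controller can validate visits even if its uplink to the NOC is temporarily down.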
Remote Troubleshooting
In certain situations it may not be realistic to send maintenance experts to the site. Full access to all site equipment must therefore be possible from the NOC, in order to enable remote troubleshooting. This is feasible as most data centre equipment provides an Ethernet interface to allow for remote log-in. Naturally, the security of these remote connections must be properly addressed. Of course there will be some situations where technicians will be needed at the site to replace physical hardware, but the majority of typical support requirements should be manageable remotely.
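A first step in any remote troubleshooting session is confirming that the equipment's management interface is reachable at all. A minimal NOC-side reachability probe, assuming hypothetical management hosts and ports, might look like this:

```python
# Sketch: NOC-side reachability check of an equipment management interface.
# Hosts and ports are assumptions for illustration.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the (hypothetical) management ports of on-site equipment.
# for host, port in [("ups-mgmt.edge-01", 443), ("chiller.edge-01", 22)]:
#     print(host, "reachable" if port_open(host, port) else "UNREACHABLE")
```

If the probe fails, the fault is likely in the site network or the device's power feed rather than in the device's application layer, which already narrows the diagnosis before anyone is dispatched.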
Building Information Model
All equipment should be tagged with an asset ID during design as part of defining a data centre 3D BIM (Building Information Model) that is an exact replica of what is installed on-site. Having defined a BIM, there should never be a need to visit the site to find out what is installed and how things are routed. The BIM will enable precise remote planning of data centre upgrade works or maintenance. Note that it is critically important to maintain discipline over time to update the BIM in order to avoid any non-documented “local fixes.”
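The queryable side of such a model can be sketched as an asset registry keyed by the design-time asset IDs, with routing captured as links between assets. Asset IDs, kinds and locations below are invented for illustration:

```python
# Minimal sketch of a BIM-backed asset registry. Asset IDs, locations and
# the power-routing chain are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Asset:
    asset_id: str      # tag assigned at design time
    kind: str
    location: str      # position within the data centre model
    feeds: tuple = ()  # downstream asset IDs, i.e. how things are routed

registry = {
    a.asset_id: a
    for a in [
        Asset("UPS-01", "ups", "power-room/rack-A", feeds=("PDU-01",)),
        Asset("PDU-01", "pdu", "it-room/row-1", feeds=("RACK-07",)),
        Asset("RACK-07", "rack", "it-room/row-1/pos-7"),
    ]
}

def upstream_of(asset_id: str) -> list:
    """Plan works remotely: which assets feed this one?"""
    return [a.asset_id for a in registry.values() if asset_id in a.feeds]

print(upstream_of("PDU-01"))  # the UPS feeding this PDU
```

With routing captured this way, an upgrade planner at the NOC can trace what is affected by taking any asset out of service without setting foot on site.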
Preventive and Predictive Maintenance
As far as possible, Edge data centres should be outfitted with equipment needing little or no maintenance. Maintenance management software should be used to keep track of planned and actual service work and to keep the status of all assets updated, as well as the spare parts inventory. Data from sensors installed at each data centre can be interrogated for the prediction and optimisation of service/inspection periods for cooling systems, generators, electrical systems, etc. And coordinated maintenance logistics planning can further reduce service visit costs for multiple data centres.
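The sensor-driven prediction of service periods mentioned above can be illustrated with the simplest possible model: a linear trend fitted to daily readings, extrapolated to the day a service limit will be crossed. The metric, values and limit are illustrative assumptions:

```python
# Sketch: predict when a monitored value (e.g. a filter differential pressure
# that rises as the filter clogs) will cross its service limit, using a
# least-squares linear trend. Values and limits are illustrative.

def predict_service_day(readings, limit):
    """Fit value = slope*day + intercept; return the day the trend hits limit."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # no degradation trend, no service predicted
    intercept = mean_y - slope * mean_x
    return (limit - intercept) / slope

# Daily readings drifting upward by 0.2 per day; service limit is 3.0.
readings = [1.0, 1.2, 1.4, 1.6, 1.8]
print(predict_service_day(readings, 3.0))  # -> 10.0 (day index when limit is hit)
```

Real predictive-maintenance systems use far richer models, but even this trend-based scheduling lets visits be booked when they are needed rather than on a fixed calendar, which is exactly where the coordinated logistics savings come from.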
There is no question that the world needs many more data centres than it currently has, especially at the Edge. The good news is that by building smart prefabricated data centres, initial capital and ongoing operational costs can be kept in check, allowing operators to efficiently and profitably deliver the data-rich services their customers are increasingly demanding.