Cloud computing moves to the edge


By John Schmidt, Data Center Solutions Lead, CommScope
Friday, 10 June, 2016



The face of data centre infrastructure is changing, thanks to the cloud.

Global data centre networks are fully integrated into our daily lives and activities. Every post, tweet, email, online purchase, financial transaction, picture and video we collectively produce flows through an intricate network of switches, routers and servers connected via fibre optics inside monolithic, concrete buildings we elegantly refer to as the “cloud”. As a society we are wholly reliant on this infrastructure and it is fundamentally changing. In 2015 the world’s population produced 3.7 exabytes of mobile data on a monthly basis, which is a 74% increase over 2014 and a 4000x increase over the past decade[1].

As a result of this unprecedented growth in data consumption, content is being pushed closer and closer to consumers, or to the edge of the network. This change in architecture has profound implications on data centre networking and design that we will discuss further. First let us examine the drivers that are pushing content to the edge of the network.

Latency

As data consumption has increased, the willingness to wait for content has decreased. Our expectation of immediate service continues to rise even as richer content is served up and devoured. A consumer viewing Netflix in high definition expects near-zero buffering. That same consumer has the same expectation for 4K video, even though the corresponding bandwidth is 5x higher[2]. Consumers also expect apps drawing data from the cloud to feel as responsive as data stored natively on a tablet or mobile device.

The average user has the same expectations, in terms of both quality and instantaneous access, of a movie that is downloaded and played and one that is streamed. New technologies such as virtual reality/augmented reality, high-resolution cloud-based gaming and cloud-assisted autonomous vehicles will require even lower latencies to support evolving user expectations.

Content delivery networks (CDNs) have made a science of caching and delivering content within local regions. CDNs were previously a niche market dominated by companies such as Akamai, but now major players like Amazon and Google offer their own CDNs for both their own use and their customers'. CDN performance will increase dramatically from a buildout at the edge of the network: the closer these networks are to the user, the better the performance. Of course, this need for lower latency is not limited to consumers. Businesses are also driving the need for edge computing; most notably, brokerages, and in particular high-frequency trading (HFT) firms, rely on minimal latency to provide the highest level of performance to their clients.
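The physics behind "closer is better" can be sketched with a quick calculation. This is a minimal illustration, assuming light propagates through fibre at roughly 200,000 km/s (refractive index ~1.5); it ignores switching, queueing and server time, which add further delay on top of propagation.

```python
# Rough round-trip propagation delay over fibre, propagation only.
# Assumption (not from the article): ~200,000 km/s signal speed in fibre.
FIBRE_KM_PER_SEC = 200_000

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBRE_KM_PER_SEC * 1000

for km in (50, 500, 5000):
    print(f"{km:>5} km -> {rtt_ms(km):6.2f} ms round trip")
# An edge site 50 km away costs 0.5 ms; a core site 5000 km away costs 50 ms.
```

Even before congestion is considered, moving content from a distant core facility to a metro-area edge site removes tens of milliseconds per round trip, which compounds across the many round trips a typical page load or stream start requires.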

Data sovereignty

The concept of data sovereignty is centred on the belief, and in many jurisdictions the law, that digital data is subject to the regulations of the country in which it is stored. With cloud computing, data could reside nearly anywhere in the network. Strict interpretations enacted by various countries mean it is the responsibility of the network provider to ensure that data originating in a given country is stored locally, in compliance with the law. The obvious way to achieve this is to operate local, in-country data centres.

This is generally counter to the concept of a virtualised cloud where data could exist in various instances globally. With edge data centres, cloud providers have the ability to comply with even the strictest interpretations of the law while also providing optimal service to the local region. In the long run, cloud companies are lobbying for safe harbour exemptions, but until then they must find alternative means of compliance. Edge data centres provide a means to this end.

Bandwidth consumption costs

Most consumers think they are the only ones who pay for access to high-speed pipes. The debates around net neutrality focus on access for the customer, but in the backbone of the internet, transport is a multibillion-dollar business. In 2015, the cost of a 10 Gbps long-haul transport circuit was US$4000 per month[3].

The further the data centre is located from the user, the more costs incurred from long-haul transport. Large service providers have built out their own networks, but this remains a very capital-intensive business, especially with trans-ocean subsea networks. As the edge of the network is pushed closer and closer to the user, the costs for long-haul transport will decrease or even be eliminated.
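The economics can be made concrete with a back-of-envelope calculation built on the US$4000/month per 10 Gbps figure cited above. The regional traffic volume and edge cache hit rate below are illustrative assumptions, not data from the article.

```python
import math

# US$ per 10 Gbps long-haul circuit per month (figure cited in the text).
COST_PER_10G_MONTH = 4000

def monthly_transport_cost(peak_gbps: float) -> float:
    """Cost of enough 10 Gbps long-haul circuits to carry a peak load."""
    circuits = math.ceil(peak_gbps / 10)
    return circuits * COST_PER_10G_MONTH

peak = 100       # Gbps of peak regional demand (assumed)
hit_rate = 0.8   # fraction of traffic served from an edge cache (assumed)

before = monthly_transport_cost(peak)
after = monthly_transport_cost(peak * (1 - hit_rate))
print(f"Long-haul cost without edge cache: ${before}/month")
print(f"Long-haul cost with 80% edge hit rate: ${after}/month")
```

Under these assumptions, serving 80% of traffic locally cuts the long-haul bill from $40,000 to $8000 per month for a single region, and the saving scales with every region in which an edge site is deployed.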

Analytics

The original purpose of data centres was simply to store and access data. This required companies to have a local primary data centre with a geographically separated disaster recovery centre. Networks have since evolved to distributed computing in order to support content delivery. With all this stored data, it was inevitable that companies would want to mine it for trends to make better decisions.

Big data became the latest buzzword as a new field of data analytics formed and, again, this changed the network from one of long-term storage to one of processing and ready access to data. We now live in the era of real-time analytics. Machine learning has given data analysts the ability to apply algorithms to massive amounts of data in real time; a familiar example is the ads that appear when we run an internet search. The ads appear on our screens in real time, but behind the scenes sophisticated algorithms are identifying and monetising the ads we are most likely to click through, based on our current search term as well as our browsing history. To make this process as seamless as possible, companies must put massive amounts of processing power as close as possible to the user. The field of real-time analytics will continue to expand, and the demands it places on the network will continue to grow.

Impact on data centre design and location

As data centres move closer to users at the edge of the network, one of the obvious considerations is location. Since data can reside anywhere, data centres in the core of the network are logically placed wherever is most convenient to the owner. Those decisions can be driven by economics, tax incentives, existing real estate, proximity to renewable energy or a number of other factors. The key requirements for data centres at the core are access to power and communications. This dynamic fundamentally changes in the new paradigm because, by definition, the edge of the network is in close proximity to users. As a result, edge data centres must be flexible and innovative. As an example, Microsoft recently announced it was testing an underwater data centre, citing that 50% of the world’s population lives within 200 km of the ocean. Other innovative avenues for bringing computing resources to the edge of the network must also be explored. A handful of options follow.

Colocation

A number of colocation (colo) providers are now building out white space in tier 2 cities with the express purpose of positioning it as edge data centre capacity. This trend will likely continue as companies seek to push their content closer to the user without incurring the fixed capital costs of building brick-and-mortar locations. Colo provides companies with a relatively straightforward and scalable means of deployment; assuming the provider already has infrastructure in place, deployment is also fast. Several providers operate globally for companies that are expanding or servicing a global user base, and many hyperscale and cloud providers have already used colo to deploy their global networks of edge data centres. The downside, if any, is that a company is bound by where the provider chooses to locate, which may or may not be optimal for the network. With so many choices globally, this potential issue is quickly being resolved as providers respond to customer demands.

Central office consolidation

Telecom operators have seen a significant decline in fixed telephony lines, freeing up real estate within their central offices. As circuit-switched infrastructure is decommissioned, it makes room for Ethernet switches and servers; and because of the distance limitations of fixed-line telephony, these central offices are already located at the edge of the network. The main obstacles are power and cooling: central offices are traditionally the domain of DC power and hardened equipment, whereas data centre equipment is more likely to be AC powered and to require significant cooling. This mismatch could limit the ability to convert this valuable communications real estate into edge computing centres, and further innovation is required to facilitate the transition. Innovations around DC-powered equipment, lower-power servers and modular data centres can solve this issue.

Modular data centres

Prefabricated or modular data centres, sometimes called “data centre in a box”, may be an elegant solution to the challenges of edge buildouts. As a self-contained solution, they can be deployed in a much shorter time frame than traditional brick-and-mortar facilities and can be deployed nearly anywhere. Any existing real estate with access to power and communications could be transformed into an edge data centre and modular solutions can also be deployed alongside existing central offices to leverage existing real estate and communications lines. Because of their flexible size, modular data centres can be deployed in a wide variety of applications to suit the client’s unique requirements. They can also be designed to be very efficient on power usage, with certain designs leveraging adiabatic or free-air cooling as opposed to direct exchange cooling used in many traditional designs.
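The efficiency claim can be quantified with PUE (power usage effectiveness), the standard ratio of total facility power to IT power. The overhead figures below are illustrative assumptions chosen to show the shape of the comparison, not measurements of any particular design.

```python
# PUE = total facility power / IT power. A PUE of 1.0 would mean every watt
# goes to IT equipment. The cooling overheads below are assumed for
# illustration: direct-exchange (DX) cooling typically costs far more power
# than free-air or adiabatic designs.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power usage effectiveness for a facility."""
    return (it_kw + cooling_kw + other_kw) / it_kw

dx = pue(it_kw=100, cooling_kw=50, other_kw=10)        # traditional DX design
free_air = pue(it_kw=100, cooling_kw=10, other_kw=10)  # free-air/adiabatic

print(f"DX-cooled PUE:  {dx:.2f}")
print(f"Free-air PUE:   {free_air:.2f}")
```

Under these assumed loads, the free-air design delivers the same 100 kW of IT capacity for 40 kW less facility power, which matters doubly at edge sites where available power may be constrained.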

Operational challenges

Another value proposition of edge data centres is that staffing may be minimal to non-existent. As the total number of data centres increases, it becomes challenging and costly to staff every site to the same degree as core data centres. At the same time, understanding what is happening within the data centre has never been more critical. As a result, technologies like data centre infrastructure management (DCIM) and automated infrastructure management (AIM) will become essential in edge deployments. DCIM enables remote monitoring of all major aspects of the data centre, in particular power, cooling, security and communications, which are the lifeblood of the facility. One benefit of DCIM is the ability to monitor the health of the edge of the network from a centralised location; more important still for edge data centres is the ability to document and coordinate the interaction of multiple sites globally.

This level of coordination is absolutely critical when deploying multiple edge sites. Remote management is also a driver of AIM, which can monitor, track and alert on any change of state in the physical layer infrastructure. Imagine an outage caused by a physical disconnect at a site hundreds or thousands of miles from the core of the network: AIM can identify the exact location, down to the port and patch cord, so the problem can be corrected. Without AIM, identifying such outages is akin to finding a needle in the proverbial haystack. Implementing AIM at the edge leads to superior operation and management of distributed data centres, giving IT managers full remote visibility of all physical connections, instant knowledge of any changes and up-to-date reports on which devices are connected where and how. This significantly shortens troubleshooting time in case of downtime, provides comprehensive control over the physical infrastructure, improves work order management, strengthens security and improves change and asset management.
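The core of that port-level detection can be sketched as a diff between two snapshots of connection state. Real AIM systems read this state from intelligent patch panels and cords; the port names and device labels below are purely illustrative.

```python
# Minimal sketch of AIM-style change detection: compare two snapshots of
# port -> connected-device mappings and report what changed. A value of None
# models an unpatched port.

def diff_connections(previous: dict, current: dict) -> list:
    """Return human-readable alerts for ports whose connection changed."""
    alerts = []
    for port in sorted(set(previous) | set(current)):
        old, new = previous.get(port), current.get(port)
        if old != new:
            alerts.append(f"{port}: {old or 'empty'} -> {new or 'empty'}")
    return alerts

before = {"rack3/panel2/port14": "server-0042",
          "rack3/panel2/port15": "server-0043"}
after = {"rack3/panel2/port14": None,           # patch cord pulled
         "rack3/panel2/port15": "server-0043"}

for alert in diff_connections(before, after):
    print(alert)  # rack3/panel2/port14: server-0042 -> empty
```

A centralised console watching feeds like this from every edge site is what turns the "needle in a haystack" disconnect into a single, precisely located alert.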

The future of the edge

We will continue to see buildouts closer and closer to the edge of the network as content becomes richer, devices become more intelligent and user expectations continue to rise. This will drive innovation at the edge of the network on multiple fronts, including hardware, software and the facilities themselves. In particular, we will see more deployments of modular data centres, offering flexibility and efficiency, coupled with advanced DCIM and AIM capabilities to monitor all aspects of the remote sites. This move to the edge will ultimately benefit the entire network of users and clients as network efficiency improves. Network transformation is not simply desirable for better performance; it will be essential to providing the content and analytics that will drive our businesses and our lives in the very near future.


References

[1] Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2015–2020 White Paper

[2] Netflix Internet Connection Speed Recommendations

[3] EdgeConneX white paper researched by ACG Research: The Value of Content at the Edge (http://www.edgeconnex.com/insights/white-papers/the-value-of-content-at-the-edge/)
