How can you test cloud computing performance?
Is your ‘cloud’ delivering the expected performance? How do you know until you test it? What are the complexities of measuring virtual systems?
As you sit at your PC cursing the little window saying your software needs updating (and asking whether you would rather waste time doing it now, or waste time later), the idea of software delivered to your computer from a pure source, like mains water or electricity, becomes highly attractive.
It’s not a fundamentally new idea, because it harks back to the old days of a central mainframe doing all the processing while users interacted via a network of dumb terminals. It re-emerged in the early 90s with talk of the “thin client”, and then, with Internet intoxication, came the idea of Software as a Service - buying usage only as needed from a source that managed all the licensing and upgrading and kept the applications in peak condition.
The main difference is that a new name was needed to mark the fact that this idea now works. Pioneering attempts failed simply because broadband access was not yet widely good enough to support the service, but with today’s widespread broadband it is becoming a practical proposition. It’s called Cloud Computing because instead of the processing happening inside your computer, or in the company mainframe, it happens at some unknown location in the Internet cloud. It doesn’t even need to happen in any one location or piece of hardware, for it could be running on some geographically dispersed virtual machine.
That’s true cloud computing, arriving like a paid public service from who knows where, but you could also use the term to describe software delivery within an organisation’s intranet from a physical or virtual data centre - the so-called “Private Cloud”. As before, the key to successful service is that the network to the mobile or desktop computer must be fast enough not to frustrate a user who is used to the speed and responsiveness of on-board software. Also, in the case of software running in a virtual server, the network connecting its parts must be sufficiently fast and low-latency to allow the application to perform as well as it would on a single physical machine.
The performance challenge of cloud computing
Performance really is the challenge. Cloud computing potentially offers all the benefits of a centralised service - pay for what you actually use, professional maintenance of all software, single contact and contract for any number of applications, processing on state-of-the-art hardware - but it has to match the speed, responsiveness and quality experience of local software if the service is going to be accepted by the user.
So how does the provider ensure that level of service? The answer, as we shall see, must lie in exhaustive testing. The complexity of virtual systems makes for unpredictable behaviour; you can only be sure once you have put it to the test. But there is also a fundamental problem in testing any virtual system, in that it is not tied to specific hardware. The processing for a virtual switch or virtual server is likely to be allocated dynamically to make optimal use of available resources. Test it now, and it may pass every test; but test it again and the same virtual device may be running in a different server, with a different response to unexpected stress conditions.
Seen in those terms, there has been no absolute, definitive way to put virtual systems to the test. Spirent has, however, come up with a virtual test solution, as will be explained later. But first, let us focus on the question of maintaining applications in a centralised system.
Whether the central processing runs on a physical, virtual or cloud server, it needs to hold a large amount of application software to satisfy the client base, and that software needs to be maintained with every version upgrade and bug fix as soon as they become available. It’s a complex task, and it is increasingly automated to keep pace with development. There must be a central library keeping the latest versions and patches for each application package, and some mechanism for deploying these across the servers without disrupting service delivery.
At this stage, the service provider is in the hands of the application developer - the service to the end user can only be as good as the latest version on the server. We hope the application developer has done a good job and produced a reliable, bug free product, but the service provider’s reputation hangs on that hope until the software has been thoroughly tested on the provider’s own system.
In the case of a physical server, we do not expect any problem because the application is likely to have been developed and pre-tested on a similar server. But virtualisation and cloud computing add many layers of complexity to the process. The speed of the storage network becomes a significant factor if the application makes multiple data requests per second, and that is just one of many traffic issues in a virtual server. Faced with such complexity, predicting performance becomes increasingly difficult, and the only answer is to test it thoroughly under realistic conditions. You cannot expect your clients to play the role of guinea pigs, so usage needs to be simulated on the network. It is critical to gauge the total impact of software additions, moves and changes, as well as network or data centre changes. Every change must be tested to prevent mission-critical business applications from grinding to a halt.
Testing applications in a virtual environment
There are two aspects to testing applications in a virtual environment: first, functional testing, to make sure the installed application works and delivers the service it was designed to provide; and then volume testing under load. The first relates closely to the design of the virtual system - although it is more complex, the virtual server is designed to model a hardware server, and any failures in the design should become apparent early on. Later functional testing of new deployments is just a wise precaution in that case.
Load testing is an altogether different matter, because it concerns the impact of unpredictable traffic conditions on a known system. To give a crude analogy: one could clear the streets of London of all traffic, pedestrians, traffic controls and road works then invite Michael Schumacher to race from the City of London to Heathrow airport in less than 30 minutes. But put back the everyday traffic, speed restrictions, traffic lights and road works and not only will the journey take much longer, it will also become highly unpredictable - one day it might take less than an hour, another day over two hours to make the same journey.
In a virtual system, and even more so in the cloud, there can be unusual surges of traffic leading to unexpected consequences. Applications that perform faultlessly for ten or a hundred users may not work so well for ten or a hundred thousand users - quite apart from other outside factors and attacks that can heavily impact Internet performance. So the service provider cannot offer any realistic service level agreement to the clients without testing each application under volume loading and simulated realistic traffic conditions.
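The effect described above - an application that is fast for a handful of users degrading sharply at scale - usually comes down to contention for some shared resource. The sketch below (an illustrative stand-in, not Spirent's tooling) simulates concurrent user sessions against a service whose back end serialises on a lock, and shows how median and worst-case latency grow with the number of simulated users:

```python
import threading
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def service_request(shared_lock):
    """Stand-in for one application request; the lock models a
    contended back-end resource (e.g. storage or a database row)."""
    with shared_lock:
        time.sleep(0.001)  # 1 ms of serialised work per request

def run_load_test(n_users, requests_per_user=20):
    """Simulate n_users concurrent sessions and collect per-request latency."""
    lock = threading.Lock()
    latencies = []  # list.append is thread-safe in CPython

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            service_request(lock)
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(user_session)
    return statistics.median(latencies), max(latencies)

for users in (1, 10, 50):
    median, worst = run_load_test(users)
    print(f"{users:3d} users: median {median*1000:.1f} ms, worst {worst*1000:.1f} ms")
```

Even this toy model makes the point: throughput of the shared resource is fixed, so each added user lengthens the queue, and tail latency grows far faster than the average - which is why volume testing, not just functional testing, is needed before committing to a service level.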
The Spirent test solution
Network performance and reliability have always mattered, but virtualisation makes these factors critical. Rigorous testing is needed at every stage in deploying a virtual system. During the design and implementation phases it is needed to inform buying decisions and to ensure compliance. Then, during operation, it is equally important to monitor for performance degradation and anticipate bottlenecks, as well as to ensure that applications still work under load, as suggested above.
But large data centres and cloud computing pose particular problems because of their sheer scale. Spirent TestCenter is the company’s flagship test platform for testing such complex networks, and it meets the need for scalability with a rack system supporting large numbers of test cards, scaling up to 4.8 terabits in a single rack.
As a modular system, TestCenter can be adapted to any number of test scenarios. In particular, Spirent TestCenter Virtual is a software module that specifically addresses the challenge mentioned above: how is it possible to test any virtual system reliably when it is running on dynamically allocated hardware resources?
With Spirent Virtual in the TestCenter it is not only possible to test application performance holistically under realistic loads and stress conditions; combined with modules that generate massive volumes of realistic simulated traffic, it can also determine which virtual or physical component is impacting performance. Spirent’s Avalanche is another such module: it accurately replicates real-world traffic conditions by simulating error conditions and realistic user behaviour while maintaining over one million open connections from distinct IP addresses. By challenging the infrastructure’s ability to stand up to the load and complexity of the real world, it puts application testing in a truly realistic working environment.
The Virtual software provides unsurpassed visibility into the entire data centre infrastructure. It is designed specifically to meet the needs of a complex environment where as many as 64 virtual servers, including a virtual switch with as many virtual ports, may reside on a single physical server and switch access port. It extends and complements the capabilities of Spirent TestCenter to accurately benchmark and optimise performance of virtual server switches and cloud-based virtualisation.
As was suggested, even minute levels of latency can become an issue across a virtual server. So how does one measure such low levels of latency, when the very presence of monitoring devices introduces delays that must be compensated for? Manual compensation is time-consuming, and in some circumstances impossible, whereas in the TestCenter this compensation is automatic and adjusts according to the interface technology and speed.
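The principle of compensating for the measurement apparatus itself can be illustrated in miniature. The sketch below (a simplified analogy, not the TestCenter's hardware-level mechanism) first calibrates the cost of the timing calls themselves, then subtracts that overhead from each raw measurement:

```python
import time
import statistics

def calibrate_timer_overhead(samples=1000):
    """Estimate the cost of the timing calls themselves by timing
    an empty interval many times and taking the median."""
    costs = []
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        costs.append(t1 - t0)
    return statistics.median(costs)

def measure_latency(operation, overhead):
    """Time one operation and subtract the calibrated measurement overhead."""
    start = time.perf_counter()
    operation()
    raw = time.perf_counter() - start
    return max(raw - overhead, 0.0)  # never report negative latency

overhead = calibrate_timer_overhead()
latency = measure_latency(lambda: sum(range(10000)), overhead)
print(f"timer overhead ~{overhead*1e9:.0f} ns, compensated latency {latency*1e6:.1f} us")
```

The same calibrate-then-subtract idea applies at any scale; what changes in a real test platform is that the compensation must be re-derived per interface technology and line rate, which is what makes doing it manually so laborious.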
The acceptability of cloud computing depends upon delivering a quality of experience as good as local processing but without all the overheads of licencing and software version management. Quality of experience is a subtle blend of many factors such as latency, jitter and packet loss and all these can be precisely monitored on the TestCenter under wide-ranging traffic loads, both running pre-programmed tests automatically and allowing operator intervention via a simple user interface.
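Those three ingredients of quality of experience can each be computed from a simple packet capture. The sketch below (an illustration using hypothetical timestamp data, not a Spirent interface) derives mean latency from send/receive timestamps, smoothed interarrival jitter in the style of RFC 3550, and packet loss from gaps in the sequence numbers:

```python
def qoe_metrics(packets):
    """packets: list of (seq, send_time, recv_time) for received packets,
    in sequence order; lost packets are simply absent from the list.
    Returns (mean latency, smoothed jitter, loss ratio)."""
    latencies = [recv - send for _, send, recv in packets]

    # RFC 3550-style jitter: smoothed difference in transit times
    # between consecutive packets, with a 1/16 smoothing gain.
    jitter = 0.0
    for prev, cur in zip(packets, packets[1:]):
        d = abs((cur[2] - cur[1]) - (prev[2] - prev[1]))
        jitter += (d - jitter) / 16

    # Loss: how many sequence numbers we expected vs. how many arrived.
    expected = packets[-1][0] - packets[0][0] + 1
    loss = 1 - len(packets) / expected

    return sum(latencies) / len(latencies), jitter, loss

# Hypothetical capture: (sequence number, send time, receive time) in seconds;
# packet 3 was lost in transit.
capture = [(1, 0.000, 0.020), (2, 0.010, 0.031), (4, 0.030, 0.052)]
mean_lat, jitter, loss = qoe_metrics(capture)
print(f"latency {mean_lat*1000:.1f} ms, jitter {jitter*1000:.2f} ms, loss {loss:.0%}")
```

A test platform does this continuously, per flow, across millions of packets; the value it adds over the arithmetic itself is generating the wide-ranging traffic loads under which these numbers are worth measuring.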
And the question of security
As well as delivering a good quality of user experience, the cloud computing provider needs to allay clients’ fears about security in the ’cloud’. A hacker who accesses a soft switch can re-route traffic at will, so virtualisation creates potentially severe vulnerability across the whole business - and, in the case of cloud computing, across the social infrastructure. Again, the growth in virtualisation demands a corresponding increase in prior and routine testing.
Here the need is not only to test under unusual load conditions - because those are the times when attacks are most likely to succeed - but also to simulate a whole range of attack scenarios. The application must still work when tested with the network’s security devices operating under attack and exploited vulnerabilities: that is real life.
Spirent’s system delivers the most comprehensive and accurate emulation of end-user traffic and unexpected attack traffic, even at high load. Simply put, Spirent can model user behaviour while scaling to full Internet levels. This “no compromise” approach is important because measuring the impact on the user and the network, while loading the application with real-world traffic patterns, helps identify, isolate and resolve problems before the provider commits them to service agreements and puts them on-line.
Cloud computing offers many advantages to the user, but the provider must assure the client that the service will consistently deliver on its promises. Fail, and users will vote with their feet. The only way to ensure success is to offer a tried and tested service. Spirent has the necessary solutions to ensure that, along with a revolutionary approach to testing a virtual environment: Spirent Virtual can generate a virtual test structure in the cloud itself.
By Daryl Cornelius, Director Enterprise for EMEA Spirent Communications.