Defending the cloud from ‘the enemy within’

Matrium Technologies Pty Ltd
Thursday, 19 May, 2011


Securing a virtual environment means more than just policing its entry points, says Marc Meulensteen, Cloud Security Expert, Spirent Communications.

Cloud security is receiving a lot of attention nowadays, because fears about data security in the cloud remain the biggest hurdle to mass acceptance. Cloud computing removes many of the familiar physical control points for protecting sensitive data - from personnel screening to PIN and smartcard access to the data centre. Instead of visible, tangible security systems, we must put our trust in specialist IT skills, and the people with those skills, to safeguard data in the cloud. For the non-specialist, this is scary.

A natural first approach to cloud security is to model those familiar perimeter defence systems, to focus on internet traffic to and from the cloud - if the bad guys can’t get in, then the cloud must be safe. That’s a very necessary security component, but it is not enough. At worst, it can lead to a false sense of security, because it does not address attacks that originate within the cloud itself - the enemy within.

Confidence on the rise

Confidence is infectious. If the authorities in 1886 had known how many people would die on the roads over the next century, they might have taken steps to silence Karl Benz and his patent for a “vehicle with gas engine propulsion”. Yet we now drive out on the killing fields with little thought for the risks.

You can argue that fears about cloud security are ‘irrational’ and you can equally argue that they are very rational. And when the tide turns, and cloud computing gains mass acceptance, some will say it is because the herd mindset has shifted, and others will say the cloud has been ‘proved’ safe.

Gartner conducted a survey between September and December 2010 that showed growing confidence amongst 2000 CIOs across 50 nations. Among its findings were:

  • Cloud computing and virtualisation are their two highest priorities for 2011.
  • They expect these services to allow up to 50% of current budgets to be shifted from operations towards new business strategies.
  • Only 3% currently run the majority of their operations over cloud or SaaS, but this should rise to 43% over the next four years.

That last result may indicate that CIOs do not yet trust the cloud but are prepared to bet on it being made sufficiently secure in the near future. Would such confidence be justified?

First, the bad news

For the end user, security in the cloud simply means “is my data safe?”. For the service provider there is not only the question of data security but also one of maintaining service level agreements (SLAs).

It is no good guaranteeing rock-solid data protection if the security measures are so cumbersome that they degrade the service provided. And if the response to a cyberattack slows down the system so much that the service is unusable, damage has been done - however well the data was protected.

So all security measures must stand or fall on the results of stringent testing under realistic operating conditions. The tests must confirm not only that data is protected, and not altered, corrupted or leaked, but also that service performance is maintained under all sorts of operating and attack conditions - for ultimately user perception is the thing that counts.

But there is a fundamental problem in testing any virtual system, in that it is not tied to specific hardware. The processing for a virtual switch or virtual server is likely to be allocated dynamically to make optimal use of available resources. Test it now and it may pass every test, but test it again and the same virtual device may be running on different hardware and there could be a different response to unexpected stress conditions. Malicious traffic could even be forcing the system to dynamically assign all CPU power to the attack process.
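To make that repeatability problem concrete, a sound test process has to tag every result with the virtual device’s current physical placement, so that divergent runs can be explained. The Python sketch below illustrates the idea; the target URL and the placement endpoint are hypothetical placeholders, not part of any real product.

import statistics
import time
import urllib.request

# Hypothetical endpoints: TARGET is the service under test; PLACEMENT_API
# is assumed to report which physical host the virtual machine currently
# occupies (for example, via a hypervisor management interface).
TARGET = "http://10.0.0.5/health"
PLACEMENT_API = "http://10.0.0.1/vm/placement"

def measure_latency(url, samples=50):
    """Return the mean response time in milliseconds over `samples` requests."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        times.append((time.perf_counter() - start) * 1000)
    return statistics.mean(times)

# Run the identical test several times, tagging each result with the VM's
# current placement; divergent results on different hosts are exactly the
# repeatability problem described above.
for run in range(5):
    host = urllib.request.urlopen(PLACEMENT_API, timeout=5).read().decode().strip()
    print(f"run {run}: host={host}, mean latency={measure_latency(TARGET):.1f} ms")
    time.sleep(60)  # give the scheduler a chance to migrate the VM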

This is what worries the customer - is it really possible to apply definitive testing to something as formless as a virtual system? And knowing what we do about the determination and skill of cybercriminals, how can we secure a system as amorphous, dynamic and complex as the cloud?

This is why cloud security has focused on securing access to the virtual system. The concept of a secure perimeter is easier to grasp because it is analogous to physical security keeping the wrong people out of a building, and it lends itself to traditional solutions such as firewalls and intrusion prevention systems (IPS) monitoring access points to the virtual system.

In the case of a physical data centre, people also understand that access security alone is not enough: an authorised operator can enter the building with good intentions yet still be subject to human error - perhaps connecting a laptop without realising it has been infected with a virus. So traffic within the network needs to be inspected and cleaned, as well as what comes into the network from outside.

When it comes to the cloud, even if every precaution is taken to stop such human errors, there are other risks. Take the proliferation of internet ‘app stores’, where independent software developers are given space in the cloud to set up shop and sell their software. How can we be sure that even a trusted developer will never upload malicious software into the cloud?

In a physical data centre, the solution is to disinfect all traffic within the system as well as what comes in and out. Hence the need for IPS monitoring and cleaning traffic at strategic points within the data centre, in addition to firewall protection at the perimeter. In a virtual system this is a far greater problem, because the system is never static. So where do you install the IPS and how can you monitor network traffic that is constantly being re-routed?

Bear in mind, also, that virtualisation adds another level of vulnerability to the data centre - intraserver vulnerability. Traditional data centres have interserver and infrastructure vulnerabilities, such as the possibility of performance and security weaknesses internally between servers, externally at the gateway and in the end-to-end network. Virtualisation not only intensifies these potential threats, it adds the risk of intraserver threats, between virtual machines inside a single physical server.

What is needed is some way of monitoring and disinfecting traffic in a virtual network as reliably as a physical IPS in a physical network. And unless there is some way of testing the protection in the virtual environment, we can never be sure how safe it is.

Now for the good news

The question is: is it really possible to apply definitive testing to something as formless as a virtual system? The answer came in a report published by Broadband Testing, Secure Virtual Data Centre Testing (September 2010).

Broadband Testing set out to determine whether it is possible to secure a virtual environment, knowing that its first problem was to create a rigorous and repeatable test process. The security system under test was the TippingPoint IPS-based Secure Virtualization Framework (SVF), and the test bed consisted of both the physical and virtual versions of Spirent Communications’ Avalanche traffic generator. These were combined with a typical network environment including both physical and virtual elements in order to replicate a truly representative hybrid data centre environment.

Using cloud computing test solutions built around a performance, availability, security and scalability (PASS) methodology, Broadband Testing was able to monitor and test internal and external-to-internal traffic under normal operating and extreme conditions, plus a wide range of attack scenarios. All the threats in the HP TippingPoint signature base were successfully blocked, and the only attacks that got through were those whose signatures had not yet been added to the then-current database.
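As an illustration of what a PASS-style test matrix looks like, the Python sketch below enumerates traffic direction, load level and attack type, and checks each combination against a blocking and latency criterion. Every name and threshold here is an assumption made for illustration; this is not Spirent’s or HP TippingPoint’s actual tooling.

from itertools import product

# A minimal sketch of a PASS-style scenario matrix. The harness function
# is a stand-in: a real rig would drive the traffic generator and read
# the IPS logs. All names and values here are illustrative assumptions.
DIRECTIONS = ["internal", "external-to-internal"]
LOAD_LEVELS = ["normal", "extreme"]
ATTACKS = [None, "sql_injection", "port_scan", "worm_propagation"]
SLA_LATENCY_MS = 100.0  # assumed service level threshold

def run_scenario(direction, load, attack):
    """Stand-in for the real harness: drive traffic through the IPS and
    report whether the attack was blocked and the mean latency seen.
    Returns dummy values so the matrix runs end to end."""
    return (attack is not None, 20.0)

for direction, load, attack in product(DIRECTIONS, LOAD_LEVELS, ATTACKS):
    blocked, latency_ms = run_scenario(direction, load, attack)
    passed = (attack is None or blocked) and latency_ms < SLA_LATENCY_MS
    print(f"{direction:>22} | {load:>7} | {str(attack):>16} | "
          f"{'PASS' if passed else 'FAIL'}")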

Such was the success of the test process that Steve Broadhead, founder and director of Broadband Testing, was able to conclude: “Can we trust the cloud? The answer now is ‘yes’. Virtual security works in theory but, until there was a way to test it thoroughly under realistic conditions, solution vendors have had a hard time convincing their customers. With the use of combined physical and virtual test machines, the testing proved not only highly rigorous, but also quite simple to operate.”

Test criteria for the cloud

There are two aspects to testing applications in a virtual environment: first, functional testing, to make sure the installed application works and delivers the service it was designed to provide; and second, volume testing under load.

The first relates closely to the design of the virtual system - although more complex, the virtual server is designed to model a hardware server, and any failures in the design should become apparent early on. In that case, later functional testing of new deployments is simply a wise precaution.

Load testing is an altogether different matter, because it concerns the impact of unpredictable traffic conditions on a known system. In a virtual system, and even more so in the cloud, there can be unusual surges of traffic leading to unexpected consequences. Applications that perform faultlessly for ten or a hundred users may not work so well for a hundred thousand users - quite apart from other outside factors and attacks that can heavily impact internet and overall performance. So the service provider cannot offer its clients a realistic SLA without testing each application under volume loading and simulated realistic traffic conditions.
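By way of illustration, the following Python sketch ramps concurrency against a hypothetical application endpoint and reports errors and mean response time at each step. It assumes the third-party aiohttp library and merely stands in for a real traffic generator, which would drive far greater loads.

import asyncio
import time

import aiohttp  # third-party HTTP client, assumed to be installed

TARGET = "http://10.0.0.5/app"  # hypothetical application under test

async def one_user(session):
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    async with session.get(TARGET) as resp:
        resp.raise_for_status()
        await resp.read()
    return time.perf_counter() - start

async def ramp():
    # Step the concurrency up and watch where latency or errors spike: an
    # application that is fine at 100 concurrent users may degrade sharply
    # long before it reaches the target load.
    connector = aiohttp.TCPConnector(limit=0)  # lift the default connection cap
    async with aiohttp.ClientSession(connector=connector) as session:
        for users in (10, 100, 1000, 10000):
            results = await asyncio.gather(
                *(one_user(session) for _ in range(users)),
                return_exceptions=True,
            )
            times = [r for r in results if isinstance(r, float)]
            errors = len(results) - len(times)
            mean = sum(times) / len(times) if times else float("nan")
            print(f"{users:>6} users: errors={errors}, mean={mean:.3f} s")

asyncio.run(ramp())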

Large data centres and the cloud pose particular problems because of their sheer scale. The Spirent test platforms used by Broadband Testing address this with rack systems supporting large numbers of test cards, scalable to several terabits per rack. These modular devices can be adapted to any number of test scenarios, and the one used by Broadband Testing had the option of a software module that specifically addresses the challenge of testing the performance, availability, security and scalability of virtualised network appliances as well as cloud-based applications across public, private and hybrid cloud environments.

This combination of a physical test device plus virtual test software was shown to provide exceptional visibility into the entire data centre infrastructure, where as many as 64 virtual servers, including a virtual switch with as many virtual ports, may reside on a single physical server and switch access port. With this combination, it is not only possible to test application performance holistically under realistic loads and stress conditions, but also to determine precisely which component - virtual or physical - is impacting performance.

To create realistic test conditions, the virtual software was used in conjunction with devices designed to generate massive volumes of realistic simulated traffic. The simulation replicates real-world traffic conditions including fault conditions and realistic user behaviour, while maintaining over one million open connections from distinct IP addresses.
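For a sense of what ‘distinct IP addresses’ means in practice, the sketch below holds open many TCP connections, each bound to a different source alias. The server address and the alias range are assumptions, and the alias addresses must already be configured on the test host’s interface; dedicated traffic generators do this at vastly greater scale and in hardware.

import asyncio

SERVER = ("10.0.0.5", 80)  # hypothetical system under test
# Source aliases assumed to be configured on the test host's interface
# (e.g. with `ip addr add`); dedicated generators do this in hardware.
SOURCE_IPS = [f"10.1.{i // 250}.{i % 250 + 1}" for i in range(1000)]

async def hold_connection(src_ip):
    """Open a TCP connection bound to a distinct source address and hold
    it open, as a load generator does at far larger scale."""
    reader, writer = await asyncio.open_connection(*SERVER, local_addr=(src_ip, 0))
    await asyncio.sleep(300)  # keep the connection open for five minutes
    writer.close()
    await writer.wait_closed()

async def main():
    await asyncio.gather(*(hold_connection(ip) for ip in SOURCE_IPS))

asyncio.run(main())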

The quality of user experience was also under test at all times. Here, latency becomes a critical parameter. Even minute levels of latency can become an issue across a virtual server, while the very presence of monitoring devices produces delays that must be compensated for. So it was important that Broadband Testing chose a test platform that provided automatic latency compensation, adjusting according to the interface technology and speed.
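The arithmetic of latency compensation is simple: subtract the test equipment’s own, interface-dependent delay from each raw measurement. A minimal sketch, with made-up offsets rather than any vendor’s calibration data:

# The offsets below are illustrative only, not any vendor's calibration data.
FIXTURE_DELAY_US = {
    ("1GbE", "copper"): 8.0,   # assumed per-interface delay in microseconds
    ("10GbE", "fibre"): 1.5,
}

def compensated_latency(measured_us, interface):
    """Remove the test equipment's own contribution from a raw reading."""
    return measured_us - FIXTURE_DELAY_US[interface]

print(compensated_latency(42.0, ("10GbE", "fibre")))  # prints 40.5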

The test scenario described so far covers the security testing of the infrastructure and interserver security. As mentioned earlier, there is a further security issue in today’s data centres - the intraserver security between virtual machines running on the same server. Such traffic stays within that server and never hits the actual wire, so testing that part of the secured cloud requires tests run between two virtualised testers. Spirent Avalanche allows not only testing between physical and virtual machines, but also virtual-to-virtual testing on the same server in the cloud.

Testing the virtual system proved not only possible, but surprisingly straightforward, because the testing methodology is no different from that used for regular security testing, and the same test cases can be used.

Conclusion

To make a virtual environment secure, we need to address what is happening inside it, not just what is coming in or out.

Two key conclusions emerged from the Broadband Testing report. First, it is reassuring to know that it is indeed possible to secure the virtual environment using the TippingPoint SVF solution.

More significantly in the long run, the fact that rigorous and consistent system tests can be carried out in a virtual environment means that from now on we can confidently test the security of virtual networks and the cloud. With Spirent equipment and methodology we can test the cloud and all aspects of the infrastructure, interserver and intraserver security.

The factors that cause people to doubt the cloud - its dynamism, shapelessness and lack of tangible boundaries - have been addressed by TippingPoint, put to rigorous test with Spirent equipment and shown to work by Broadband Testing.

So - yes, we can trust the cloud.
