Opinion

Virtual machine security – offloading the endpoint

If many endpoints’ traffic passes through the VM host, why not move security there and catch “bad things” before they even enter the endpoint, asks ESET’s Cameron Camp.

Now that virtual machines (VMs) have moved into core workload deployments, guest VM endpoint workloads have a more serious impact on overall performance, and running security at every endpoint seems like duplication of effort. If many endpoints’ traffic passes through the VM host, why not move security there, and sort of catch ‘bad things’ before they even enter the endpoint?

Offloading security – or really, pushing the heavy lifting, security-wise, to the core of a server with lots of processing and network power – seems like smart optimization. That, in conjunction with the push toward thinner and thinner clients capable of performing small tasks, rather than the all-in-one PCs of yesteryear, makes the core VM host server security option a natural fit.

Also, offloading some of the load from the upstream network by handling it at the natural networking intersection of a bridge on a VM server means you don’t need to buy bigger core routers just yet: a major cost saving, especially as latest-generation routers move toward becoming their own security appliances, with the associated steep price tags.

“Stopping modern threats isn’t a question of finding a single ‘magic bullet’ defense, but more of layering defenses.”

But while it’s certainly wise to look for ‘bad things’ as they come across the wire, there’s more to the puzzle. Stopping modern threats isn’t a question of finding a single ‘magic bullet’ defense, but more of layering defenses.

In the same way that signature-based defense is no longer sufficient by itself for defending the endpoint, a sound security posture should include a mix of network- and host-based defenses, with the defensive approach aligned to the role of the VM you’re protecting.

A VM hosting a full-fledged desktop environment, for example, needs a different set of tools than a Linux VM acting as a content-caching proxy. They have different roles and need different defenses. It’s safe to say your caching proxy is in no particular danger of a typical user plugging in an infected USB drive, but that happens all the time with desktop environments.
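
As a rough illustration of matching defenses to roles, the sketch below maps a couple of hypothetical VM roles to different sets of defensive layers. The role names, layer names and selection logic are assumptions for illustration, not any particular product’s configuration.

```python
# A minimal sketch of role-based defense profiles. Role and layer names are
# illustrative only, not any vendor's API or feature list.

from dataclasses import dataclass, field


@dataclass
class DefenseProfile:
    """Defense layers appropriate to a given VM role."""
    role: str
    layers: list = field(default_factory=list)


# Hypothetical profiles: a user desktop faces removable media and browser
# threats; a caching proxy mostly needs network- and file-level inspection.
PROFILES = {
    "desktop": DefenseProfile(
        role="desktop",
        layers=["signature_av", "removable_media_control",
                "web_filtering", "host_firewall"],
    ),
    "caching_proxy": DefenseProfile(
        role="caching_proxy",
        layers=["network_ids", "cache_content_scanning", "host_firewall"],
    ),
}


def profile_for(vm_role: str) -> DefenseProfile:
    """Pick the defense profile that matches a VM's role."""
    # Unknown roles fall back to the desktop profile, the broadest set here.
    return PROFILES.get(vm_role, PROFILES["desktop"])


if __name__ == "__main__":
    print(profile_for("caching_proxy").layers)
```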

One way of offloading the endpoint is a shared local cache. This approach scans files at the VM server level to prevent rescanning of the same files by multiple endpoints in the environment. In this way, scanned files are flagged at the server level and that information is passed to the endpoints. This means total server load can be reduced across the enterprise, and servers can stay deployed longer without upgrades to handle increased load from rescanning by innumerable VM clients.
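
A minimal sketch of the idea, assuming a simple hash-keyed verdict store on the VM host: the first guest to submit a file pays the scanning cost, and every subsequent request for the same file is answered from the shared cache. The class and function names are illustrative, not any vendor’s API.

```python
# A sketch of shared local cache scanning: verdicts are keyed by file hash
# on the host and reused by every guest VM that asks about the same file.

import hashlib


class SharedScanCache:
    """Host-side cache of scan verdicts, shared by all guest VMs."""

    def __init__(self, scanner):
        self._scanner = scanner   # callable: bytes -> "clean" | "malicious"
        self._verdicts = {}       # sha256 hex digest -> verdict

    def check_file(self, data: bytes) -> str:
        """Return a cached verdict, scanning only on a cache miss."""
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self._verdicts:
            # The first VM to see this file pays the scan cost...
            self._verdicts[digest] = self._scanner(data)
        # ...every other VM gets the flagged result back immediately.
        return self._verdicts[digest]


# Usage: two endpoints asking about the same file trigger only one scan.
cache = SharedScanCache(scanner=lambda data: "clean")
print(cache.check_file(b"same bytes"))   # scanned once
print(cache.check_file(b"same bytes"))   # served from the shared cache
```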

Implemented at the server level, a shared local cache scanning approach can also serve to blunt larger attacks: by shunting the scanning load elsewhere, your endpoints are more likely to stay operational for longer.

The same is true of a security appliance monitoring your VM network, which can trigger other network actions to triage, redirect or otherwise thwart attacks before they hit the core of your organization.
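
By way of illustration only, such a triage hook might look something like the sketch below; the alert fields and the quarantine and redirect actions are hypothetical stand-ins for whatever your appliance and switch fabric actually expose.

```python
# A sketch of a security appliance reacting to an alert on the VM network.
# The alert fields and the actions returned here are hypothetical examples,
# not the interface of any real appliance.

def triage(alert: dict) -> str:
    """Map an appliance alert to a network action before it reaches the core."""
    severity = alert.get("severity", "low")
    if severity == "critical":
        return f"quarantine vm {alert['source_vm']}"        # isolate the guest
    if severity == "high":
        return f"redirect {alert['source_vm']} to sandbox"  # divert for analysis
    return "log only"                                        # keep watching


print(triage({"source_vm": "web-01", "severity": "high"}))
```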

Five years ago there just weren’t that many options for defending the operational side of larger virtual machine deployments, but the whole ecosystem has grown up now and given us some great tools. Whether you want to deploy a security appliance, shared local cache scanning, or lightweight endpoint security suites, you now have a lot more options. And in the end, if you match them with a more holistic approach to defending your whole environment, you have a much higher chance of surviving much more complex attacks, which have – sadly – also grown just as fast.
