When it comes to architecting secure enterprise networks, capable cybersecurity tools are a key part of the equation. But they’re not a silver bullet.
In a recent network segmentation project on behalf of a client, my team and I discovered a vulnerability affecting Cisco’s Firepower Threat Defense firewall line (since disclosed by Cisco). The fact that our discovery was inadvertent, and that we observed it during a pre-planned segmentation project, speaks to the importance of proactive and continuous monitoring. Without it, vulnerabilities can still lurk beneath the surface of “normal” operations.
As a network architect, my work with clients involves not only designing and implementing operational tools and resources, but also routinely testing the resulting environments.
In a recent engagement, I worked with my team on a segmentation project to help a client control, log and monitor unprivileged access. The project involved testing the various security zones of the client’s environment to ensure its highly specific access control needs were being met.
As part of the network segmentation efforts, we implemented Cisco’s Firepower Threat Defense firewall. Our segmentation project wasn’t aimed at identifying vulnerabilities in Cisco’s firewall line. Instead, our discovery was more of an unhappy accident as we worked alongside the client’s IT team to determine their asset protection needs, delineate the various security zones of their environment, and execute access testing. As part of this execution phase, we deployed new firewalls between the various network segments. After each change, working in conjunction with the client, we scanned the new firewalls to confirm that the open ports matched our change requests.
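To make that verification step concrete, here is a minimal sketch of the kind of check involved. It is a hypothetical simplification, not our actual tooling: the interface address and approved ports are invented, and a real engagement would use a dedicated scanner such as Nmap. The idea is simply to compare what actually answers on the firewall against what the change request approved.

```python
import socket

# Hypothetical values for illustration only -- not the client's real
# addresses or change-request contents.
FIREWALL_INTERFACE = "10.0.20.1"
APPROVED_PORTS = {443, 8305}          # ports the change request allows
PORTS_TO_CHECK = range(1, 1025)       # check the well-known port range

def tcp_port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; treat a successful connect as an open port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

observed_open = {p for p in PORTS_TO_CHECK if tcp_port_is_open(FIREWALL_INTERFACE, p)}

unexpected = observed_open - APPROVED_PORTS   # open, but not in the change request
missing = APPROVED_PORTS - observed_open      # approved, but not responding

if unexpected:
    print(f"ALERT: ports open outside the change request: {sorted(unexpected)}")
if missing:
    print(f"WARNING: approved ports not responding: {sorted(missing)}")
if not unexpected and not missing:
    print("Open ports match the change request.")
```

However the check is implemented, the point is the same: every change is immediately followed by a comparison against the approved state.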
It was during our proactive scanning that we noticed the Cisco firewall was passing unintended traffic. After recognizing the anomalous activity, we went through the process of determining what was happening. Because we didn’t have a window into the inner workings of Cisco’s product, we quickly escalated the issue to Cisco’s Security and Incident Response Team with the information we had. From there, we talked to the product’s engineers and determined the issue required further investigation.
To me, my team’s discovery of the Firepower vulnerability points to four important lessons and takeaways for all enterprises looking to address and contain operational security risks:
It’s a dangerous misconception that robust security tools, once deployed, are a cure-all for threats and vulnerabilities. Yet in my professional experience, I’ve noticed that many enterprises fall into a pattern of complacency: they implement capable tools without the training and resources needed to monitor and maintain them.
Takeaway: Organizations should take stock of their operational practices and adjust accordingly before looking for the next silver bullet to reduce security risk.
Just because you install a tool and the dashboard light says green doesn’t mean you’re fully protected. Accurate and continual measurement against a desired state is critical to make sure organizational investments aren’t wasted. Logging and monitoring are a good start, but take it a step further and define what’s “normal” for your environment. If application error rates or network performance changed suddenly and significantly, would IT notice?
Takeaway: Enterprises need to work proactively — and continuously — to establish informed baselines and check for compliance against these baselines. It’s important that these continuous checks follow a regular and pre-determined cadence. Otherwise, they risk slipping through the cracks.
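As a sketch of what such a baseline check can look like in practice, the example below compares the latest reading of a metric against a stored rolling baseline and flags sudden, significant deviation. The metric (application errors per hour), file location, and thresholds are assumptions made purely for illustration:

```python
import json
import statistics
from pathlib import Path

BASELINE_FILE = Path("error_rate_baseline.json")   # hypothetical location
DEVIATION_THRESHOLD = 3.0                           # flag readings > 3 standard deviations out

def check_against_baseline(current_errors_per_hour: float) -> None:
    """Compare the latest reading to the stored baseline and alert on outliers."""
    history = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else []

    if len(history) >= 24:  # need enough history for a meaningful baseline
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0
        if abs(current_errors_per_hour - mean) > DEVIATION_THRESHOLD * stdev:
            print(f"ALERT: {current_errors_per_hour:.1f} errors/hour deviates from "
                  f"the baseline of {mean:.1f} (+/- {stdev:.1f})")

    # Keep a rolling window so the baseline tracks what is "normal" for the environment.
    history = (history + [current_errors_per_hour])[-168:]   # last 7 days of hourly samples
    BASELINE_FILE.write_text(json.dumps(history))

# Run on a fixed cadence (for example, hourly via cron) with the latest reading.
check_against_baseline(current_errors_per_hour=42.0)
```

Run on a regular cadence, whether from cron or a monitoring platform, a check like this is what turns “the dashboard is green” into “the environment still matches its established baseline.”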
Provisioning new systems and applications gets plenty of attention, but decommissioning doesn’t seem to get the focus it deserves. IT staff will be quick to say there are plenty of things to clean up, but that none of it is urgent. And yet leftover firewall rules that linger as systems reuse IP addresses, or ad-hoc permission changes that become permanent, can significantly increase security risk.
Takeaway: Make clean-up review a periodic and prioritized event. Plan time to review and reduce operational security risks within the organization.
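One way to give that review a concrete agenda is to let the tooling nominate candidates. The sketch below assumes a hypothetical firewall rule export, a CSV with a hit count and last-hit date per rule; the column names are invented and real exports vary by vendor. It flags rules that have never matched traffic or have gone quiet:

```python
import csv
from datetime import datetime, timedelta

STALE_AFTER_DAYS = 90
cutoff = datetime.now() - timedelta(days=STALE_AFTER_DAYS)

# Hypothetical export format: rule_name, hit_count, last_hit (ISO date or empty)
with open("firewall_rules_export.csv", newline="") as f:
    for rule in csv.DictReader(f):
        hits = int(rule["hit_count"] or 0)
        last_hit = rule["last_hit"]
        never_used = hits == 0 or not last_hit
        gone_quiet = bool(last_hit) and datetime.fromisoformat(last_hit) < cutoff
        if never_used or gone_quiet:
            print(f"Review candidate: {rule['rule_name']} "
                  f"(hits={hits}, last hit={last_hit or 'never'})")
```

A short list of review candidates is far easier to prioritize than a vague sense that there is “plenty to clean up.”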
Security software and hardware are designed to support a nearly infinite number of configurations and environments. The tools are also written by humans, so there will be errors in the code, and humans implement the tools as well. This trifecta sometimes results in tools that don’t operate as intended or fail to provide the expected protection.
Takeaway: Any time a security tool is implemented, upgraded, or significantly reconfigured, it should be tested to confirm it is providing the intended level of protection.
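One way to make that testing repeatable is to capture the intended policy as a small allow/deny matrix and re-run it after every significant change. The sketch below is hypothetical: the zone labels, addresses, ports, and expectations are invented, and it assumes it is run from a host in the source zone. It simply attempts connections and compares the results to what the policy says should happen:

```python
import socket

# Hypothetical policy expectations between security zones:
# (description, destination host, port, should the connection succeed?)
EXPECTATIONS = [
    ("user zone -> web tier HTTPS", "10.0.30.10", 443, True),
    ("user zone -> database tier",  "10.0.40.10", 5432, False),
    ("user zone -> management SSH", "10.0.50.10", 22, False),
]

def connection_succeeds(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = []
for label, host, port, should_pass in EXPECTATIONS:
    actual = connection_succeeds(host, port)
    status = "OK" if actual == should_pass else "POLICY MISMATCH"
    print(f"{status}: {label} (expected {'allow' if should_pass else 'deny'}, "
          f"got {'allow' if actual else 'deny'})")
    if actual != should_pass:
        failures.append(label)

if failures:
    raise SystemExit(f"{len(failures)} expectation(s) failed -- investigate before closing the change.")
```

Negative tests, confirming that traffic which should be blocked actually is, are just as important as confirming that permitted traffic flows.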
While cybersecurity incidents can make new security tools or solutions sound appealing, the “brilliant basics” of operations should not be overlooked. So, does your organization have a plan to address operational security risks?
If not, consider shoring up current practices in tandem with new initiatives. West Monroe helps firms across a variety of industries assess, understand, and improve their security and operational posture. Contact us to see how we can help.