How quickly are we patching Microsoft Exchange Servers?

In this Security MEA exclusive article, Matt Kraning, CTO, Cortex at Palo Alto Networks, elaborates on Microsoft Exchange Server attacks.

As the world learned about the four zero-day vulnerabilities in Microsoft Exchange Server and the attacks that exploited them, many people wondered how widespread the effects would be. How many organizations are vulnerable to the Microsoft Exchange Server attacks? How likely is it that attackers will take advantage? How quickly are organizations responding to the news and patching Microsoft Exchange Servers? Expanse, a recent Palo Alto Networks acquisition that provides an attack surface management platform, collected telemetry to answer those questions.

The Internet is Tiny
Fifteen years ago, if you accidentally exposed a device on the internet, it might go unnoticed by attackers for months or even years. Things are different today – attackers scrutinize your attack surface daily. With freely available open source scanning tools, an attacker can communicate with every public-facing IP address in the IPv4 space within hours. Any unpatched system, misconfiguration or accidental exposure is likely to be discovered very quickly. The internet is tiny.
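To illustrate how little effort discovery takes, the sketch below shows the kind of single TCP connect probe that internet-wide scanners simply repeat, in parallel, across the entire address space. The hostname and ports are illustrative placeholders; the point is to check your own exposed services, not someone else’s.

```python
# Minimal sketch: one TCP connect probe, the basic building block that
# internet-wide scanners repeat across every public IPv4 address.
# The hostname below is a placeholder; point it at a host you own.
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Ports an external scanner would typically check on a mail server:
    # 443 (OWA/EWS over HTTPS) and 25 (SMTP).
    for port in (443, 25):
        state = "open" if is_port_open("mail.example.com", port) else "closed/filtered"
        print(f"port {port}: {state}")
```

Anything that answers a probe like this is visible to everyone else on the internet, too, which is why an accidental exposure rarely stays private for long.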

This is why it’s crucial to apply patches promptly – new attacks such as the DearCry ransomware that began circulating around March 9 seek to take advantage of the newly disclosed vulnerabilities in Microsoft Exchange Server to turn a quick profit before organizations have a chance to respond.

How Quickly are Organizations Patching Microsoft Exchange Servers?
Expanse continuously gathers information on all internet-accessible devices, and we used our platform to find the total volume of publicly accessible Microsoft Exchange Servers and the subset of servers that were vulnerable. Comparing information gathered three days apart – on March 8 and again on March 11 – allowed us to see not only how many Microsoft Exchange Servers were vulnerable but also to glean some data about the pace at which organizations applied patches.

Our results show that patch rates were lightning fast – 36% of the exposed, vulnerable Exchange Servers were patched in just three days. From FireEye data on the time between disclosure, patch release and exploitation, we know that in the past the average time to patch was nine days. But patching does not mean you’re safe – assume exploitation may have already occurred, as threat actors were observed widely launching zero-day attacks against very large numbers of Exchange Servers across the internet before a patch was released. (For information about how to handle various scenarios and determine impact on your organization, see Unit 42’s “Remediation Steps for the Microsoft Exchange Server Vulnerabilities.”)
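To make the arithmetic behind that figure concrete, the sketch below shows how a patch rate can be derived from two scan snapshots taken days apart; the host sets are hypothetical placeholders, not Expanse’s actual data.

```python
# Minimal sketch: derive a patch rate from two scan snapshots taken days apart.
# The host sets below are hypothetical placeholders, not real scan results.
vulnerable_march_8 = {"198.51.100.10", "198.51.100.11", "203.0.113.5", "203.0.113.6", "203.0.113.7"}
vulnerable_march_11 = {"198.51.100.11", "203.0.113.5", "203.0.113.7"}

patched = vulnerable_march_8 - vulnerable_march_11          # no longer observed vulnerable
still_vulnerable = vulnerable_march_8 & vulnerable_march_11  # seen vulnerable in both snapshots

patch_rate = len(patched) / len(vulnerable_march_8)
print(f"Patched {len(patched)} of {len(vulnerable_march_8)} servers ({patch_rate:.0%}) in three days")
```

The same set difference also shows exactly which hosts remain exposed, which is what makes snapshot-to-snapshot comparison useful for prioritizing follow-up.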

It’s important to understand your attack surface from the attacker’s perspective. The statistics above represent all publicly facing, unpatched Exchange Servers – but that does not mean the affected organizations know about them. Acquisitions, foreign subsidiaries, and forgotten or rogue IT often mean enterprises lose track of internet-facing assets, including email servers. With so many attack vectors and limited resources to defend them, it’s crucial that organizations understand where the critical entry points are and how they can prioritize attack surface reduction in a smart, data-driven way.

How to Understand Your Attack Surface
The attack surface area of an organization has never been more distributed than it is today. Organizations have to identify, track and manage more asset types across different locations than ever before. It won’t work to jump from emergency to emergency – you need a new playbook.

The first step? Understand your attack surface. A discovery and mapping program should start with the basics:
• A system of record for every asset, system and service you own on the public internet (a minimal sketch of one such record follows this list).
• Comprehensive indexing spanning all major port/protocol pairs (i.e., not limited to the old perspective of tracking only HTTP and HTTPS websites).
• Multiple data sources for attribution (i.e., not just registration and DNS data).
• No reliance on agents (which can’t find unknown internet assets).
• Continuous updating (i.e., not a two-week refresh rate).
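As a concrete illustration of the first item, here is a minimal sketch of what a single inventory record might capture. The field names and example values are assumptions for illustration, not Expanse’s actual schema.

```python
# Minimal sketch of a record in an internet asset system of record.
# Field names and example values are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InternetAsset:
    ip_address: str        # publicly routable address where the service was observed
    port: int              # any major port, not just 80/443
    protocol: str          # e.g. "https", "smtp", "rdp"
    service_banner: str    # what the service reported when probed
    attributed_to: str     # business unit or subsidiary the asset maps to
    attribution_sources: list[str] = field(default_factory=list)  # registration, DNS, TLS certificates, ...
    last_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: an Exchange server discovered under an acquired subsidiary.
asset = InternetAsset(
    ip_address="203.0.113.25",
    port=443,
    protocol="https",
    service_banner="Microsoft Exchange Outlook Web App",
    attributed_to="ExampleCo EMEA (acquired subsidiary)",
    attribution_sources=["whois", "dns", "tls-certificate"],
)
print(asset)
```

Keeping the last_seen field fresh through continuous rescanning, rather than a periodic refresh, is what turns an inventory like this into something you can act on when the next zero-day lands.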