Detonated in the SOC’s sandbox, the malware revealed itself, attempting to beacon back to its command-and-control (C2) infrastructure. Armed with the C2 address, the team quickly located the half-dozen other malicious emails whose recipients had opened the attachment. Those users’ compromised devices were isolated and disinfected.
Three days later, the team blocked the second intrusion. The attackers had hit a misconfigured cloud services portal with credential stuffing: an automated script tried hundreds of logins in rapid succession, using email and password combinations culled from previous data breaches. Alerted by one of their identity security tools, the SOC secured the portal and updated their blocklist with the Internet Protocol (IP) addresses from which the login attempts had originated.
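The detection logic behind that alert is simple enough to sketch. What follows is a minimal, hypothetical illustration in Python of how an identity security tool might spot credential stuffing, flagging any source IP that racks up many failed logins across many accounts in a short window and feeding it into a blocklist; the thresholds, field names, and function are assumptions for the example, not any vendor’s actual implementation.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical thresholds: 50+ failed logins across 20+ distinct accounts
# from one source IP inside a five-minute sliding window.
FAILURE_THRESHOLD = 50
DISTINCT_ACCOUNTS = 20
WINDOW = timedelta(minutes=5)

def flag_credential_stuffing(failed_logins, blocklist):
    """failed_logins: iterable of (timestamp, source_ip, account) tuples
    parsed from the portal's authentication logs, sorted by timestamp."""
    recent = defaultdict(list)  # source_ip -> [(timestamp, account), ...]
    for ts, ip, account in failed_logins:
        bucket = recent[ip]
        bucket.append((ts, account))
        # Discard failures that have aged out of the sliding window.
        while bucket and ts - bucket[0][0] > WINDOW:
            bucket.pop(0)
        if (len(bucket) >= FAILURE_THRESHOLD
                and len({acct for _, acct in bucket}) >= DISTINCT_ACCOUNTS):
            blocklist.add(ip)  # candidate for the SOC's IP blocklist
    return blocklist
```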
As his nighttime counterpart had done earlier with the spear-phishing attack, the daytime SOC manager prepared a report cataloging the attackers’ tactics, techniques, and procedures (TTPs) and the relevant indicators of compromise. He posted the report anonymously through their industry’s information sharing and analysis center (ISAC) to warn other enterprises in their sector about the attacks. “In both of those incidents,” the CISO correctly insisted afterwards, “the team did everything right.” But they still missed the third intrusion.
The fatal vulnerability turned out to be a web server spun up five years earlier to support a failed and swiftly canceled series of corporate marketing events. It hadn’t been updated in years, and the security team’s scans never flagged it because its domain used the never-reported, now-forgotten brand of the marketing campaign rather than the company’s name.
Buried in that server’s file directory were years-old Secure Shell (SSH) keys that still provided trusted access to the organization’s main cluster of marketing servers. Once on that cluster, the hackers pivoted quickly and gained domain access to SharePoint and OneDrive. At that point, as logs recovered later by investigators revealed, there was a 48-hour pause, probably while the hackers sold their access to a ransomware gang.
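Keys like those are findable before an attacker finds them; the gap was that no one was looking. As a purely illustrative sketch, a periodic audit along the following lines could surface them: walk a server’s filesystem, look for SSH private key headers, and report any key file untouched for more than a year. The root path, age threshold, and key markers here are assumptions for the example.

```python
import os
import time

# Hypothetical audit: report SSH private keys untouched for over a year.
MAX_AGE_DAYS = 365
KEY_MARKERS = (b"BEGIN OPENSSH PRIVATE KEY", b"BEGIN RSA PRIVATE KEY")

def find_stale_ssh_keys(root="/srv"):
    """Walk the filesystem under `root` and return (path, age_in_days)
    for every readable file that looks like an old SSH private key."""
    stale, now = [], time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    header = f.read(4096)
                if any(marker in header for marker in KEY_MARKERS):
                    age_days = (now - os.path.getmtime(path)) / 86400
                    if age_days > MAX_AGE_DAYS:
                        stale.append((path, int(age_days)))
            except OSError:
                continue  # skip unreadable files rather than abort the audit
    return stale
```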
The SSH connection from the abandoned web server was an “anomalous” event. The security team’s security information and event management (SIEM) platform flagged it as such with the code “amber/anomalous,” the lowest level of alert. The platform flagged hundreds of other amber events that day, along with dozens classed as “red/suspicious.” None were rated “flashing red/hostile.” The team’s half-dozen analysts attempted to resolve as many as they could, but no one checked on the anomalous SSH connection.
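The arithmetic behind that failure is easy to make concrete. Below is a minimal, hypothetical sketch of severity-first triage under a fixed analyst capacity; the tier labels mirror the ones in the story, but the alert structure, rule names, and capacity model are illustrative assumptions, not the team’s actual platform.

```python
from dataclasses import dataclass

# Tiers as described: amber (anomalous) < red (suspicious) < flashing red (hostile).
SEVERITY = {"amber/anomalous": 1, "red/suspicious": 2, "flashing red/hostile": 3}

@dataclass
class Alert:
    rule: str        # detection rule that fired, e.g. "anomalous-ssh-connection"
    severity: str    # one of the SEVERITY keys
    source_ip: str
    asset: str

def triage(alerts, analyst_capacity):
    """Work the queue highest severity first; anything beyond
    analyst_capacity is left unresolved, which is how an amber-rated
    SSH alert from a forgotten server can go unexamined."""
    queue = sorted(alerts, key=lambda a: SEVERITY[a.severity], reverse=True)
    return queue[:analyst_capacity], queue[analyst_capacity:]
```

With hundreds of amber alerts a day and only a half-dozen analysts to work them, an amber-tier alert sits near the back of that sorted queue by construction.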
“When you have that many false positives, stuff is occasionally going to slip through the cracks,” the CISO acknowledged. “The bottom line is we don’t have the manpower to resolve every anomaly report.”
The postmortem by a third-party security company found that the scans of, and attacks on, the abandoned marketing server originated from the same IP address range the SOC had placed on their blocklist after the credential stuffing attacks. WHOIS lookups also linked that range to the spear-phishing email campaign. Putting the IP addresses on the blocklist hadn’t protected the abandoned server because, as an unrecorded shadow IT asset, it was not subject to the company’s security policy.
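The correlation the postmortem did by hand is mechanically simple once the data sits in one place. A hypothetical sketch using Python’s standard ipaddress module: check whether the source addresses recovered from the abandoned server’s access logs fall inside the ranges blocklisted after the credential stuffing incident. The ranges and addresses below are illustrative placeholders, not indicators from the actual case.

```python
import ipaddress

# Illustrative placeholder ranges standing in for the SOC's blocklist entries.
blocklisted_ranges = [ipaddress.ip_network(r)
                      for r in ("203.0.113.0/24", "198.51.100.0/25")]

def seen_before(source_ip):
    """True if source_ip falls inside any previously blocklisted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in blocklisted_ranges)

# Source addresses pulled from the abandoned server's logs (also placeholders).
for ip in ("203.0.113.77", "192.0.2.10"):
    print(ip, "-> previously blocklisted" if seen_before(ip) else "-> no prior record")
```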
“If only we’d had a way to sort through those alerts, separating the wheat from the chaff, and the capability to correlate data from the significant ones, we might have spotted the connection and stopped the third attack,” the CISO said during the internal postmortem inquiry. When the hackers returned to the main marketing cluster with domain access after the 48-hour pause, they began exfiltrating and encrypting data.