Cutting Down Trees in Our Forest

☠ So, trees have been cut down in our forest! Last week, hackers managed to breach a server of one of our clients and encrypt it. I write about this topic often and believe that sharing experiences is the way forward, so here is our experience.

1ïžâƒŁ How the attackers accessed the server

The first thing we wanted to know was how the attackers got onto the server, because we weren’t aware of having neglected anything. Upon investigating the incident, it “fortunately” turned out to be a “supply chain” attack.

The attackers “attacked” the server through the information system (IS) supplier and his access credentials. I put the word “attacked” in quotes deliberately, because having administrative access to something isn’t exactly hacking. 😊

Such an attack vector is nothing new (see my article “How Hackers Attack Companies Through Their IT Providers”). In our experience, suppliers often get lucky because the attackers don’t realize how much more they could extract through them (namely from their customers). Unfortunately, this supplier wasn’t so lucky, and besides him, the attackers encrypted at least one of his customers.

Why do I say “at least”? The supplier’s initial reaction was along the lines of “we know nothing, nothing happened here”. đŸ€„ However, logs and traces from the attacked server pointed his way. The attackers not only used his login credentials but also launched the attack directly from his network (his computers). At the same time, the supplier’s websites became inaccessible.

The supplier eventually admitted to the attack on his network. However, he keeps the scope of the attack (in his network and among his customers), the initial attack vector, and the measures taken to himself. We offered help and asked for more information, but supposedly they have everything under control. Personally, I don’t find their approach very reassuring. 🙄

After all, we hear “we have it under control” seemingly after every attack. It just keeps bothering me: if they have it under control (meaning they understand security), why did the attack happen in the first place? đŸ€·đŸŒâ€â™‚ïž

2ïžâƒŁ How they disabled ESET antivirus

We had ESET Server Security (latest version) deployed on the server, with its configuration (and thus uninstallation) protected by a password. Nevertheless, the attackers managed to destroy the antivirus.

Sure, I understand that once someone has administrative privileges, it’s just a matter of time and effort. However:

  ‱ It’s not as simple as opening Task Manager and killing the antivirus process. ESET (like other antivirus products) has self-defense mechanisms that protect it against tampering. For example, ESET uses the Windows “protected service” functionality (see the sketch after this list).
  • Compared to other attacks we’ve dealt with, the attackers here certainly demonstrated much more effort and capability. The whole attack, including disabling AV and deploying ransomware, took them only one hour.
  • We hoped that a correctly configured antivirus would occupy the attackers for longer than approximately 20 minutes. Moreover, they destroyed it so cleanly that the antivirus didn’t report a single event indicating a problem to the central console. Hats off. đŸŽ©
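
To give a concrete picture of that “protected service” mechanism: the sketch below is my own illustration (not anything from the incident) that uses Python and the Windows service API to ask what launch-protection level a given service runs with. The service name “ekrn” is my assumption for ESET’s main service; substitute whatever AV service you actually run.

```python
# Minimal sketch (Windows only): query the launch-protection (PPL) level of a service.
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
advapi32.OpenSCManagerW.restype = wintypes.SC_HANDLE
advapi32.OpenSCManagerW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.DWORD]
advapi32.OpenServiceW.restype = wintypes.SC_HANDLE
advapi32.OpenServiceW.argtypes = [wintypes.SC_HANDLE, wintypes.LPCWSTR, wintypes.DWORD]
advapi32.QueryServiceConfig2W.argtypes = [
    wintypes.SC_HANDLE, wintypes.DWORD, ctypes.c_void_p, wintypes.DWORD, wintypes.LPDWORD]
advapi32.CloseServiceHandle.argtypes = [wintypes.SC_HANDLE]

SC_MANAGER_CONNECT = 0x0001
SERVICE_QUERY_CONFIG = 0x0001
SERVICE_CONFIG_LAUNCH_PROTECTED = 12          # info level for the launch-protection setting
LEVELS = {0: "NONE", 1: "WINDOWS", 2: "WINDOWS_LIGHT", 3: "ANTIMALWARE_LIGHT"}

def launch_protection(service_name: str) -> str:
    """Return the launch-protection level a Windows service is registered with."""
    scm = advapi32.OpenSCManagerW(None, None, SC_MANAGER_CONNECT)
    if not scm:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        svc = advapi32.OpenServiceW(scm, service_name, SERVICE_QUERY_CONFIG)
        if not svc:
            raise ctypes.WinError(ctypes.get_last_error())
        try:
            info = wintypes.DWORD(0)      # SERVICE_LAUNCH_PROTECTED_INFO is a single DWORD
            needed = wintypes.DWORD(0)
            if not advapi32.QueryServiceConfig2W(
                    svc, SERVICE_CONFIG_LAUNCH_PROTECTED,
                    ctypes.byref(info), ctypes.sizeof(info), ctypes.byref(needed)):
                raise ctypes.WinError(ctypes.get_last_error())
            return LEVELS.get(info.value, f"UNKNOWN({info.value})")
        finally:
            advapi32.CloseServiceHandle(svc)
    finally:
        advapi32.CloseServiceHandle(scm)

if __name__ == "__main__":
    print("launch protection:", launch_protection("ekrn"))  # "ekrn" = assumed ESET service name
```

On recent Windows versions the built-in sc.exe qprotection command should report a similar answer. A service running as ANTIMALWARE_LIGHT cannot simply be terminated from Task Manager, which is exactly why the attackers had to work harder here.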

The exact method the attackers used to get rid of the antivirus remains unknown to us; we hope ESET will assist us with the analysis. My guess is that the attackers brought their own kernel-mode driver to the server and used it to destroy ESET (along the lines of the “bring your own vulnerable driver” technique). Even so, however, you need a driver/tool that the AV doesn’t detect before it is destroyed.
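
To show what hunting for such a driver can look like, here is a small sketch of mine (an illustration, not something from the incident response itself) that lists the kernel drivers currently loaded on a Windows machine, so that an unexpected name can be compared against a baseline or a known-vulnerable-driver blocklist.

```python
# Minimal sketch (Windows, 64-bit Python recommended): enumerate loaded kernel drivers.
import ctypes
from ctypes import wintypes

psapi = ctypes.WinDLL("psapi", use_last_error=True)
psapi.EnumDeviceDrivers.argtypes = [
    ctypes.POINTER(ctypes.c_void_p), wintypes.DWORD, wintypes.LPDWORD]
psapi.EnumDeviceDrivers.restype = wintypes.BOOL
psapi.GetDeviceDriverBaseNameW.argtypes = [wintypes.LPVOID, wintypes.LPWSTR, wintypes.DWORD]
psapi.GetDeviceDriverBaseNameW.restype = wintypes.DWORD

def loaded_drivers(max_count: int = 4096) -> list[str]:
    """Return the base names of all kernel drivers currently loaded."""
    bases = (ctypes.c_void_p * max_count)()
    needed = wintypes.DWORD(0)
    if not psapi.EnumDeviceDrivers(bases, ctypes.sizeof(bases), ctypes.byref(needed)):
        raise ctypes.WinError(ctypes.get_last_error())
    count = min(max_count, needed.value // ctypes.sizeof(ctypes.c_void_p))
    names, buf = [], ctypes.create_unicode_buffer(260)
    for base in bases[:count]:
        if psapi.GetDeviceDriverBaseNameW(base, buf, len(buf)):
            names.append(buf.value)
    return names

if __name__ == "__main__":
    for name in sorted(set(loaded_drivers()), key=str.lower):
        print(name)
```

This only covers drivers that are still loaded, of course; anything that was loaded and unloaded again has to be dug out of a forensic image and the event logs.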

I had a PoC on this topic tucked away. It came about after a discussion with a colleague who believed that an antivirus can’t be wiped from a system (without booting into safe mode). I just hadn’t had the time to finalize it. This attack changed the priorities, however, so here is the PoC (unfortunately available only in Czech):

3ïžâƒŁ What Worked

Since we prepare for events like this and often assist other companies after attacks, our response went smoothly, and the server was back online within an hour.

Hackers were unable to perform lateral movement (reach other servers) in the client’s environment. We prioritize this – there are no additional access credentials or user accounts on our servers that would allow them to progress further in the network (see the article “Network Security: Tier Model and PAW”). Thus, the attack remained isolated to one server.

Over the past two years, we have invested significant effort in the backup infrastructure for both us and our customers. Essentially, most of our primary backup storage is SSD only and connected through 10 Gbps links. Standard backups (using CBT) take us minutes, and we usually restore entire customer environments within 2 hours. This is a major advancement from when we would restore an Exchange server all night after a crash, anxiously watching the clock – wondering whether it would finish before people came to work.
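
As a rough sanity check on those restore times, a back-of-envelope calculation (the dataset size and the efficiency factor below are assumptions for illustration, not measurements from this incident) shows why a 10 Gbps link to SSD storage changes the game:

```python
# Back-of-envelope: how long does a full restore take over a 10 Gbps link?
LINK_GBPS = 10        # nominal link speed
EFFICIENCY = 0.6      # assumed real-world efficiency (protocol overhead, storage latency, ...)
DATASET_GB = 500      # assumed size of the restored server, in GB

effective_gbps = LINK_GBPS * EFFICIENCY          # usable throughput in Gbit/s
seconds = DATASET_GB * 8 / effective_gbps        # GB -> Gbit, divided by Gbit/s
print(f"~{seconds / 60:.0f} minutes to move {DATASET_GB} GB at {effective_gbps:.0f} Gbit/s")
# With these assumptions: roughly 11 minutes -- and in practice the target storage,
# not the network, tends to be the bottleneck.
```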

To sleep well, we set up our own “Veeam Hardened Repository” in the data center, where we back up all customers. These repositories serve as our “Noah’s Ark”: the backups there survive even a complete ransomware attack on a customer. We drew on the findings from my lecture “Backups that won’t survive ransomware”, where I demonstrated hackers’ strategies and tactics (unfortunately not available in English).
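
The point of a hardened repository is immutability: for the retention period, the backup files cannot be modified or deleted through normal file operations, no matter what credentials the attacker steals. The sketch below only illustrates that general idea with the Linux immutable attribute and a hypothetical path; it is not Veeam’s actual implementation.

```python
# Minimal sketch (Linux, run as root): the idea behind an "immutable" backup file.
# The path is hypothetical; this illustrates the concept, not Veeam's implementation.
import subprocess

BACKUP_FILE = "/backups/customer1.vbk"

# Set the immutable attribute: the file can no longer be modified, renamed or
# deleted until the attribute is cleared again (chattr -i), even by root.
subprocess.run(["chattr", "+i", BACKUP_FILE], check=True)
print(subprocess.run(["lsattr", BACKUP_FILE], capture_output=True, text=True).stdout)
# Ransomware that reaches the repository and tries to overwrite or delete the
# file now gets "Operation not permitted" instead of destroying the backup.
```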

Restoring the server from backup took 3 minutes and 44 seconds, with the customer losing data (RPO) from the last 3 hours. We could likely have reduced data loss to 1 hour (we have snapshots on production storage in addition to backups). However, after a quick discussion, we chose an older recovery point (a swift calculation between system downtime costs, the importance of the sacrificed data, and the probability that the given recovery point was compromised).
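
For the curious, the “swift calculation” looks roughly like this. All the figures below are illustrative assumptions, not the real numbers from this incident:

```python
# Sketch of the trade-off between recovery points: expected cost =
# hours of lost data * value per hour + probability the point is compromised * cost of redoing it all.
recovery_points = [
    # (label, data loss in hours, assumed probability the point is already compromised)
    ("storage snapshot, 1 h old", 1, 0.30),
    ("backup, 3 h old",           3, 0.05),
]

COST_PER_LOST_HOUR = 500      # assumed value of one hour of lost data (EUR)
COST_OF_REDO       = 10_000   # assumed cost of restoring again plus the extra downtime

def expected_cost(data_loss_h: float, p_compromised: float) -> float:
    return data_loss_h * COST_PER_LOST_HOUR + p_compromised * COST_OF_REDO

for label, loss, p in recovery_points:
    print(f"{label:28s} expected cost ~ {expected_cost(loss, p):6.0f} EUR")
# With these assumptions the older, "cleaner" backup wins (2 000 vs 3 500 EUR),
# even though it sacrifices two more hours of data.
```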

4ïžâƒŁ Lessons Learned

Of course, after the attack we spent several hours discussing “what if” scenarios. We would prefer an ideal environment, but reality is complex. We would like our clients to have suppliers we can trust. We are willing to help suppliers with many things, but there has to be mutual interest.

The market for specialized (industry-specific) information systems is often not competitive enough for a supplier’s attitude toward security to play a role in the selection. Customers are generally happy if someone has a working solution for their use case at all. Ensuring security is therefore primarily our responsibility.

After this incident, we are considering requiring 2FA using mobile phones (TOTP or push notifications) from all of our customers’ suppliers. The current state is that suppliers either already use 2FA or their access is limited to specific IP addresses.
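
For illustration of what the phone actually computes: below is a minimal TOTP generator (RFC 6238) using only the Python standard library, with an example secret. It is a sketch of the mechanism, not a recommendation to roll your own 2FA.

```python
# Minimal TOTP sketch (RFC 6238): the same scheme authenticator apps implement.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # current 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret -> a 6-digit one-time code
```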

Limiting access by IP address is also a form of second factor (something you have). We just didn’t expect attackers to use the compromised supplier’s own computers to attack his customers directly (and thus to come from his IP addresses). đŸ€ŠđŸŒâ€â™‚ïž

Another possible measure is to limit supplier access to working hours only. It doesn’t have a big impact, but it’s easy to implement.
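
A minimal sketch of that idea, assuming a local Windows account with a hypothetical name and the built-in net user command (for domain accounts you would use your AD tooling instead): run it from Task Scheduler every half hour and the supplier account is only enabled during working hours.

```python
# Sketch: enable the supplier's local account only during working hours.
# "supplier-vendor1" is a hypothetical account name.
import datetime
import subprocess

ACCOUNT = "supplier-vendor1"
WORK_START, WORK_END = 7, 18          # allow access 07:00-18:00
WORK_DAYS = range(0, 5)               # Monday-Friday

now = datetime.datetime.now()
allowed = now.weekday() in WORK_DAYS and WORK_START <= now.hour < WORK_END
state = "yes" if allowed else "no"

subprocess.run(["net", "user", ACCOUNT, f"/active:{state}"], check=True)
print(f"{ACCOUNT} active: {state}")
```

Windows accounts also support logon-hour restrictions directly (the /times option of net user), which avoids the scheduled task entirely.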

Implementing 2FA might seem easy, but things get complicated once you think it through in detail. For instance, one supplier replied that they don’t use smartphones because of battery life


We would also appreciate ESET’s help in analyzing how the attackers disabled the antivirus, and in implementing countermeasures in their product. More time for detecting and stopping an attack early always comes in handy. 😊

Conclusion

So, this was our experience and our thought process. What do you think? Would you have done anything differently? Do you have personal experience with a “supply chain” attack? I would love to hear your thoughts in the comments below, or you can email me.

May your networks stay secure,

Martin

