
Ransomware in the Cloud

By Erdem posted 08-12-2016 15:01


Recently, ransomware has become a popular way for malware authors to make money. Organizations of all sizes, from hospitals to banks and pharmaceutical companies, have been affected.


Ransomware has become so popular that a recent episode of Mr. Robot featured it as an attack vector against an evil corporation, presumably to defraud them:


So how can ransomware attack medium and large companies if data isn’t stored on one machine? Is it the same mechanism as the traditional distribution method? Are such attacks still feasible?


Traditional Malware Model


When ransomware targets Windows or OS X users, it is primarily distributed via browser exploits, social engineering, or software updates:





The author then collects money by asking users to pay in a pseudo-anonymous currency such as Bitcoin to decrypt their data. This has been the case since 2011, when popular ransomware first started using the cryptocurrency. In 2012, the more user-friendly Citadel toolkit started distributing ransomware. Locky made headlines early this year and was followed by several crypto-ransomware strains, including KeRanger, which made waves as the first major ransomware targeting OS X. In fact, we recently covered how various recent ransomware strains take over a user's computer.


Overall, traditional ransomware primarily targets the end-user and doesn’t spread much from user to user.


However, nothing stops the malware authors from using initial infections as a stepping stone to encrypt data stored in the cloud.


Cloud-aware ransomware


Holding data for ransom doesn't work if the victim has strong, distributed backups. However, the evolution of ransomware has seen increasingly sophisticated attempts to defeat weak backups. Beyond deleting backups on locally attached storage, modern ransomware can take the attack one step further and delete old versions of files in Dropbox, Box, Google Drive, and similar services. Although no successful ransomware has tried this approach yet, such attacks are possible and fairly easy to execute.


This is possible because if the user has the ability to delete these files, then malware that takes over the user's computer has the same ability.
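To make this concrete, here is a minimal Python sketch of a toy versioned store. The class and its methods are invented for illustration (real services like Dropbox expose version history through their own APIs); the point is only that version history offers no protection when the attacker inherits the victim's delete privileges:

```python
# Toy model of a versioned cloud store (e.g., Dropbox/Drive file history).
# Illustrates that any permission the user has -- including deleting old
# versions -- is inherited by malware running under the user's account.

class VersionedStore:
    def __init__(self):
        self._versions = {}  # path -> list of file contents, oldest first

    def put(self, path, data):
        self._versions.setdefault(path, []).append(data)

    def latest(self, path):
        return self._versions[path][-1]

    def restore_previous(self, path):
        """Recover the next-to-last version, as a victim would after an attack."""
        self._versions[path].pop()
        return self.latest(path)

    def delete_old_versions(self, path):
        """Anything the user may do, malware acting as the user may also do."""
        self._versions[path] = self._versions[path][-1:]


store = VersionedStore()
store.put("report.docx", "plaintext")

# Ransomware overwrites the file, then purges history with user privileges.
store.put("report.docx", "ciphertext")
store.delete_old_versions("report.docx")

print(store.latest("report.docx"))  # only "ciphertext" remains; no rollback
```

Without the `delete_old_versions` step, `restore_previous` would recover the plaintext, which is exactly why cloud-aware ransomware would target version history first.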


Taking your cloud for ransom


In many small to medium-sized companies, or in groups within large companies, the organizational structure among developers who have access to the cloud is as follows:




Many developers can access server instances and cloud storage. Some developers (dev-ops in the image above) may have elevated privileges to access sensitive data.


Various sensitive information can be stored in Amazon S3, Google Cloud Storage, or Microsoft Azure. This storage is fault tolerant, and the cloud provider is responsible for maintaining the data. However, if the files are modified, there is usually no backup except on the local server instances.




An attacker can compromise either a developer's machine or any server instance.




Once a developer’s machine is compromised, anything they have access to is also available to malware authors.

However, this is not yet the point at which the ransomware begins encryption, because an attack launched at this stage could be thwarted by revoking credentials. Instead, the malware can spread to more developers by using the compromised server's SSH connections to reach the developer machines that access it:
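The host-discovery side of this spreading step can be sketched in a few lines. The configuration contents and hostnames below are hypothetical; real malware would read the victim's actual `~/.ssh/config` and `~/.ssh/known_hosts` rather than a sample string:

```python
# Sketch: how lateral movement can enumerate SSH targets from a compromised
# account. Hostnames here are hypothetical; a real attack would parse the
# user's actual ~/.ssh/config and ~/.ssh/known_hosts files.

SAMPLE_SSH_CONFIG = """\
Host build-server
    HostName build.internal.example.com
    User deploy

Host staging
    HostName 10.0.3.7
    User dev
"""

def ssh_targets(config_text):
    """Collect HostName entries -- machines the victim routinely reaches."""
    targets = []
    for line in config_text.splitlines():
        line = line.strip()
        if line.lower().startswith("hostname "):
            targets.append(line.split(None, 1)[1])
    return targets

print(ssh_targets(SAMPLE_SSH_CONFIG))
# ['build.internal.example.com', '10.0.3.7']
```

Every entry in such a list is a machine the compromised account is trusted to reach, often with key-based (password-less) authentication, which is what makes this hop so cheap for the attacker.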




At this point, the malware has spread to more developers and has access to the cloud storage that many programmers share:




Finally, once machines with elevated privileges (dev-ops in the picture) are compromised, even sensitive cloud-stored data that is inaccessible to most developers is compromised as well:




Now the malware can rewrite the data stored on all of the server instances and in cloud storage with encrypted counterparts. This would be followed by the extortion and, if successful, decryption stages.


Why doesn’t this happen everywhere?


Although the approach is obvious and the payoffs are large, enterprise-targeted ransomware is not yet very common.


Pulling off such an extensive attack is difficult and may require insight into the targeted organization's development structure and data storage practices. Unlike a single-machine infection, an enterprise infection requires an approach that works without fault across multiple operating systems and remains stealthy, because programmers are more likely than the general populace to run varied operating systems and custom environments. C&C communication is also much more challenging, since some of the servers may never have internet access. Finally, the decryption stage is challenging, since it may be difficult to establish which machines belong to which user and which users paid the ransom. If decryption is not reliable, victims are less likely to pay.


Why doesn’t this happen with source code?


Unlike binary data, which is usually stored in only a few places, source code is stored on every developer's machine. If any machine is not affected (usually the case, since some servers and users are offline during the attack), the source code can be recovered.


Countermeasures for the enterprise


Cold offline backups are an obvious solution; however, any data created since the last backup would inevitably be lost.
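That trade-off can be sketched concretely. The file names, timestamps, and the `at_risk` helper below are all invented for illustration; the idea is simply that a cold backup caps the damage at whatever changed after the last snapshot:

```python
# Sketch of the cold-backup trade-off: anything modified after the last
# offline snapshot is lost if ransomware strikes now. All names and
# timestamps below are hypothetical.

LAST_COLD_BACKUP = 1_700_000_000  # epoch seconds of the last offline snapshot

current_files = {
    "invoices.db":  1_699_990_000,  # path -> last-modified time
    "orders.db":    1_700_050_000,
    "customers.db": 1_700_100_000,
}

def at_risk(files, backup_time):
    """Files modified after the snapshot would be unrecoverable from it."""
    return sorted(path for path, mtime in files.items() if mtime > backup_time)

print(at_risk(current_files, LAST_COLD_BACKUP))
# ['customers.db', 'orders.db']
```

Shortening the interval between snapshots shrinks this at-risk set, at the cost of more operational overhead.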


2-factor authentication will not stop such attacks, but it may make them more difficult.


Some initial attack vectors can be stopped with a network-based security solution or a local antivirus.


Developer culture also needs to change so that packages are not pulled from unaudited public repositories.


Another solution is to tier developer privileges, disallow SSH access to sensitive server instances, and possibly duplicate data across different cloud providers. This step, however, is either labor-intensive or pricey.
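The tiered-privilege idea can be sketched as a simple allow-list per role. The role and action names below are made up for illustration; a real deployment would express the same constraint through the cloud provider's IAM policies:

```python
# Minimal sketch of tiered privileges: each role gets an explicit allow-list,
# so a compromised ordinary developer account cannot destroy cloud storage.
# Role and action names are hypothetical.

POLICY = {
    "developer": {"storage:Read"},
    "dev-ops":   {"storage:Read", "storage:Write", "storage:Delete"},
}

def allowed(role, action):
    """Default-deny: anything not explicitly granted is refused."""
    return action in POLICY.get(role, set())

# A compromised developer account can read but not delete or overwrite data:
print(allowed("developer", "storage:Delete"))  # False
print(allowed("dev-ops", "storage:Delete"))    # True
```

Under this model, the attack described above stalls until a dev-ops account is compromised, which buys detection time and narrows the set of machines that must be protected most aggressively.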


Partial data recovery is possible from non-compromised servers such as cache servers (memcached or Redis instances) or traditional databases. Data in databases is more difficult to encrypt reversibly, and dumping data from a remote database server is time-consuming and resource-intensive, so it is unlikely to go unnoticed.
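Rebuilding partial records from a surviving cache can be sketched as follows. The `user:<id>:<field>` key layout is a hypothetical (though common) caching convention; in practice the entries would be read from a live memcached or Redis instance rather than a dict:

```python
# Sketch of partial recovery: reconstructing per-user records from a cache
# server the malware never reached. The key convention "user:<id>:<field>"
# is hypothetical but typical of cache deployments.

surviving_cache = {
    "user:7:email": "alice@example.com",
    "user:7:name":  "Alice",
    "user:9:email": "bob@example.com",
}

def rebuild(cache):
    """Group flat cache keys back into per-user records."""
    records = {}
    for key, value in cache.items():
        _, user_id, field = key.split(":")
        records.setdefault(user_id, {})[field] = value
    return records

print(rebuild(surviving_cache))
# {'7': {'email': 'alice@example.com', 'name': 'Alice'},
#  '9': {'email': 'bob@example.com'}}
```

The result is necessarily incomplete (only hot data lives in a cache), but it can recover records created after the last cold backup.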