Ransomware 1.0 → 3.0: what actually changed?
To understand why your old playbook is failing, it helps to zoom out and look at how ransomware evolved over the last decade. At a high level, you can think of three phases:
Ransomware 1.0 — “Encrypt and pray”
- Attackers broke in, encrypted files and demanded a ransom for the decryption key.
- The damage was mostly technical: downtime, data loss, rebuild effort.
- If you had clean, recent, offline backups, you could often refuse to pay and rebuild instead.
Ransomware 2.0 — Double extortion
- Gangs started stealing data before encrypting, then threatening to leak it if you didn’t pay.
- They set up “leak sites” on the dark web where they publish data samples, embarrassing emails, intellectual property and customer data.
- Now backups weren’t enough. Even if you could restore, you still faced regulatory fines, lawsuits and brand damage.
Ransomware 3.0 — Critical infrastructure & RaaS
- Attacks increasingly hit critical infrastructure: healthcare, energy, transportation, critical manufacturing, local governments and schools.
- Many crews operate as Ransomware-as-a-Service (RaaS), renting out their malware to affiliates who handle intrusion and extortion.
- Tactics include multi-layer extortion (data leaks, DDoS, harassment of customers and partners), destruction of backups and long-term persistence in networks.
The pattern is clear: ransomware has shifted from a noisy “smash-and-grab” into a slow, strategic business that treats your organisation as a repeatable victim profile. The question is no longer “can we restore files?” It’s “how do we keep our entire operation from being held hostage?”
Why your backup plan breaks under Ransomware 3.0
Backups are still crucial. But modern ransomware groups design campaigns specifically to make “just restore” impossible or extremely painful. Here’s how they break the old assumptions:
1. They go after your backups first
- Once inside, attackers quietly map your environment — including backup servers, hypervisors and storage arrays.
- They attempt to delete or corrupt snapshots, replication jobs and backup catalogs before launching encryption.
- The real goal: when you finally notice the attack, your “clean” restore points are either gone or untrustworthy.
2. They exfiltrate data, then encrypt
- Even if your offline backups survive, your crown-jewel data may already be in the attacker’s hands.
- They threaten to leak source code, contracts, health records, industrial diagrams, and executive mailboxes.
- Regulators, customers and partners now enter the story — backups don’t fix reputational and legal fallout.
3. They target your business processes, not just servers
- In manufacturing and logistics, ransomware can disrupt OT/ICS environments, scheduling systems and supply chains.
- In healthcare, it can delay surgeries, lab results and prescriptions — turning cyber risk into patient risk.
- In the public sector, it can halt permits, payroll, emergency dispatch and public records systems.
4. They weaponize time
- Modern crews often dwell in networks for weeks or months, disabling alerts and learning your patterns.
- The final “big bang” encryption event is just the visible end of a much longer compromise.
- Recovery is no longer just about restoring data; it’s about rebuilding trust in systems, identities and logs.
Bottom line: you still need backups — but you also need strong identity, segmentation, detection, and incident response so that ransomware can’t turn those backups into a false sense of security.
Critical infrastructure: when outages become life-and-death
When a design agency gets hit by ransomware, it’s painful and expensive. When a hospital, port, or pipeline is hit, the impact can ripple into real-world safety, national security and the economy.
Recent reports from law-enforcement agencies and security firms show:
- Ransomware remains one of the most pervasive threats to critical infrastructure operators, with year-on-year growth in incident reports across sectors like healthcare, energy, manufacturing and transport.
- Industrial and manufacturing organisations now account for a significant share of global ransomware attacks, causing material downtime and safety risks on shop floors and in supply chains.
- Attackers increasingly exploit known but unpatched vulnerabilities in VPNs, remote access systems and OT gateways rather than relying purely on phishing.
Why go after critical infrastructure at all? Because:
- Time pressure is extreme. Hospitals and utilities can’t tolerate long outages, so attackers believe they’re more likely to pay.
- Legacy tech is common. Unsupported OT systems, old protocols and fragile networks make hardening tricky.
- Complex stakeholder maps (regulators, insurers, boards, the public) can slow decision-making just when speed matters most.
For critical infrastructure, ransomware is no longer “just IT’s problem”. It’s an executive, board and national resilience issue — and your playbook has to reflect that.
RaaS: ransomware as a global franchise business
Ransomware-as-a-Service (RaaS) turns what used to be a small group of elite operators into a franchise-style ecosystem. Instead of building everything themselves, criminals now specialise:
How the RaaS model works (high level)
- Core crew: develops the ransomware family, maintains infrastructure and runs “support”.
- Affiliates: break into victims, move laterally, exfiltrate data and deploy the payload.
- Initial access brokers: sell footholds into networks (compromised VPNs, RDP, credentials).
- Money launderers: help move and clean ransom payments through crypto and financial channels.
The affiliates typically get a cut of each ransom, while the core crew handles updates, leak sites, and negotiations. The result:
- Lower barrier to entry: Less-skilled criminals can still launch high-impact attacks by renting tools instead of coding them.
- Faster innovation: If one group is disrupted, tooling and playbooks often resurface under a new name with minor tweaks.
- Wider target pool: From small clinics and councils to global manufacturers — if there’s money or leverage, someone in the ecosystem is interested.
Treat RaaS crews like a shadow SaaS industry: they ship updates, run marketing (boastful leak sites), and measure conversion (who pays and why). Your job is not just to block a single piece of malware, but to break their business model wherever you can.
Defense playbook: resilience, not just recovery
So what do you actually change on Monday morning? Think in five layers: exposure, identity, data, detection/response, and people.
1. Reduce exposure and harden the obvious doors
- Minimise and monitor remote access (VPNs, RDP, admin portals, OT gateways); a basic reachability check, sketched after this list, is a cheap first pass.
- Aggressively patch internet-facing systems and high-value apps; many critical incidents start with known, unpatched vulnerabilities.
- Enforce strong MFA everywhere it makes sense, especially for privileged accounts and remote access.
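As a rough illustration of the first bullet, here is a minimal Python sketch that checks whether commonly abused remote-access ports on your own hosts accept connections. The host names and port list are assumptions for illustration; only point this at infrastructure you own, and treat it as a complement to, not a substitute for, proper attack-surface management tooling.

```python
import socket

# Ports commonly abused for initial access: SSH, RDP and typical VPN/admin portals.
# This list is illustrative -- extend it for your own environment.
RISKY_PORTS = {22: "SSH", 3389: "RDP", 443: "HTTPS/VPN portal", 8443: "admin portal"}

def check_exposure(host: str, timeout: float = 2.0) -> list[str]:
    """Return human-readable findings for ports that accept a TCP connection."""
    findings = []
    for port, label in RISKY_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{host}:{port} ({label}) is reachable")
        except OSError:
            pass  # closed, filtered, or unreachable -- nothing exposed on this port
    return findings

if __name__ == "__main__":
    # Hypothetical hosts -- replace with external IPs/hostnames you actually own.
    for host in ["203.0.113.10", "vpn.example.com"]:
        for finding in check_exposure(host):
            print(finding)
```

Dedicated attack-surface tools go much further (service fingerprinting, vulnerability mapping), but even a check at this level regularly catches forgotten test boxes and legacy portals.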
2. Treat identity as the new perimeter
- Apply least privilege: remove standing domain admin rights and use just-in-time elevation instead.
- Separate admin accounts from everyday user accounts and segment high-value systems.
- Monitor for abnormal account behaviour: logins from unusual locations, mass access to file shares and similar signals (a toy detection rule is sketched after this list).
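To make that last bullet concrete, here is a toy Python sketch of the kind of logic an identity-monitoring rule encodes. The event format, field names and threshold are invented for illustration; in practice this signal comes from your identity provider or SIEM, not a hand-rolled script.

```python
from collections import defaultdict

# Toy auth/file-access events; in reality these would be exported from your
# identity provider or SIEM. All field names here are assumptions.
events = [
    {"user": "alice", "action": "login", "country": "DE"},
    {"user": "alice", "action": "login", "country": "NG"},
    {"user": "svc-backup", "action": "share_read", "share": r"\\fs01\finance"},
]

known_countries = defaultdict(set)   # user -> countries seen before
shares_touched = defaultdict(set)    # user -> distinct shares in this window
SHARE_THRESHOLD = 25                 # illustrative; tune to your environment

for e in events:
    if e["action"] == "login":
        # Alert when a user appears from a country we have never seen for them.
        if known_countries[e["user"]] and e["country"] not in known_countries[e["user"]]:
            print(f"ALERT: {e['user']} logged in from new country {e['country']}")
        known_countries[e["user"]].add(e["country"])
    elif e["action"] == "share_read":
        # Alert on mass access to distinct file shares, a common pre-encryption pattern.
        shares_touched[e["user"]].add(e["share"])
        if len(shares_touched[e["user"]]) > SHARE_THRESHOLD:
            print(f"ALERT: {e['user']} accessed {len(shares_touched[e['user']])} distinct shares")
```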
3. Upgrade your backup & recovery posture
- Maintain immutable, offline backups (or logically separated “air-gapped” storage) that attackers can’t easily modify even with domain admin rights; one concrete option is sketched after this list.
- Regularly test restores at scale — not just a single file, but key applications and full environments.
- Document RPO/RTO trade-offs with the business so everyone knows what “good enough” looks like under pressure.
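As one concrete example of immutability, here is a hedged boto3 sketch that creates an Amazon S3 bucket with Object Lock in COMPLIANCE mode, so that even compromised admin credentials cannot delete or rewrite backups before the retention window expires. The bucket name and 30-day window are placeholders; most enterprise backup products offer an equivalent feature under a different name.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name. Object Lock is normally enabled at bucket creation
# time. (create_bucket as written assumes us-east-1; other regions also need
# a CreateBucketConfiguration with a LocationConstraint.)
bucket = "corp-backups-immutable"
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: nobody, not even the account root user or a compromised
# cloud admin, can shorten or remove retention until the 30 days expire.
# GOVERNANCE mode would allow privileged override, which is weaker against
# a full admin takeover.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```

The design point matters more than the vendor: retention must be enforced below the layer that domain or cloud admins control, because those are exactly the credentials attackers will hold when they come for your backups.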
4. Invest in detection and incident response
- Deploy modern endpoint and server protection with strong behavioural detection and containment (a low-tech canary-file complement is sketched after this list).
- Capture and retain logs from identity providers, EDR, firewalls and critical apps so you can reconstruct what happened.
- Run regular tabletop exercises that simulate Ransomware 3.0 scenarios, including data leaks and OT impact, not just IT encryption.
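One cheap, low-tech complement to commercial detection is a canary (honeypot) file check: plant decoy files that no legitimate process should ever touch, then alert the moment they change. The path and polling interval below are illustrative assumptions; a real deployment would wire the alert into your EDR or paging system rather than print to a console.

```python
import hashlib
import os
import time

CANARY_DIR = "/srv/shares/finance/.canary"  # hypothetical decoy folder

def fingerprint(directory: str) -> dict[str, str]:
    """Hash every canary file so any modification, rename or deletion shows up."""
    fp = {}
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            fp[path] = hashlib.sha256(f.read()).hexdigest()
    return fp

baseline = fingerprint(CANARY_DIR)
while True:
    time.sleep(10)  # polling interval is illustrative
    if fingerprint(CANARY_DIR) != baseline:
        # In a real deployment: page on-call, isolate the host, disable the
        # writing account. Printing stands in for that response hook.
        print("ALERT: canary files changed -- possible mass encryption in progress")
        break
```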
5. Prepare the humans: execs, staff and partners
- Train staff to recognise phishing, social engineering and unusual access requests — still a major initial access vector.
- Align legal, PR, compliance and cyber insurance ahead of time so you’re not negotiating contracts mid-crisis.
- Share threat intel and practice joint response with key suppliers and managed service providers.
None of these controls are magic on their own. But together, they turn ransomware from an existential threat into a tough but survivable operational incident — even when backups are under attack.
Data snapshots: sectors & initial access vectors (sample charts)
To make the risk landscape more concrete, here are two simple sample datasets: one showing which sectors take the brunt of ransomware attacks, and another showing which initial access vectors show up most often in 3.0-style incidents.
Chart 1 (sectors): industrials and manufacturing, healthcare and the public sector together account for a large share of global incidents, reflecting how valuable uptime is in these environments.
Chart 2 (initial access vectors): phishing and exploited vulnerabilities remain dominant, while compromised remote access and supply-chain attacks make up a smaller but growing slice of Ransomware 3.0 entry points.
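If you want to reproduce charts like these for your own reporting, here is a small matplotlib sketch. The percentages are illustrative placeholders loosely matching the captions above, not real statistics; substitute figures from your own incident data or a current threat report before using this in any briefing.

```python
import matplotlib.pyplot as plt

# Illustrative placeholder percentages only -- replace with real figures.
sectors = {
    "Manufacturing/industrial": 30,
    "Healthcare": 18,
    "Public sector & education": 17,
    "Financial services": 12,
    "Other": 23,
}
vectors = {
    "Phishing": 35,
    "Exploited vulnerabilities": 30,
    "Compromised remote access": 20,
    "Supply chain": 10,
    "Other": 5,
}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4))
ax1.bar(list(sectors), list(sectors.values()))
ax1.set_title("Ransomware incidents by sector (% of sample)")
ax1.tick_params(axis="x", labelrotation=30)
ax2.bar(list(vectors), list(vectors.values()))
ax2.set_title("Initial access vectors (% of sample)")
ax2.tick_params(axis="x", labelrotation=30)
fig.tight_layout()
plt.show()
```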
Executive checklist: 15 questions before the next incident
You don’t need to be a technical expert to drive the right changes. Use these questions in your next board, risk or leadership meeting:
- Do we know which systems and data, if encrypted or leaked, would shut us down within 72 hours?
- When did we last test a full restore of a critical application from backup — not just a file?
- Are our most important backups immutable and offline, or can domain admins delete them?
- Which internet-facing systems are exposed today, and how quickly are we patching high-risk vulnerabilities?
- Do all privileged accounts use strong MFA and just-in-time access, or are there permanent standing admins?
- Can we quickly isolate a business unit, plant or hospital from the rest of the network if needed?
- Do we have up-to-date contact paths for law enforcement, regulators and incident-response partners?
- Have we rehearsed a scenario where attackers both encrypt and leak data at the same time?
- How would we communicate with staff and the public if email and chat were unavailable?
- Do we understand the cyber dependencies of our key suppliers and MSPs?
- Is cyber insurance aligned with our actual risk profile and response plan?
- Are OT and IT teams aligned on incident response, or are they operating in silos?
- Have we defined in advance who can authorise ransom-related decisions, under what conditions?
- How often do we brief the board on ransomware risk specifically, not just “cyber” in general?
- Most importantly: if an attack landed tonight, would we be having this conversation for the first time?
The goal isn’t to tick every box perfectly; it’s to avoid discovering your gaps while your screens are already flashing ransom notes.
FAQs: Ransomware 3.0, critical infrastructure & RaaS
Are good backups enough to protect us from Ransomware 3.0?
Backups are necessary, but not sufficient. Modern gangs steal data first, try to corrupt or delete recovery points, and may also target OT systems and your wider ecosystem. You still need strong preventative controls, detection, segmentation and a tested incident-response plan.
Should we ever pay the ransom?
That’s ultimately a legal, ethical and business decision that depends on your jurisdiction, regulators, insurers and the specifics of the case. Many authorities discourage paying because it fuels the ecosystem and offers no guarantees. The key is to decide your principles and escalation path before an incident, with legal counsel and law enforcement input.
Is RaaS only used by sophisticated, state-backed groups?
No. RaaS exists precisely to lower the barrier to entry. Some state-backed groups may use similar tooling, but many RaaS affiliates are financially motivated criminals with varying skill levels who buy or rent access, then follow playbooks from the core crew.
Are smaller organisations really targets too?
Unfortunately, yes. Smaller hospitals, councils, schools and manufacturers are often hit because they have valuable data and operations but fewer resources for security. RaaS affiliates may specifically target “mid-market” victims they believe will pay but can’t afford long downtime.
Where should we start if budget and staff are limited?
Focus on the basics that stop the majority of incidents: harden and reduce external exposure, enforce MFA, patch high-risk systems quickly, secure and test backups, and run a simple tabletop exercise so leadership knows what to do. Those moves alone can dramatically reduce both likelihood and impact.
How often should we revisit our ransomware strategy?
At least annually, and after any major change: new facilities, acquisitions, major system upgrades, significant incidents in your sector, or key leadership turnover. Ransomware crews iterate quickly; your strategy should, too.