PICERL — The Six Phases (Quick Reference)
Preparation → Identification → Containment → Eradication → Recovery → Lessons Learned
Playbook 1 — Ransomware
Preparation
- Maintain offline, tested backups following the 3-2-1 rule (3 copies, 2 media types, 1 offsite) — the most important ransomware defence
- Segment the network with VLANs to limit lateral movement if ransomware spreads
- Disable SMBv1, restrict RDP access, enforce MFA on all remote access
- Deploy and configure EDR (Endpoint Detection and Response) with ransomware behavioural detection
- Document all critical assets and their backup/recovery procedures
Identification
- User reports files renamed with unknown extensions, ransom note on desktop
- SIEM/EDR alerts on mass file encryption activity or Volume Shadow Copy (VSS) deletion
- Identify patient zero — the first infected system — via EDR telemetry or log analysis
- Determine the ransomware family (Nokoyawa, LockBit, etc.) — guides decryption tool search and legal reporting
- Confirm scope: how many systems are encrypted? Has data been exfiltrated (double extortion)?
Containment
- Immediately isolate infected machines — disconnect the network cable or disable the NIC. Do not simply power off: volatile memory evidence would be lost
- Block known C2 (command-and-control) IPs and domains at the firewall
- Disable compromised accounts — attacker may have stolen credentials
- Quarantine the affected network segment at the switch/VLAN level
- Do not pay the ransom at this stage — legal, law enforcement, and insurance must be involved first
Eradication
- Preserve forensic evidence: memory dumps, disk images, network logs — before reimaging
- Check ransomware decryption tools (NoMoreRansom.org) before reimaging — some families have free decryptors
- Reimage all infected systems from known-good baseline — do not attempt to clean in place
- Reset all credentials — assume all accounts on infected systems are compromised
- Identify and patch the initial access vector (phishing link, RDP brute force, unpatched vulnerability)
- Remove all persistence mechanisms: scheduled tasks, registry run keys, startup entries
Recovery
- Restore data from verified clean backups — confirm backup integrity before restoring
- Bring systems online gradually — monitor closely for re-infection before expanding
- Verify EDR is active and detecting on restored systems before returning to production
- Notify affected users, management, legal counsel, and regulators per breach notification requirements
- Monitor for signs of data exfiltration — double extortion ransomware may still threaten to publish stolen data
Lessons Learned
- Timeline review: how did the ransomware enter, and what was the dwell time (how long it was present before detection)?
- Review backup strategy — were backups clean? Were they accessible? How long did recovery take?
- Update IR plan and playbook with lessons from this incident
- Conduct phishing simulation and security awareness training if phishing was the vector
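The VSS-deletion indicator in the identification bullets above lends itself to a simple log scan. This is a minimal sketch: the command-line patterns are well-known ransomware behaviours, but the function name and pattern list are illustrative assumptions, not a production detection rule.

```python
import re

# Command lines commonly seen when ransomware deletes Volume Shadow Copies
# before encrypting. Patterns are illustrative, not exhaustive.
VSS_DELETION_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wmic\s+shadowcopy\s+delete", re.IGNORECASE),
    re.compile(r"bcdedit\s+/set\s+.*recoveryenabled\s+no", re.IGNORECASE),
]

def flag_vss_deletion(log_lines):
    """Return log lines matching known shadow-copy deletion commands."""
    return [line for line in log_lines
            if any(p.search(line) for p in VSS_DELETION_PATTERNS)]
```

In practice this logic would live in an EDR or SIEM correlation rule rather than a standalone script; the point is that shadow-copy deletion is a high-confidence ransomware signal worth alerting on.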
Playbook 2 — Data Breach
Preparation
- Classify all data by sensitivity — know where PII, PHI, PCI data lives before a breach occurs
- Implement DLP (Data Loss Prevention) tools to detect and alert on large data transfers
- Establish legal counsel relationships and regulatory notification procedures in advance
- Know your breach notification deadlines: GDPR requires notifying the supervisory authority within 72 hours, HIPAA allows 60 days, and US state laws vary
Identification
- SIEM alert on unusually large outbound data transfers, or access to sensitive file shares at off-hours
- DLP alert on emails containing credit card numbers, SSNs, or other sensitive data patterns
- Determine: what data was accessed, by whom, when, and was it exfiltrated or only viewed?
- Check access logs for the affected systems — determine if internal or external actor
- Document a preliminary scope assessment — data types, record count, affected individuals
Containment
- Revoke compromised credentials immediately — disable accounts, rotate API keys, invalidate sessions
- Block exfiltration paths — close outbound connections to attacker-controlled infrastructure
- Preserve all evidence before containment actions change logs — forensic copy of relevant systems
- Engage legal counsel immediately — they determine if law enforcement notification is required and guide all external communication
- Do not publicly disclose the breach before legal review — premature disclosure can create liability
Eradication
- Remove attacker access — all backdoors, compromised accounts, and persistence mechanisms
- Patch the exploited vulnerability (SQL injection, misconfigured S3 bucket, unpatched CVE)
- Verify no additional exfiltration paths remain — audit all egress points
- Conduct full credential review — assume any credentials on affected systems are stolen
Recovery
- Send breach notifications to regulators within required timeframes (GDPR 72h, HIPAA 60d)
- Notify affected individuals per legal requirements — include what data, what happened, what they should do
- Offer credit monitoring if financial or identity data was breached
- Restore any affected systems from clean backups — verify integrity before returning to production
Lessons Learned
- How was initial access gained — credential theft, vulnerability exploitation, or an insider?
- How long did the attacker have access before detection? (Dwell time)
- Were DLP and SIEM controls adequate? What additional monitoring is needed?
- Update data classification and access control policies to reduce future exposure
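The notification deadlines above are strict enough that teams often pre-compute them at breach discovery. A minimal sketch, assuming the two windows named in this playbook (real deadlines depend on jurisdiction and legal counsel's reading; the dictionary keys are illustrative labels):

```python
from datetime import datetime, timedelta, timezone

# Regulatory notification windows from the playbook above.
NOTIFICATION_WINDOWS = {
    "GDPR (supervisory authority)": timedelta(hours=72),
    "HIPAA (HHS / affected individuals)": timedelta(days=60),
}

def notification_deadlines(discovery_time):
    """Map each regulation to its notification deadline, counted from
    the moment the breach was discovered."""
    return {regulation: discovery_time + window
            for regulation, window in NOTIFICATION_WINDOWS.items()}
```

Computing these immediately at discovery, and putting them on the incident timeline, keeps the legal clock visible throughout containment and eradication.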
Playbook 3 — DDoS Attack
Preparation
- Subscribe to a DDoS mitigation service (Cloudflare, AWS Shield, Akamai) before an attack — reactive subscription during an attack is too slow
- Over-provision bandwidth where possible — gives headroom during an attack
- Configure rate limiting and geo-blocking on edge devices
- Establish ISP contact point for upstream filtering — ISP can null-route attack traffic before it reaches your network
Identification
- Sudden spike in inbound traffic, dramatically increased connection attempts, service degradation or outage
- Identify attack type: volumetric (bandwidth flood), protocol (SYN flood, ping of death), application layer (HTTP flood)
- Distinguish DDoS from legitimate traffic spike — check source diversity and traffic patterns
- Note: DDoS is often a smokescreen for a simultaneous breach attempt — check for other indicators
Containment
- Activate DDoS mitigation service — scrubbing centre filters malicious traffic, passes legitimate traffic
- Implement blackhole routing (null route) for the targeted IP if service must be sacrificed to protect the network
- Contact ISP for upstream filtering — block attack traffic at the ISP level before it reaches your infrastructure
- Apply ACLs to block known attacking IP ranges or ASNs if attack is from identifiable sources
- Rate-limit connections per source IP at the firewall
Eradication
- DDoS attacks typically end when attackers stop or move on — eradication is about removing the attack path, not a persistent attacker
- Identify and block all identified botnet IPs used in the attack
- If attack exploited an amplification vector (DNS amplification, NTP amplification), close the misconfiguration
Recovery
- Gradually restore services — monitor for return of attack traffic before fully opening
- Remove temporary blocks carefully — some legitimate users may have been caught in IP blocks
- Verify service performance is back to baseline — check response times, error rates
- Keep mitigation services active for 24–48 hours — attacks often resume after a pause
Lessons Learned
- How quickly was the attack detected? What monitoring gaps existed?
- Was DDoS protection in place adequate? Should a higher-tier mitigation service be subscribed?
- Was this a smokescreen? Review logs for other compromise indicators during the attack window
- Calculate business impact — downtime cost justifies investment in prevention
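The identification step above — distinguishing a DDoS from a legitimate traffic spike by source diversity — can be sketched as a rough heuristic. The thresholds and function name here are illustrative placeholders, not tuned values: legitimate spikes tend to come from a moderate set of sources at plausible per-IP rates, while attack traffic shows either extreme source diversity (botnet or spoofed IPs) or a few sources at extreme rates.

```python
from collections import Counter

def classify_spike(source_ips, max_unique_ratio=0.9, max_per_ip=100):
    """Crude triage of a traffic spike by source-IP diversity.

    source_ips: one entry per request/packet in the sample window.
    Returns 'suspected-ddos' or 'likely-legitimate'.
    """
    counts = Counter(source_ips)
    unique_ratio = len(counts) / len(source_ips)  # near 1.0 => every packet a new IP
    top_rate = max(counts.values())               # heaviest single source
    if unique_ratio > max_unique_ratio or top_rate > max_per_ip:
        return "suspected-ddos"
    return "likely-legitimate"
```

A real deployment would feed this from NetFlow or edge-proxy logs and combine it with geography, ASN, and user-agent patterns before acting on the verdict.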
Playbook 4 — Phishing / Business Email Compromise
Identification
- User reports suspicious email with link or attachment, or reports clicking a link and entering credentials
- SIEM alert on user authenticating from unusual geographic location after clicking a link
- Determine: did user click the link? Did they enter credentials? Did they open an attachment?
- Pull the email headers — identify actual sending server, check SPF/DKIM/DMARC pass/fail
- Identify all other users who received the same email — scope the campaign
Containment
- If credentials were entered: immediately reset the affected account's password and revoke all active sessions
- Enable or enforce MFA on the compromised account before restoring access
- Block the phishing domain and sending IP at the email gateway and web proxy
- Recall/quarantine the phishing email from all recipient mailboxes using email admin tools
Eradication
- Check if the compromised account sent any outbound phishing emails to contacts
- Audit the compromised account's activity — what was accessed, were email rules set up (attacker forwarding), were files shared?
- Remove any inbox rules the attacker created (common BEC tactic: forward all emails to attacker)
- If malware was delivered via attachment, treat as a malware incident — scan and remediate the endpoint
- Review OAuth app permissions — attackers may have authorised malicious apps to access the account
Lessons Learned
- Why did the phishing email bypass email security filters? Update rules accordingly
- Conduct phishing simulation training with the affected user and their team
- Review DMARC, DKIM, and SPF configuration for your own domain to prevent outbound spoofing
- Enforce MFA across the organisation — a phished password alone should not grant access
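The header-analysis step in this playbook — pulling SPF/DKIM/DMARC verdicts from a suspicious message — can be sketched with Python's standard-library `email` module. This is a simplified parser, assuming the common `mechanism=result` form of the Authentication-Results header; real headers vary by receiving server, so treat it as a triage aid only.

```python
from email import message_from_string

def auth_results(raw_email):
    """Extract SPF/DKIM/DMARC verdicts from the Authentication-Results
    header of a raw RFC 5322 message."""
    msg = message_from_string(raw_email)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for mech in ("spf", "dkim", "dmarc"):
            if clause.startswith(mech + "="):
                # First token after '=' is the verdict (pass, fail, none, ...)
                verdicts[mech] = clause.split("=", 1)[1].split()[0]
    return verdicts
```

A phish will often show `spf=fail` or `dmarc=fail` against the display-from domain, which is exactly what the "pull the email headers" step is looking for.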
Containment Strategies — When to Use Each
| Strategy | What It Does | Use When | Trade-off |
|---|---|---|---|
| Network Isolation | Disconnect system from all networks — pull cable, disable NIC, quarantine VLAN | Active malware spreading laterally, compromised system, ransomware detected | System is unavailable — business impact. May alert attacker they've been detected. |
| Segmentation | Move affected system to an isolated VLAN — internet blocked, internal blocked, only forensics team can access | Need to monitor attacker behaviour (honeypot), maintain limited availability during investigation | Attacker still has access to the isolated system — must monitor closely |
| Blackholing / Null Route | Route all traffic to the targeted IP to /dev/null — drops all packets, service completely unavailable | DDoS overwhelming the network — sacrifice the targeted service to protect everything else | Service is completely unavailable to everyone, including legitimate users |
| Account Suspension | Disable the compromised user account — prevents authentication | Credential compromise, phishing, insider threat — stop attacker from using stolen identity | Legitimate user cannot work — coordinate with affected user for rapid recovery |
| Evidence Preservation (No Action) | Monitor without containment — collect evidence while attacker is unaware | Law enforcement investigation, threat intelligence gathering, when attacker access needs to be proven | Risk of additional damage while monitoring — requires legal authorisation and careful management |
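The table's decision logic can be encoded as a minimal lookup for exam drill purposes. The condition keys are invented labels for the "Use When" column; a real decision also weighs business impact and legal constraints.

```python
# Condition -> first-line containment strategy, per the table above.
CONTAINMENT = {
    "ransomware-spreading":   "Network Isolation",
    "monitor-attacker":       "Segmentation",
    "volumetric-ddos":        "Blackholing / Null Route",
    "credential-compromise":  "Account Suspension",
    "law-enforcement-case":   "Evidence Preservation (No Action)",
}

def pick_containment(condition):
    """Return the table's strategy for a condition, or escalate if unknown."""
    return CONTAINMENT.get(condition, "escalate-to-ir-lead")
```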
Order of Volatility
When collecting evidence, start with the most volatile data (lost when the system is powered off) and work toward the least volatile. The Security+ exam tests this order:
1. CPU registers and cache → 2. RAM / memory contents → 3. Running processes → 4. Network connections → 5. Filesystem (temporary files, open files) → 6. Disk storage → 7. Remote logs → 8. Archived/backup media
This is why you do not immediately reboot or power off a compromised system — you lose RAM contents (encryption keys, running processes, network connections) that may be critical to the investigation.
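As a memorisation aid, the eight-step order above can be encoded so a list of pending evidence sources is sorted most-volatile-first. The short item names are illustrative stand-ins for the exam's categories.

```python
# The exam's order of volatility, most volatile first.
VOLATILITY_ORDER = [
    "cpu-registers-cache",
    "ram",
    "running-processes",
    "network-connections",
    "filesystem-temp",
    "disk",
    "remote-logs",
    "archived-media",
]

def collection_plan(evidence_items):
    """Sort pending evidence sources so the most volatile are captured first."""
    rank = {name: i for i, name in enumerate(VOLATILITY_ORDER)}
    return sorted(evidence_items, key=lambda item: rank[item])
```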
Containment before eradication — you always stop the spread before removing the threat. Eradicating malware from one machine while it's still spreading to others is pointless.
Legal counsel first for data breaches — before notifying the public, law enforcement, or even affected users, legal counsel must be engaged. They determine what must be disclosed, when, and how. Premature disclosure can create liability.
Lessons Learned is mandatory — the post-incident review is not optional. Security+ scenario questions often describe skipping this phase — the correct answer is always that the organisation should conduct a lessons learned review regardless of how the incident resolved.