The 6 Steps - Memorise This Order
1. Identify the Problem → 2. Establish a Theory → 3. Test the Theory → 4. Establish a Plan → 5. Implement the Solution → 6. Document Findings

The most common exam traps: skipping to the fix before identifying and theorising, skipping documentation (step 6 is mandatory), and changing multiple variables at once in step 3.
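
The order, including the loop back from step 3 to step 2 when a theory is disproved, can be sketched as a simple flow. This is an illustrative Python sketch, not an official CompTIA artifact; the function and parameter names are ours, and the theory test is injected so the flow can be exercised without real hardware:

```python
# The six CompTIA troubleshooting steps, in exam order.
STEPS = [
    "Identify the Problem",
    "Establish a Theory of Probable Cause",
    "Test the Theory to Determine the Cause",
    "Establish a Plan of Action and Identify Potential Effects",
    "Implement the Solution or Escalate",
    "Document Findings, Actions, and Outcomes",
]

def troubleshoot(theories, test_theory, max_attempts=10):
    """Walk the 6 steps; a disproved theory loops back to step 2.

    `theories` is an ordered list of candidate causes (most probable first);
    `test_theory(theory)` returns True if the theory is confirmed.
    Returns (confirmed_cause_or_None, list_of_steps_taken).
    """
    trail = [STEPS[0]]                       # step 1: gather information first
    for attempt, theory in enumerate(theories):
        if attempt >= max_attempts:
            break
        trail += [STEPS[1], STEPS[2]]        # steps 2-3: theorise, then test
        if test_theory(theory):
            trail += [STEPS[3], STEPS[4], STEPS[5]]  # steps 4-6 only after confirmation
            return theory, trail
    trail.append("Escalate with findings documented")  # step 3's other exit
    return None, trail
```

Note that steps 4 to 6 are only reached after a confirmed theory, and that running out of theories ends in escalation, never in a blind fix.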

The 6 Steps - In Detail

1. Identify the Problem
Gather information before touching anything. Interview the user - what changed, when did it start, is it intermittent? Check error messages, event logs, and system information. Distinguish symptoms from the actual problem: "screen is blank" is a symptom; the problem could be GPU, cable, PSU, or software. Duplicate the issue if possible - you cannot fix what you cannot reproduce.
Ask open-ended then closed questions. Check Event Viewer, Device Manager, SMART data. Identify scope - one user, one department, or all users? Note any recent changes.
2. Establish a Theory of Probable Cause
Form a hypothesis based on what you gathered. Start simple - question the obvious first (is it plugged in?) before diagnosing complex failures. For network problems, start at OSI Layer 1 and work upward. List causes ranked by probability. Research if needed - knowledge base, documentation, search.
Apply Occam's Razor. Recent changes are the most common cause of sudden failures. Document your theory before proceeding.
3. Test the Theory to Determine the Cause
Change one variable at a time. Swap a known-good component, run diagnostics, or check a configuration against baseline. If the theory is confirmed - move to step 4. If it's disproved - return to step 2 for a new theory. Never change multiple things simultaneously. If you cannot determine the cause - escalate.
Use known-good spares. Run built-in diagnostics. If the problem disappears when you swap something - that is your cause. If nothing changes - that component is not the cause.
4. Establish a Plan of Action and Identify Potential Effects
Before implementing any fix, plan the solution and consider its impact. Will it require downtime? Does it affect other systems? Should you back up data first? Does it need change management approval? Identify side effects - a firmware update that fixes one bug may break a peripheral.
Back up data before making changes. Check change management policy. Identify rollback plan. Schedule downtime if needed. Get sign-off from the appropriate authority.
5. Implement the Solution or Escalate
Execute the fix. Verify the solution actually resolves the problem - don't assume it worked because no error appeared. Test the original symptom directly: if the user couldn't print, make them print. If you cannot resolve the issue - escalate to the next level of support with all findings documented.
Apply the fix. Test the original problem is gone. Test that no new problems were introduced. Have the user confirm resolution.
6. Document Findings, Actions, and Outcomes
Documentation is not optional - it is step 6 of the process. Record the original problem, root cause, what you did to fix it, and the outcome. Update the knowledge base or ticket system. Educate the end user if their actions caused the issue. This is how organisations learn from incidents and how future technicians solve the same problem faster.
Update the help desk ticket with cause, fix, and time. Document any configuration changes. Note preventive measures. Educate the user if needed. Close the ticket only after confirmed resolution.
The 3 Most Tested Methodology Mistakes

Skipping to the fix - the most common wrong answer. Scenarios describe symptoms and ask what to do first. The correct answer is almost always gather information / identify the problem, not implement a solution.

Making multiple changes at once - violates step 3. If you swap the RAM and reinstall the OS simultaneously and the problem goes away, you don't know which change fixed it. One variable at a time, always.

Skipping documentation - step 6 is mandatory. When a scenario ends with "the problem is fixed" and asks what's next, the answer is document findings. CompTIA consistently tests that documentation is part of the job, not an afterthought.

Domain-Specific Troubleshooting Tracks

Hardware Troubleshooting
Identify: POST beep codes, no display, won't power on, peripheral not detected
Theory: Start with power (PSU, cables, seating), then peripherals, then internal components
Test: Swap known-good PSU, reseat RAM/GPU, POST with minimal hardware
Plan: Back up data before component replacement
Implement: Replace the faulty part. Verify the system POSTs, boots, and functions correctly
Document: Failed component, replacement part number, date
Network Troubleshooting
Identify: No internet, slow speeds, no IP - scope one user vs all
Theory: OSI bottom-up: Layer 1 (cable) → 2 (switch/VLAN) → 3 (IP/DHCP) → DNS/firewall
Test: ping 127.0.0.1 → gateway → external IP → hostname (nslookup)
Plan: Config changes need change management approval
Implement: Fix root cause (cable, IP config, DNS, port). Verify connectivity.
Document: Root cause, fix applied, config changes made
OS and Software Troubleshooting
Identify: BSOD, app crashes, slow performance, driver errors - note all error codes
Theory: Recent changes? Windows Update, new driver, new install? Check Event Viewer
Test: Safe Mode test, SFC /scannow, DISM, Task Manager resource usage
Plan: Driver rollback, System Restore, or repair install - each has data risk
Implement: Roll back driver or restore. Verify no regression.
Document: Error codes, what was tried, what worked
Security Incident Troubleshooting
Identify: Malware symptoms, unusual account activity, unexpected traffic
Theory: Ransomware, RAT, phishing, insider? Check IoCs in SIEM/EDR
Test: Isolate system before testing. Preserve evidence (order of volatility)
Plan: Containment plan - notify security team and legal before changes
Implement: Follow IR playbook: contain → eradicate → recover
Document: Full incident report, IoCs, chain of custody for evidence

The Loopback Ping Sequence

Network Connectivity Troubleshooting - Run In This Order

ping 127.0.0.1 - tests the local TCP/IP stack. If this fails, the NIC driver or TCP/IP stack is corrupt.

ping [own IP] - confirms the NIC has a valid IP configuration assigned.

ping [default gateway] - tests Layer 3 reachability on the local subnet. If this fails, check subnet mask, cable, or VLAN.

ping 8.8.8.8 (external IP) - confirms internet routing works. If this succeeds but hostnames fail, DNS is the problem.

ping google.com (hostname) - tests DNS resolution. If this fails but IP ping succeeds, fix the DNS server setting.
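
The escalating sequence above reduces to an ordered list of checks where the first failure names the fault domain. A minimal sketch: the results are injected here so it runs without a live network, but in practice each entry would shell out to ping or nslookup:

```python
# Each check is paired with what its failure implicates, in the order
# described above: stack -> NIC config -> local subnet -> routing -> DNS.
PING_SEQUENCE = [
    ("ping 127.0.0.1",  "local TCP/IP stack or NIC driver"),
    ("ping own IP",     "NIC IP configuration"),
    ("ping gateway",    "local subnet: mask, cable, or VLAN"),
    ("ping 8.8.8.8",    "internet routing"),
    ("ping google.com", "DNS resolution"),
]

def diagnose(results):
    """`results` maps each check name to True (reply) or False (timeout).

    Returns the fault domain of the first failing check, or None if all
    pass. Stops at the first failure - later checks add no information
    once an earlier layer is broken.
    """
    for check, fault_domain in PING_SEQUENCE:
        if not results[check]:
            return fault_domain
    return None
```

The classic exam pattern falls out directly: every check passes except the hostname ping, so the diagnosis is DNS.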

Common Troubleshooting Mistakes

| Mistake | Why It's Wrong | Correct Approach |
| --- | --- | --- |
| Fix before identifying | Random changes waste time and can introduce new problems | Always identify and theorise before implementing - steps 1 and 2 come first |
| Multiple simultaneous changes | You won't know which change fixed the problem | One change at a time. Revert before trying the next thing if it doesn't work. |
| No backup before changes | A failed fix can make the problem worse | Back up data as part of step 4 whenever the fix carries data risk |
| Closing ticket without verifying | User may reproduce the issue minutes later in their real workflow | Have the user confirm resolution in their normal workflow before closing |
| Skipping documentation | No audit trail. Future technicians get no benefit from your work. | Document every time - even quick fixes. Step 6 is not optional. |
| Not escalating when stuck | Continued guessing wastes time and can cause additional damage | Escalate to tier 2 with all findings documented. Escalation is expected and professional. |

Change Management and Troubleshooting

In enterprise environments, step 4 intersects with change management. Any fix that changes a production system typically requires a change request approved by a change advisory board (CAB) before implementation. Exam scenarios that involve production systems often expect you to recognise that "implement the fix immediately" is wrong. The correct answer is: document the proposed change, get approval, schedule a maintenance window, then implement.

Exam Scenarios

A technician immediately replaces the PSU when a computer won't turn on. What troubleshooting step was violated? Steps 1 and 2 - the technician skipped information gathering and theory formation, jumping straight to step 5. Correct approach: check if the power cable is connected, verify the outlet works, and listen for POST sounds before replacing components.
A technician swaps the network cable and reinstalls the NIC driver simultaneously. The problem resolves. What is wrong with this approach? Violates step 3 - multiple variables changed simultaneously. The technician cannot determine whether the cable or the driver caused the issue. Correct: change one at a time, revert if it doesn't work before trying the next.
A technician has resolved a user's Outlook sync issue and the user confirms it works. What should happen next? Step 6 - Document findings, actions, and outcomes. Update the ticket with root cause, what was done, and user confirmation. Then close the ticket. Skipping this loses the knowledge gained.
A user can ping 8.8.8.8 successfully but cannot browse to any websites. What is the most likely cause? DNS failure - internet routing (Layer 3) works, proven by the successful IP ping. But DNS resolution is failing. Fix: check DNS server settings in the network adapter configuration, or configure 8.8.8.8 as the DNS server.
A technician identifies a critical security vulnerability in a production web server and wants to patch it immediately. What should be done first? Step 4 - Establish a plan of action and get change management approval. Even critical security issues require following change management: document, assess impact, get approval, plan rollback, schedule maintenance window. Emergency change processes still require documentation and approval.
After following all troubleshooting steps, a technician cannot resolve a hardware failure. What is the appropriate next action? Escalate to tier 2 or a hardware specialist, providing the full documented record of what was found and what was tried. Escalation is a defined and expected outcome of step 3 - not a failure.

Device-Specific Troubleshooting Patterns

The CompTIA A+ exam presents troubleshooting scenarios across multiple device categories. Here are the systematic approaches for the most commonly tested device types.

Desktop/laptop won't power on: Step 1 — verify the power source. Is the power cable plugged in? Is the outlet live (test with another device)? Is the power strip switched on? For laptops, try removing the battery and running on AC power only. For desktops, check whether the front panel power switch connector is properly seated on the motherboard header. If power LEDs light but the system doesn't POST, the fault is likely the PSU (under load), RAM, or CPU. POST beep codes (if audible) identify the component.

Display issues: A blank screen on a system that appears to be running should lead you to check: monitor cable connection (both ends), monitor power, correct input source selected on the monitor, and whether the monitor works with a different device. If the system has a discrete GPU, verify the monitor is plugged into the GPU's output (not the motherboard's video-out). A system that shows POST but fails during OS loading points to a software or storage issue, not a display failure. Screen flicker or a dim image on an LCD indicates a failing backlight or inverter; persistent bright or dark spots indicate stuck or dead pixels.

Audio issues: No sound from speakers — check: volume not muted in OS, correct playback device selected in sound settings, audio drivers installed, physical speaker connection. Check Device Manager for audio device errors. For USB audio devices, try a different USB port. Static or crackling audio can indicate a driver issue, electromagnetic interference from cables running near power lines, or a failing audio jack.

Slow system performance: Symptom-based approach. Slow only at startup: too many startup programs (check Task Manager → Startup tab). Slow all the time: check CPU and RAM utilization in Task Manager — sustained 100% CPU or memory near capacity explains slowness. High disk usage: check for active antivirus scans, Windows Update activity, or a failing HDD. Thermal throttling: check CPU temperature using HWMonitor — temperatures above 90°C cause the CPU to reduce its speed to prevent damage.

Troubleshooting Connectivity — The OSI Layer Approach

For network troubleshooting specifically, the OSI model provides a structured framework for isolating where a problem exists. Working bottom-up from the physical layer to the application layer — or top-down from application to physical — prevents jumping to conclusions and ensures no layer is skipped.

Layer 1 (Physical): Check cable connections, link lights on switches and NICs, cable damage, and whether the correct port is being used. A cable plugged into the wrong port on a switch is a Layer 1 problem. Commands: ipconfig (Windows reports "Media disconnected" for a down interface) or ifconfig / ip link (Linux) to check interface status.

Layer 2 (Data Link): Check MAC address table on the switch, VLAN assignment, and duplex/speed settings. A mismatch between the NIC (auto-negotiate) and a switch port forced to 100Mbps full-duplex causes persistent errors and poor performance. Commands: show mac address-table, show interfaces on Cisco switches.

Layer 3 (Network): Verify IP address, subnet mask, and default gateway are correct. Use ping to the default gateway to test local IP connectivity. If the gateway ping succeeds but external pings fail, the problem is beyond the local network. Use tracert / traceroute to identify where packets stop being forwarded. Commands: ipconfig /all, ping, tracert, route print.

Layer 4–7 (Transport through Application): If Layer 3 works (successful pings) but applications fail, the problem is higher up. Can you resolve DNS names (nslookup google.com)? Is the service running on the destination port (telnet servername 443 or Test-NetConnection servername -Port 443)? Is a firewall blocking the specific port? Is the application service stopped on the server? The combination of ping, nslookup, and port connectivity tests quickly identifies Layer 4–7 failures.
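
The port connectivity test is easy to reproduce with Python's standard library: a completed TCP connect proves Layers 1 through 4 end to end, while a refusal or timeout points at the service, the port, or a firewall. A minimal sketch, similar in spirit to Test-NetConnection:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    A successful connect means routing AND the transport-layer port both
    work; combined with a successful ping, a failure here isolates the
    problem to Layer 4-7 (service down, wrong port, or firewall block).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # refused, timed out, or unreachable
        return False
```

For example, `port_open("servername", 443)` checks whether anything is accepting HTTPS connections on that host.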

Effective Use of Diagnostic Tools

Knowing the troubleshooting methodology is only useful if you know which tools to apply at each step. The A+ and Network+ exams test specific command-line tools — knowing not just what they do but when to use them is critical.

| Tool | What It Does | When to Use It |
| --- | --- | --- |
| ping | Sends ICMP echo requests to test reachability. Measures round-trip latency. | Verify IP connectivity to a host or gateway. First-line test for Layer 3 reachability. |
| tracert / traceroute | Traces the path packets take through the network, showing each router hop and latency at each hop. | Identify where in the network path connectivity fails or where latency is introduced. |
| ipconfig / ifconfig | Shows IP address, subnet mask, gateway, DNS, and interface status on Windows/Linux. | Verify IP configuration is correct; check if DHCP assigned an APIPA address (169.254.x.x indicates DHCP failure). |
| nslookup / dig | Queries DNS to resolve hostnames to IP addresses. Tests DNS server functionality. | Diagnose DNS resolution failures. Test whether a specific DNS server is responding correctly. |
| netstat | Shows active network connections, listening ports, and associated processes. | Verify a service is listening on the expected port; check for unexpected outbound connections (malware indicator). |
| arp | Shows the ARP cache — IP-to-MAC address mappings cached locally. | Diagnose Layer 2/3 boundary issues; detect ARP poisoning; verify gateway MAC is correct. |
| nmap | Port scanner — discovers open ports and running services on remote hosts. | Security assessments; verify firewall rules are working; identify running services on a server. |
| Wireshark | Packet capture and protocol analysis tool. Captures all traffic on an interface for detailed inspection. | Deep protocol-level troubleshooting; analyze handshakes; verify packet contents; diagnose application protocol issues. |
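
The APIPA check in the ipconfig row is simple to automate: Python's stdlib ipaddress module can test whether an address falls in the 169.254.0.0/16 link-local range. A minimal sketch (the function name is ours):

```python
import ipaddress

# The APIPA / link-local range that Windows self-assigns from when no
# DHCP server responds.
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def dhcp_likely_failed(addr: str) -> bool:
    """True if `addr` is an APIPA address (169.254.x.x).

    Seeing one in ipconfig output usually means the client never got a
    DHCP lease - check the DHCP server, VLAN, and physical link.
    """
    return ipaddress.ip_address(addr) in APIPA_NET
```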

Troubleshooting Printers — A+ Specific Scenarios

Printer troubleshooting is heavily tested on the CompTIA A+ exam because printers have multiple failure modes across all layers — physical, network, driver, and application. Applying the methodology systematically prevents the common mistake of reinstalling drivers for what turns out to be a paper jam.

Printer not printing at all: Start with Layer 1 — is it powered on, online, connected? Check the control panel for error codes. Is there paper? Is a toner/ink cartridge empty or improperly seated? Move to Layer 2/3 — is the printer visible on the network? Can you ping its IP? Is the IP address correct and not conflicting with another device? Move to Layer 7 — is the correct driver installed? Is the correct printer selected as default? Is there a job stuck in the print queue that's blocking subsequent jobs?

Print quality issues are almost always Layer 1/physical: streaks or lines on laser printer output indicate a scratched or worn drum or dirty fuser roller. Faded output indicates low toner. Smearing or unfused toner indicates a failed fuser. Ghosting (faint duplicate images on the page) indicates a worn drum that isn't fully cleaning between passes. On inkjet printers, banding or missing colors indicate clogged print heads — run the printer's head cleaning cycle and nozzle check print.

Studying for CompTIA A+?

The troubleshooting methodology appears on both 220-1201 and 220-1202. See the best A+ resources.

IT Study Hub Editorial Team
CompTIA A+ · Network+ · Security+

Our content is written and reviewed by IT professionals holding active CompTIA certifications. Every article is grounded in current exam objectives and cross-checked against official CompTIA documentation and authoritative primary sources. About us →
