- Fixed pricing on recovery (You know what you are paying - no nasty surprises).
- Quick recovery turnaround at no extra cost. (Our average recovery time is 2 days).
- Memory card chip reading services (1st in the UK to offer this service).
- RAID recovery service (a specialist service for our business customers who have suffered a failed server rebuild).
- Our offices are 100% UK-based and we never outsource any recovery work.
- Strict non-disclosure: privacy and security are 100% guaranteed.
Case Studies
We have helped a wide range of clients over the last 15 years, who have lost their data for all manner of reasons. As a result we have built up a broad knowledge of data recovery problems, as the case studies below show.
Case Study 1 — WD Elements (3.5″ HDD) Clicking / Progressive Read Failures
Issue
A WD Elements external drive began emitting repetitive seek-reset clicks yet initially remained readable. Over several weeks, error frequency increased until the volume intermittently dropped offline. A local repair shop advised a “mechanical fault” and referred the client to us.
Technical Assessment
- SMART on arrival (read-only): pending and uncorrectable sector counts rising; reallocation events present; repeated command timeouts (a read-only triage sketch follows this list).
- Acoustic signature: periodic “click–spin–click” consistent with failed head initialization / servo loss.
- Surface risk: continued power-on while clicking likely caused secondary media abrasion and debris.
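The sketch below shows what that kind of read-only SMART triage can look like, assuming a Linux host with smartmontools installed. The device path is hypothetical, the script only reads the attribute table, and it is an illustration rather than our lab tooling.

```python
import subprocess

DEVICE = "/dev/sdX"   # hypothetical device path -- the patient drive, never the system disk

# SMART attributes most relevant to media-health triage:
# 5 = Reallocated Sectors, 197 = Current Pending Sectors, 198 = Offline Uncorrectable.
WATCHED = {"5", "197", "198"}

def smart_triage(device: str) -> None:
    """Print the watched rows of the SMART attribute table; 'smartctl -A' only reads."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] in WATCHED:
            print(line.strip())

if __name__ == "__main__":
    smart_triage(DEVICE)
```

Rising raw values for attributes 197 and 198, combined with reallocation events, are the pattern seen here; the key point is that the check issues no writes to the failing drive.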
Recovery Procedure
- Write-block intake & imaging triage
  - Captured ID, ROM, and adaptive data; prevented any host writes.
  - Head-map test showed two weak heads with immediate seek-error escalation on inner tracks.
- Mechanical remediation
  - Opened the HDA under controlled lab conditions; observed head-crash debris and concentric media scoring on inner zones.
  - Performed a head-stack assembly (HSA) replacement using a matched donor (same model, micro-jog family, preamp revision).
  - Transferred native ROM/adaptives to ensure proper servo alignment and defect-list compatibility.
- Service Area (SA) & translator checks
  - Read SA modules (P-list/G-list, adaptives) and rebuilt the translator to restore stable LBA addressing.
- Targeted imaging strategy (PC-3000 / DeepSpar)
  - Per-head, zone-aware imaging: read large outer-zone spans first to maximize good yield early (the pass-planning sketch after this procedure illustrates the idea).
  - Aggressive soft-reset window tuning, reverse LBA passes, and head-swap bypass rules for the two weakest heads.
  - Created an error map and ran multiple low-duty retry passes only where file metadata resided.
- Result & data validation
  - Achieved ~89% sector imaging; non-recoverable regions aligned with visibly scored tracks.
  - Reconstructed the NTFS volume; verified directory trees and opened sample client files (project folders, photos).
  - Delivered recovered content to a client-supplied external drive with hash manifests (per-file verification).
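Purely to illustrate the pass-planning idea above (easy outer zones first, the hardest retries only where metadata lives), here is a simplified sketch. Real hardware imagers such as PC-3000 and DeepSpar manage heads, resets and timeouts themselves; every number, region and name in this example is hypothetical.

```python
# Illustrative pass planner (hypothetical, simplified): it only shows the ordering
# idea -- grab the easy outer-zone yield first, then retry hard where metadata sits.
from dataclasses import dataclass

@dataclass
class Pass:
    start_lba: int
    end_lba: int
    retries: int
    note: str

def plan_passes(total_lbas: int, metadata_regions: list[tuple[int, int]],
                outer_fraction: float = 0.6) -> list[Pass]:
    """Return imaging passes: bulk outer-zone read, then metadata-focused retries,
    then a single low-duty sweep of whatever remains."""
    outer_end = int(total_lbas * outer_fraction)
    passes = [Pass(0, outer_end, retries=0, note="outer zones, no retries, easy yield")]
    for start, end in metadata_regions:   # e.g. filesystem metadata areas (hypothetical)
        passes.append(Pass(start, end, retries=8, note="metadata region, aggressive retries"))
    passes.append(Pass(outer_end, total_lbas, retries=1, note="remaining inner zones, low duty"))
    return passes

if __name__ == "__main__":
    for p in plan_passes(total_lbas=7_814_037_168,          # ~4 TB drive, for illustration
                         metadata_regions=[(12_000_000, 12_500_000)]):
        print(p)
```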
Note: Clicking = heads repeatedly failing to acquire servo; continued power can turn a fixable head issue into permanent platter damage. Early imaging dramatically improves outcome.
Case Study 2 — Dropped WD NAS (2×6 TB) in RAID 0 with On-Box Encryption
Issue
A two-bay WD NAS configured as RAID 0 (striped) was dropped during an office move. On power-up the chassis LEDs illuminated, but the unit failed to boot; faint internal clicking was audible. The array was configured with NAS-level encryption (on-box).
Technical Assessment
- Both members exhibited mechanical symptoms (clicking, intermittent ID) indicative of head damage from shock.
- Given RAID 0, 100% (or near-full) images of both members are required; any unreadable stripe segment corrupts the corresponding file regions (the mapping sketch after this list shows why).
- The encryption layer was likely implemented as LUKS/dm-crypt or vendor on-box encryption tied to the NAS OS/keys.
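To make that stripe dependency concrete, here is a minimal sketch of how RAID 0 maps a byte offset onto its members. The 64 KiB stripe size is assumed for illustration only; the point is that every stripe-sized chunk lives on exactly one disk, so an unreadable region on either member punches holes in the combined volume.

```python
STRIPE = 64 * 1024   # assumed stripe size for illustration; the real value was derived later
MEMBERS = 2          # two-bay NAS configured as RAID 0

def locate(virtual_offset: int) -> tuple[int, int]:
    """Map a byte offset in the striped volume to (member index, offset on that member)."""
    stripe_no = virtual_offset // STRIPE        # which stripe-sized chunk we are in
    member = stripe_no % MEMBERS                # chunks alternate disk 0, disk 1, disk 0, ...
    member_stripe = stripe_no // MEMBERS        # how far down that member the chunk sits
    return member, member_stripe * STRIPE + (virtual_offset % STRIPE)

# A file spanning several stripes touches both disks, so a bad region on either
# member corrupts the file even if the other disk images perfectly.
for off in (0, 64 * 1024, 128 * 1024, 1_000_000):
    print(off, "->", locate(off))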
Recovery Procedure
- Safe disassembly & intake
  - Instructed the client to remove the two disks from the NAS and ship the drives only (no further power-ons).
  - Logged serials/slot order; preserved controller metadata where present.
- Mechanical remediation (both disks)
  - Each HDD opened under controlled conditions; head-stack replacements performed with matched donors.
  - Verified SA modules and translators; transferred ROM/adaptives to the donors.
- Imaging (member-wise)
  - Hardware-assisted cloning with per-head maps, low duty cycles, and adaptive timeouts.
  - Achieved full LBA images of both members (minor re-reads resolved with targeted passes).
- Array reconstruction
  - Determined stripe size, member order, and start offsets heuristically (filesystem anchor alignment and entropy boundaries).
  - Built a read-only virtual RAID 0 from the two images; validated it against filesystem structures.
- Encryption handling
  - Identified the NAS encryption container (LUKS header/dm-crypt or vendor equivalent).
  - Using the client’s credentials/keys (and/or the extracted NAS keystore), unlocked the plaintext volume and mounted the data set (see the unlock sketch after this procedure).
- Extraction & verification
  - Exported the shares to a 10 TB external drive; verified with path-level spot checks and per-directory hash manifests.
  - Turnaround: 4 business days end-to-end.
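As a rough illustration of the unlock step, and assuming the reconstructed image turned out to be a standard LUKS container (vendor on-box schemes differ), the sketch below attaches the image read-only, opens it with the client-supplied passphrase, and mounts the plaintext volume read-only. Paths, names and passphrase handling are hypothetical; the original drives are never touched.

```python
import subprocess

IMAGE = "raid0_reconstructed.img"            # destriped image built from the two member clones (hypothetical path)
PASSPHRASE = b"client-supplied-passphrase"   # obtained lawfully from the client

def run(*cmd, **kw):
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True, **kw)

# 1. Attach the image as a read-only loop device.
loop = subprocess.run(["losetup", "--find", "--show", "--read-only", IMAGE],
                      check=True, capture_output=True, text=True).stdout.strip()

# 2. Open the LUKS container read-only, taking the passphrase on stdin ('--key-file -').
run("cryptsetup", "open", "--readonly", "--key-file", "-", loop, "recovered_crypt",
    input=PASSPHRASE)

# 3. Mount the plaintext volume read-only for extraction (requires root throughout).
run("mount", "-o", "ro", "/dev/mapper/recovered_crypt", "/mnt/recovered")
```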
Note: “Platter swaps” are almost never indicated for drop damage; properly executed HSA (head) swaps are the correct remedial action in the vast majority of shock-induced clicking failures.
Case Study 3 — QNAP: Reset Array Containing Hyper-V VHDX Files (Virtual File Server)
Issue
A QNAP with 4 drives hosted a file server via Hyper-V VMs (VHDX). An ex-employee performed a factory reset, leaving the system non-functional. QNAP support could not recover the data. The appliance contained five virtual systems critical to a large print company’s operations.
Technical Assessment
- Factory resets on QNAP appliances commonly rewrite array metadata and OS volumes, but the user data blocks can remain intact.
- Typical QNAP stack: mdadm (Linux MD RAID) → optional LVM (thick/thin) → EXT4/Btrfs. Hyper-V workloads appear as VHDX files (on an SMB share) or as iSCSI LUNs presented to Windows (a layout-inspection sketch follows this list).
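A minimal sketch of inspecting that stack, assuming the four clones have already been attached as loop devices (the device paths are hypothetical): mdadm --examine only reads each member's superblock, so nothing is written to the clones.

```python
import subprocess

# Hypothetical: the four forensic clones attached as read-only loop devices.
CLONES = ["/dev/loop0", "/dev/loop1", "/dev/loop2", "/dev/loop3"]

# Fields of interest in the MD superblock dump.
KEYS = ("Raid Level", "Raid Devices", "Chunk Size", "Array UUID", "Device Role")

for dev in CLONES:
    # --examine reads the member's superblock only; it never assembles or writes.
    result = subprocess.run(["mdadm", "--examine", dev],
                            capture_output=True, text=True, check=False)
    print(f"== {dev} ==")
    for line in result.stdout.splitlines():
        if any(line.strip().startswith(k) for k in KEYS):
            print("  " + line.strip())
```

If residual superblocks survive the reset, the level, device count, chunk size and device roles recovered this way seed the off-appliance reassembly described below.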
Recovery Procedure
- Drive-first approach
  - Cloned all four disks with hardware imagers (error maps preserved). No writes to the originals.
  - Recorded residual MD superblocks and partitioning to infer the prior layout.
- Array reassembly (off-appliance)
  - Forced mdadm assembly of the prior RAID set from the clones (derived level/order/chunk).
  - Activated LVM volume groups where present; mapped logical volumes without altering metadata.
- Filesystem recovery
  - For EXT4/Btrfs: mounted read-only; where the reset had damaged the journal/trees, used fsck (EXT4) or btrfs restore to export files even when the volume could not mount cleanly.
  - Located the VHDX artifacts either as files on the share or as block-backed LUNs (thin-pool volumes).
- VHDX integrity repair
  - Validated VHDX headers/footers and the BAT (Block Allocation Table); a header-check sketch follows this procedure.
  - Replayed the VHDX log; where BAT damage existed, rebuilt block maps from extents and carved contiguous regions.
  - Mounted each VHDX as a virtual disk; repaired the guest NTFS where needed via safe, image-based chkdsk / metadata reconstruction.
- Verification & delivery
  - Brought all five VMs up to a bootable state where possible; where not, extracted the complete file systems from within the VHDX.
  - Delivered recovered VM directories and/or exported data with SHA-256 manifests and a brief incident report (root cause, corrective actions).
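For the curious, the sketch below shows the first of those sanity checks in miniature, based on the published VHDX on-disk layout: the "vhdxfile" file identifier at offset 0 and the two "head" header copies at 64 KiB and 128 KiB. The file path is hypothetical and there is no checksum, log or BAT validation here; it is an illustration, not a repair tool.

```python
import struct

VHDX_PATH = "fileserver1.vhdx"   # hypothetical path to a recovered VHDX

def check_vhdx(path: str) -> None:
    """Read-only sanity check of the VHDX file identifier and the two header copies."""
    with open(path, "rb") as f:
        # File identifier region at offset 0 starts with the ASCII signature 'vhdxfile'.
        ident = f.read(8)
        print("file identifier:", "OK" if ident == b"vhdxfile" else f"BAD ({ident!r})")

        # Two header copies live at 64 KiB and 128 KiB; each starts with 'head',
        # then a CRC32C checksum and a sequence number (the valid copy with the
        # higher sequence number is the current one).
        for n, offset in enumerate((64 * 1024, 128 * 1024), start=1):
            f.seek(offset)
            sig, _crc, seq = struct.unpack("<4sIQ", f.read(16))
            status = "OK" if sig == b"head" else f"BAD ({sig!r})"
            print(f"header {n} @ {offset:#x}: {status}, sequence number {seq}")

if __name__ == "__main__":
    check_vhdx(VHDX_PATH)
```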
Outcome
- Full logical recovery of the VM datasets; client operations restored.
- Recommendations delivered: disable self-service reset permissions, implement off-box backups (VM-consistent), and document encryption key custody.
Final Notes
- We image first (forensic clones) and do all repair/rebuild work on copies, protecting evidence and enabling multiple strategies without risk (see the imaging sketch after these notes).
- Where encryption is present (NAS on-box or SED SSD), lawful keys/credentials are required; once unlocked, recovery proceeds normally.
- Early escalation saves data: powering a clicking drive or forcing rebuilds on damaged members typically reduces recoverability.
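To illustrate the image-first principle, the sketch below drives GNU ddrescue from Python: a quick first pass that skips around errors, then a retry pass over the bad areas, with a mapfile so work can be resumed safely. The device and file paths are hypothetical, and a genuinely clicking drive belongs on a hardware imager rather than in ddrescue.

```python
import subprocess

SOURCE = "/dev/sdX"        # patient drive (hypothetical device path)
IMAGE = "patient.img"      # destination image on known-good storage
MAPFILE = "patient.map"    # ddrescue's record of good/bad regions; enables safe resume

# Pass 1: copy everything that reads easily, skipping around errors (-n = no scraping).
subprocess.run(["ddrescue", "-d", "-n", SOURCE, IMAGE, MAPFILE], check=True)

# Pass 2: return to the regions marked bad in the mapfile and retry them a few times.
subprocess.run(["ddrescue", "-d", "-r3", SOURCE, IMAGE, MAPFILE], check=True)
```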