Wipe Hard Disk of Hetzner Root Server

Recently I migrated my stuff to another Hetzner Root Server. But how do you handle the data that might still be lingering on the hard disks before returning the old server to Hetzner? Simply deleting filesystems and reformatting is not sufficient these days.
In this little article I’m going to describe the steps I took to securely hand my old server back to Hetzner.

Be prepared

Before wiping the hard disks of a server, you should be absolutely sure that you have either backed up everything you might need later on or transferred all data to its new location. In my case I ordered a new root server at Hetzner and transferred all necessary data from the old server to the new one over SSH.

In case you have to transfer a large number of (small) files, it might be useful to create a tar archive of your data first, as transferring one big file is much more efficient than pushing hundreds of thousands of individual files through your SSH connection. On the other hand, I have not measured whether there is a real time advantage, since creating the tar archive takes its time too. A sketch of how this could look is shown below.
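If you go that route, a minimal sketch could look like the lines below. The paths /srv and /home, the hostname new-server and the archive name are placeholders for illustration only, not what I actually used:

tar czf /root/data-backup.tar.gz /srv /home
scp /root/data-backup.tar.gz root@new-server:/root/

Alternatively, you can stream the archive directly over SSH without writing a temporary file on the old server:

tar czf - /srv /home | ssh root@new-server 'cat > /root/data-backup.tar.gz'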

Hetzner Rescue System

If all data is safe, it’s time to wipe the hard disks. I used Hetzner’s Rescue System for this.

Booting Into the Rescue System

To boot your server into the rescue system, log in to the Hetzner Robot. In the server’s details you’ll find the Rescue tab. Select the operating system and an SSH key, if you have one. After you’ve activated the rescue system in Robot, you have to manually reboot your server within 60 minutes.

Log in to your server with the provided credentials or your SSH key.
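Assuming the rescue system is up, this is an ordinary SSH connection; the address below is just a placeholder for your server’s IP as shown in Robot:

ssh root@<server-ip>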

Preparing Local Filesystem

Most likely, your server uses LVM to manage its storage, and LVM is still active in the rescue system. Accordingly, the LVM setup needs to be dismantled before the disks can be wiped. To achieve this you have to:

  • Remove all logical volumes using lvremove /path/to/lv
  • Remove all volume groups using vgremove vgName
  • Remove all physical volumes using pvremove /path/to/pv

To determine the paths and names in your LVM setup, you can use the commands lvs or lvdisplay for your existing logical volumes, vgs or vgdisplay for your existing volume groups, and pvs or pvdisplay for your physical volumes.
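As a side note, instead of removing each logical volume one by one as I do in the session below, lvremove also accepts a volume group name and will then remove all logical volumes in that group, asking for confirmation for each one. vg0 is simply the name of the volume group in my setup:

lvremove vg0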

root@rescue ~ # lvs
  LV                VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker_containers vg0 -wi-a----- 450.00g
  docker_images     vg0 -wi-a-----  10.00g
  docker_overlay    vg0 -wi-a-----  50.00g
  docker_volumes    vg0 -wi-a-----  50.00g
  home              vg0 -wi-a-----  20.00g
  root              vg0 -wi-a-----  20.00g
  srv               vg0 -wi-a----- 400.00g
  swap              vg0 -wi-a-----   4.00g
root@rescue ~ # lvremove /dev/vg0/swap
Do you really want to remove active logical volume vg0/swap? [y/n]: y
  Logical volume "swap" successfully removed.
root@rescue ~ # lvremove /dev/vg0/srv
Do you really want to remove active logical volume vg0/srv? [y/n]: y
  Logical volume "srv" successfully removed.
root@rescue ~ # lvremove /dev/vg0/docker_containers
Do you really want to remove active logical volume vg0/docker_containers? [y/n]: y
  Logical volume "docker_containers" successfully removed.
root@rescue ~ # lvremove /dev/vg0/docker_images
Do you really want to remove active logical volume vg0/docker_images? [y/n]: y
  Logical volume "docker_images" successfully removed.
root@rescue ~ # lvremove /dev/vg0/docker_overlay
Do you really want to remove active logical volume vg0/docker_overlay? [y/n]: y
  Logical volume "docker_overlay" successfully removed.
root@rescue ~ # lvremove /dev/vg0/docker_volumes
Do you really want to remove active logical volume vg0/docker_volumes? [y/n]: y
  Logical volume "docker_volumes" successfully removed.
root@rescue ~ # lvremove /dev/vg0/home /dev/vg0/root
Do you really want to remove active logical volume vg0/root? [y/n]: y
  Logical volume "root" successfully removed.
Do you really want to remove active logical volume vg0/home? [y/n]: y
  Logical volume "home" successfully removed.
root@rescue ~ # lvs
root@rescue ~ # vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg0   1   0   0 wz--n- <1.82t <1.82t
root@rescue ~ # vgremove vg0
  Volume group "vg0" successfully removed
root@rescue ~ # vgs
root@rescue ~ # pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/md1      lvm2 ---  <1.82t <1.82t
root@rescue ~ # pvremove /dev/md1
  Labels on physical volume "/dev/md1" successfully wiped.

After you have removed everything LVM-related, you should also stop the software RAID:

root@rescue ~ # mdadm --stop /dev/md1
mdadm: stopped /dev/md1
root@rescue ~ # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
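If you are unsure which md arrays exist, or want to verify that none are left active after stopping them, you can have a look at /proc/mdstat:

cat /proc/mdstat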

Wipe, baby, wipe

After all preparation steps have been completed, it’s time to wipe the disks using shred, which is preinstalled in the rescue system.

root@rescue ~ # shred --help
Usage: shred [OPTION]... FILE...
Overwrite the specified FILE(s) repeatedly, in order to make it harder
for even very expensive hardware probing to recover the data.

If FILE is -, shred standard output.

Mandatory arguments to long options are mandatory for short options too.
  -f, --force    change permissions to allow writing if necessary
  -n, --iterations=N  overwrite N times instead of the default (3)
      --random-source=FILE  get random bytes from FILE
  -s, --size=N   shred this many bytes (suffixes like K, M, G accepted)
  -u             deallocate and remove file after overwriting
      --remove[=HOW]  like -u but give control on HOW to delete;  See below
  -v, --verbose  show progress
  -x, --exact    do not round file sizes up to the next full block;
                   this is the default for non-regular files
  -z, --zero     add a final overwrite with zeros to hide shredding
      --help        display this help and exit
      --version     output version information and exit

To make sure the wipe is not interrupted by a dropped SSH connection, you should consider using screen or tmux to run it. I decided to use tmux, which first needs to be installed in the rescue system (apt install tmux).
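A minimal tmux workflow could look like this: start a named session, run shred inside it, detach with Ctrl-b d, and reattach later. The session name wipe is just an arbitrary label:

tmux new -s wipe
tmux attach -t wipe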

To determine the block device paths of your hard disks, you can use the command fdisk -l:

root@rescue:~ # fdisk -l
Disk /dev/sda: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000NM0245-1Z2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 95A492FA-61DC-40EA-B142-3DD13C4B2504

Device       Start        End    Sectors  Size Type
/dev/sda1     4096    2101247    2097152    1G Linux RAID
/dev/sda2  2101248 7814037134 7811935887  3.6T Linux RAID
/dev/sda3     2048       4095       2048    1M BIOS boot

Partition table entries are not in disk order.


Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000NM0245-1Z2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8F3ED0D5-3B0D-418B-AA47-53044E37E58A

Device       Start        End    Sectors  Size Type
/dev/sdb1     4096    2101247    2097152    1G Linux RAID
/dev/sdb2  2101248 7814037134 7811935887  3.6T Linux RAID
/dev/sdb3     2048       4095       2048    1M BIOS boot

As we want to wipe the whole disks and not specific partitions, we use /dev/sda and /dev/sdb, as these are the two hard disks in my system.

To wipe /dev/sda, execute:

root@rescue:~ # shred /dev/sda --force --verbose --zero
shred: /dev/sda: pass 1/4 (random)...300MiB/1.9TiB 0%
shred: /dev/sda: pass 1/4 (random)...970MiB/1.9TiB 0%
...
shred: /dev/sda: pass 1/4 (random)...343GiB/1.9TiB 18%
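Don’t forget the second disk: the same command has to be run for /dev/sdb as well. Since the wipes are independent of each other, you could run them in parallel, for example in a second tmux window:

shred /dev/sdb --force --verbose --zero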

By default, shred will overwrite the disk 3 times (the --zero option adds a final pass of zeros on top of that). If you want fewer or more iterations, use the argument -n N, where N is the number of overwrite passes you want.
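For example, a single random pass followed by the final zeroing pass, if that is enough for your threat model, would look like this:

shred --force --verbose --zero -n 1 /dev/sda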

After shred has finished its job (for all disks!), it is safe to return the server to Hetzner.
