Monday, November 21, 2016

Create & Restore Linux System Data Using Dump Command

I was searching for a simple, cost-effective backup solution for Linux that could handle block-level backups and serve for disaster recovery as well. Nowadays there are plenty of options, but I wanted to try a simple yet native Linux solution, and so I came across the "dump & restore" utilities. This may not be an ideal solution for a larger network, but one could leverage it on a smaller network where there is no tight schedule on recovery. So, I thought of creating a step-by-step document on it.

A complete system can be backed up and restored using the native Linux commands "dump" and "restore", which facilitates both routine backup and disaster recovery.

What are “dump” & “restore” commands?

The "dump" command captures the data of a file system, which can later be restored using the "restore" command.

As per the man page of dump command:

Dump examines files on an ext2/3/4 file-system and determines which files need to be backed up. These files are copied to the given disk, tape or other storage medium for safe keeping.

The restore command restores the files copied by the dump command.

Some points: since dump only records and restores data, one must take care of the disk layout and the corresponding UUIDs or labels, and re-create them manually (in case of disaster recovery). This is shown later in this document.
Build Environment:-

I'm using a VM (Virtual Machine) for this test and documentation purpose. The VM is installed with RHEL 6.8 running in Workstation 11.
It has two hard drives (sda & sdb). The root file system is installed on /dev/sda2, boot is on /dev/sda1, and /dev/sda3 is a swap partition, as shown below:

The other partition, /dev/sdb1, is used to store the disk dumps (backup data).
Some system details of the VM:-

“/etc/fstab” details:-


Here, I'll take disk-level backups of the partitions /dev/sda{1..3} (using the dump command), on which my root file system resides, and store the dumps on the /dev/sdb1 partition. Then I'll destroy the partition table of /dev/sda; at this stage the system will fail to boot, since there is no partition table (no MBR data). Finally, I'll restore the dumped data after creating proper partitions on /dev/sda.

Points To Consider Before Proceeding:-

- Before destroying the partition table (which makes the system unbootable), back up the partition details, /etc/fstab, the blkid output, and any other details as required.
- It is also good to run "sosreport" (on RHEL variants) and save the complete system configuration to an external device.
- The partition-wise layout details are needed to re-create the partition table during restore.
- Also, make a note of each file system's UUID or label, whichever is relevant.
- In the case of LVM-based file systems, back up /etc/lvm/archive, which can be used to re-create the required PVs, VGs, and LVs.
A snap of "fdisk -cul /dev/sda" is shown below:-

A list of the associated file system UUIDs:-
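Since the screenshots are not reproduced here, the same details can be captured to files with a few commands. A minimal sketch, wrapped in a function so nothing runs until it is invoked as root (/backup-data is where /dev/sdb1 is mounted in this setup):

```shell
# Sketch: save everything needed to rebuild /dev/sda later.
record_layout() {
  sfdisk -d /dev/sda > /backup-data/sda.parts   # re-loadable partition table dump
  fdisk -cul /dev/sda > /backup-data/sda.fdisk  # human-readable sector layout
  blkid > /backup-data/blkid.txt                # file system UUIDs and labels
  cp /etc/fstab /backup-data/fstab.bak
}
```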

That being said, let’s start executing the plan.

Step 1: Dump partition wise data

Now, let's back up each partition's data using the "dump" command. In our setup, we only need to back up /dev/sda1 (boot file system) and /dev/sda2 (root file system); /dev/sda3 is a swap partition and hence can be skipped.
- Boot the system into single user mode.

- Use the "dump" command to back up data from the /dev/sda1 & /dev/sda2 partitions as shown in the screen image below (the dumps are stored on the block device /dev/sdb1):-
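The dump invocations behind that screenshot look roughly like this — a sketch assuming /dev/sdb1 is mounted at /backup-data (the dump file names match those referenced later in this document), wrapped in a function to be run as root in single-user mode:

```shell
# Sketch: level-0 dumps of the boot and root partitions.
backup_partitions() {
  # -0 = full (level 0) dump, -u = record the dump in /etc/dumpdates,
  # -f = file to write the dump to.
  dump -0u -f /backup-data/sda1.dump /dev/sda1   # /boot
  dump -0u -f /backup-data/sda2.dump /dev/sda2   # /
}
```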

The time taken to dump depends on the amount of data actually stored on the block device. In the above (test-setup) example, it took almost 5 minutes to back up /dev/sda2 (root file system).

The details of the backup are shown here:-

  • If the system has been running for a long time, it is advisable to run e2fsck on the partitions before backup.
  • "dump" should not be used on a heavily loaded, mounted file system, as it could back up a corrupted version of files. This problem is mentioned in the Red Hat manual.
If required, the backup can be stored on a remote system as shown below. In general, backup files should of course be stored in a different location:

# dump -0u -f - /dev/sda1 | ssh root@remoteserver dd of=/tmp/sda1.dump
- Now, let's delete the partition table of the /dev/sda device, which makes the system non-bootable:-
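The screenshot of this step is not reproduced; one way to achieve the same effect is to zero the first sector of the disk, which holds the MBR boot code and the partition table. A destructive sketch, shown as an uncalled function (dd used here as an illustration):

```shell
# Sketch: wipe the MBR (first 512 bytes) of /dev/sda. DESTRUCTIVE -
# this is exactly what leaves the system unbootable.
wipe_partition_table() {
  dd if=/dev/zero of=/dev/sda bs=512 count=1
}
```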

- After a reboot, the system failed to boot and reported the error "Operating system not found".

Step 2: Restore Process

Since the system failed to boot, I booted it from a rescue image; when the rescue environment tried to identify Linux partitions, it failed to detect any (as shown below):-

Deleting the partition details nullified the MBR (Master Boot Record), hence the above error.

The first step in restoring the system is to create the necessary disk layout. So, let's create the partition layout on hard drive "sda" in rescue mode, exactly as it was before.

- Creating /dev/sda1 partition:-

We need to create the first partition, which holds the boot file system; this requires the original starting and ending sectors, as shown (re-creating the exact layout as before):-

In the same way, I created the /dev/sda2 partition (root file system) and changed the partition ID of /dev/sda3 to swap.
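If a dump of the original table was saved beforehand with "sfdisk -d" (the file name below is hypothetical), the whole table can be replayed in one step instead of typing sector numbers into fdisk interactively. A sketch for the rescue environment:

```shell
# Sketch: rebuild the partition table from a saved sfdisk dump.
recreate_partitions() {
  sfdisk /dev/sda < /backup-data/sda.parts   # replay the saved table
  partprobe /dev/sda                         # ask the kernel to re-read it
}
```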

Then I created file systems on the newly created partitions. Since /dev/sda3 is used for swap, we need to run "mkswap" with the same UUID as before, as shown below:-

Likewise, the UUIDs of /dev/sda1 and /dev/sda2 need to be changed back to their original values by referring to the backup data we had taken earlier. The "tune2fs" command can be used for this.
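Put together, the file-system re-creation step looks roughly like this. The UUID values are placeholders; substitute the real ones from the saved blkid output:

```shell
# Placeholder UUIDs - replace with the values recorded before the wipe.
BOOT_UUID="old-boot-uuid"
ROOT_UUID="old-root-uuid"
SWAP_UUID="old-swap-uuid"

# Sketch: recreate file systems and restore the original UUIDs.
restore_filesystems() {
  mkfs.ext4 /dev/sda1
  mkfs.ext4 /dev/sda2
  mkswap -U "$SWAP_UUID" /dev/sda3    # mkswap can set a UUID directly
  tune2fs -U "$BOOT_UUID" /dev/sda1   # tune2fs -U rewrites an ext UUID
  tune2fs -U "$ROOT_UUID" /dev/sda2
}
```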
- The next step is to toggle the boot flag and mark /dev/sda1 as the boot partition.

NOTE:- If the data is not corrupted, there is a good chance of recovering without running mkfs.ext4 on each partition (boot & root). One could run the "setup" command, as explained below, on the boot device/partition and try booting the system to check if it comes up. If the damage was only to the partition table, the system should boot up; if not, create the file systems and then restore each block device from backup.
- Now, let's mount /dev/sda1 on /temp1, /dev/sda2 on /temp2, and /dev/sdb1 on /backup-data. Check whether there is any data in /temp1 and /temp2; there should not be, since these are new file systems.
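In the rescue environment, this mounting step can be sketched as:

```shell
# Sketch: mount the new file systems and the backup device.
mount_for_restore() {
  mkdir -p /temp1 /temp2 /backup-data
  mount /dev/sda1 /temp1        # new (empty) boot file system
  mount /dev/sda2 /temp2        # new (empty) root file system
  mount /dev/sdb1 /backup-data  # holds the dump files
}
```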

The files "sda1.dump" & "sda2.dump" under /backup-data are the backups of the /dev/sda1 and /dev/sda2 file systems, stored on device /dev/sdb1.
- Let's restore the data from the dumps using the "restore" command in the rescue environment.
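With the new file systems mounted on /temp1 and /temp2, the restore step looks roughly like this; "restore -r" rebuilds a full file system relative to the current directory, and -f names the dump file:

```shell
# Sketch: restore each dump into the corresponding empty file system.
restore_dumps() {
  cd /temp1 && restore -rf /backup-data/sda1.dump   # /boot
  cd /temp2 && restore -rf /backup-data/sda2.dump   # /
}
```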

- After this, the system still failed to boot, throwing the same error message as before: "Operating system not found". This happens because the MBR data is still missing on the primary hard drive.
- Mount the boot partition on a temporary mount point and check the files under the grub directory.

Run the "grub" command with the device map file under /temp1/grub/ as shown below, which drops into a grub prompt. Identify the root file system disk and run the "setup" command, which installs the missing MBR and the corresponding files there:
Command: “grub --device-map /temp1/grub/
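With the boot file system mounted on /temp1, the grub shell session looks roughly like this. A sketch: "device.map" is the standard GRUB legacy map file name assumed here, and (hd0,0) assumes the boot partition is the first partition of the first BIOS disk — adjust both to your layout.

```
# grub --device-map=/temp1/grub/device.map
grub> root (hd0,0)    # partition holding the grub stage files
grub> setup (hd0)     # installs stage1 into the MBR of the first disk
grub> quit
```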

After this, exit the shell and reboot the system. It should boot up fine. That's all!!!
NOTE:- If the underlying block devices are logical volumes (LVs), you need to create the physical volumes using the UUIDs referenced in the archives stored under /etc/lvm/archive, and likewise re-create the same volume group (VG) and LVs. That is why the /etc/lvm contents need to be backed up as well.
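For that LVM case, the recovery commands look roughly like this — a sketch in which the VG name, archive file name, PV UUID, and device are all placeholders to be read from your own /etc/lvm/archive backup:

```shell
# Placeholder values - take the real ones from the archived LVM metadata.
PV_UUID="old-pv-uuid"
VG_ARCHIVE="/etc/lvm/archive/myvg_00001.vg"

# Sketch: recreate PV/VG/LVs from archived metadata.
restore_lvm() {
  # Recreate the PV with its original UUID from the archived metadata:
  pvcreate --uuid "$PV_UUID" --restorefile "$VG_ARCHIVE" /dev/sda2
  # Restore the VG definition, then activate its LVs:
  vgcfgrestore -f "$VG_ARCHIVE" myvg
  vgchange -ay myvg
}
```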
