Set up dar backups for Debian or Ubuntu

There is no warranty of any kind on the below instructions or associated software. Use or misuse may result in loss of data, unauthorized access of data, and associated damage.

Revised and updated March 2012
Overview
The need for good backups
Basic installation
Some special backups
Making script changes after the fact
More about encryption
Remounting the USB disk
Additional warning when things go wrong
Running backups every 10 minutes
Backing up encrypted files
Terminal typing shortcuts and help
Resetting your password
Fixing boot problems
Restoring single folders or files
Restoring the entire system disk
Warning: as of 01/03/13, dar_static is not available on Debian/Ubuntu due to a bug in the encryption software. For now you will have to use plain dar instead of dar_static.

Overview

Bad things happen with computers. To deal with that, you need backups. This page explains how you can maintain a set of system backups on Linux. These backups can be of well-chosen ages, from recent to long ago. There are also alternatives in case you just want to back up your home folder, or a disk, instead of the entire system.

The instructions here are primarily intended for a Debian or Ubuntu Linux system with a single user who does not log in to root. If that does not describe your case, presumably you know enough about Linux to make the appropriate changes. For other flavors of Unix, it may not really be rocket science to make the appropriate changes.

The general instructions do not account for encrypted computers or encrypted home folders using ecryptfs and might not work properly for them. There is a special section that discusses issues in backing up encrypted folders.

Setting up the backups is covered in the sections below.

The need for good backups

Anyone with significant computer experience knows that bad things happen. You are likely to have seen one or more hard disk failures. Power spikes can kill them right away, but power failures and dips can wear them down quickly too. I learned my lesson many years ago; both my home and work computers are always on uninterruptible power supplies. But problems in the environment or manufacturing defects might still mean a bad hard disk. A new hard disk and a good backup can get you back to where you were.

And there are many other things that can go wrong. Software updates might break something on your computer. I have had a driver update of my DVD drive that disabled writing. (That was on a Windows machine, incidentally. It is not just Linux.) I only noticed it half a year later or so when I needed to write a DVD. There was no easy fix either. I ended up having to restore my system from a year-old backup and then my personal files from a recent one.

And one of the greatest dangers is the computer user. Over the many years that I have used computers, I have messed up important files many times. On my work computer, I have a mirror of my disk on the College of Engineering computer system. This mirror is a subfolder of my home folder, but it has the same name as the home folder itself. You might guess what is coming: about a week ago, wanting to temporarily delete the mirror, I deleted my entire home folder instead. Fortunately it took only 15 minutes to restore all 6 gigabytes of files from backup. Without a recent backup however, I would have been in major trouble. Aside from these stupidities, it is not uncommon for me to go looking for a file, not find it, and have no clue what happened to it. Being able to go back in history to earlier versions of a file is very helpful.

Now consider what can be called solid backups. Suppose you have a USB disk with a folder 1/ on it, and on Sundays you (or rather the system does it for you) copy the entire contents of your hard disk to that folder. If your hard disk crashes on Thursday, or you get a disabling virus, or you mess up your system hopelessly, or you lose just a single file, but an important one, you can restore your system, or just that one file, to the exact state it was in on Sunday.

But what about the files you created on Monday through Thursday? They would be gone. To avoid that, you can have the system do daily (or maybe nightly) incremental backups. They simply add to Sunday's backup only the files that were added or changed since Sunday. With incremental backups, only files added on Thursday before the crash will be gone. Well, neither you nor the system can do backups every minute. (Actually, there are ways. A later section has one example.)
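
To make this concrete, in terms of plain dar commands a full and an incremental backup look roughly as below. This is only an illustrative sketch with made-up names; the script described later generates the real commands, with many more options:

dar --create /USB_DISK/1/full --fs-root /home/USERNAME
dar --create /USB_DISK/1/inc1 --fs-root /home/USERNAME --ref /USB_DISK/1/full
The first line backs up the entire home folder; the second line saves only what changed since that full backup, because it is given the full backup as its reference.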

Now what happens the next Sunday? The system will clean out the folder 1/ to start the new week's backup. So, if your hard disk decides to crash during the new backup you are left with no system disk and no backup. And the hard disk going bad may be the very reason that the new backup fails. Or the same power failure that causes the backup to fail may also cause the disk to fail. To avoid that, you want to move the previous week's backup to another folder 2/ before doing the new full backup. This has the added advantage that you now have backups going back up to two weeks in time. If a file was created last week Tuesday, and got lost last week Wednesday, you can get it back. Sometimes you do not immediately realize you have lost a file or have a virus or system problem.

As noted above, I find that on occasion I want to go way back in time to files that I optimistically "cleaned up" because I would never need them again. Or that I messed up without noticing that I messed them up until much later. That sort of problem requires backups that go way back in time. Now if you just save every backup that is removed from 2/ in a folder 3/, every one removed from 3/ in a folder 4/, and so on to folder 6/, you still only save about the last 6 weeks of backups. But instead you can save only every second backup that you remove from 2/ to 3/. That makes the backup in 3/ 2 or 3 weeks old. And you can save only every third backup that you remove from 3/ in 4/; that makes the one in 4/ about 4 to 9 weeks old. Then if you only save every fourth backup from 4/ in 5/, that one will be 10 to 33 weeks old. If you save every fifth backup from 5/ in 6/, that one will be 34-153 weeks old. And if you save every sixth backup from 6/ in 7/, that one will be 154-873 weeks old. Since 873 weeks, 17 years, far exceeds the useful life of any normal computer, that means that the first backup is kept forever. You now have a set of backups going all the way back in time to the beginning.

Also, if you want to keep fewer than 7 backups, you can do full backups only about once a month. Then "weeks" above become "months." But taking all these incrementals into account while restoring files is of course a bit messier. And you are somewhat more vulnerable if a full backup goes bad. Alternatively, you can decrease the fraction of backups that is moved from one level to the next.

So what if my computer and backup disk get lost? Like when I stupidly mess up my backup disk while trying to restore my lost hard disk? (Yes, it has happened. More than once. The first time was on my Commodore 64 or 128, my memory is somewhat vague. But that was not really my fault. Both the "hard disk" and the backup disk were those unreliable floppies.) Or when the same power spike destroys both? Or both disks might fail due to the same air conditioning failure. Or the building might burn to the ground. Or the sprinkler system might go off. Or the building might get flooded. Or hit by a tornado. Or someone could manage to steal the complete computer system. Or a destructive virus could manage to strike. Or someone might hold a grudge, or want to see some records destroyed. Or something might just fall on it. Or the one thing that I did not think of might happen.

Not a big problem. Periodically I take a backup disk from my work computer home and back my home computer up on it. The same way I keep a backup of my work computer at home. (Yes, both disks are mine.) I am not sure what to do if someone throws an atomic bomb on Tallahassee and both home and work get destroyed. It is the capital of Florida, you know. Then again, I would probably not be around to care. Another way would be to back up over the Internet, of course. Come to think of it, I believe I have a big disk quota on dommelen.net that I am not using...

Basic installation

Install dar_static and tcsh. On Ubuntu or Debian systems, use the Ubuntu Software Center, Synaptic, or apt-get install to do so, whichever you prefer. If you want to encrypt the backups, also install gpg, if not installed already.
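
For example, using apt-get in a terminal, something like the following should do it (package names can vary a bit between releases, and as per the warning at the top of this page you may have to settle for plain dar instead of dar_static):

sudo apt-get update
sudo apt-get install dar tcsh gnupg      (add dar-static if your release has a working one)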

Save the Fortran source backupex.f on your Desktop. To do so, in Firefox, right-click the link and select "Save link as ..." from the menu. Click on the Desktop icon or browse down your home folder to the Desktop folder. If you have the usual Intel-based computer and 32 bit Ubuntu, also put the executable backupex on your Desktop.

Similarly, save the script for doing the backups. There are four slightly different versions of this script to reduce the need for customizing. Select the one that best describes what you want to do:

Now you need to open a terminal. If there is no terminal in the Linux menus, (like for Unity on Ubuntu), try pressing t while holding Ctrl and Alt. Or browse down "File System" to folder "usr" and then "bin" and double click the xterm file.

Now in the terminal window, type in the following command, terminated by Enter:

/usr/bin/tcsh
Watch for typos, and note that the mouse does not work in the terminal (except for copy). The above command starts tcsh, which must be installed for this script. Next enter
cd
cd Desktop
These two lines must not produce any complaints.

Now you need the executable backupex. If you saved that executable above, consider whether you trust the source where it came from. If you do, enter in the terminal

chmod u=rwx ./backupex
./backupex a b
If it responds "*** backupex: character 7 in the file name is not an underscore!...", all is well.

If you do not get this response for whatever reason, install gfortran. Then in the terminal window do

gfortran -v
gfortran --static -o backupex backupex.f
./backupex a b
(Of course, the really careful person would check out what is in backupex.f before entering that last line.) The response to the last line should be "*** backupex: character 7 in the file name is not an underscore!...". (If it does not, and the gfortran -v command returned a proper version number, check for typos or contact the author of this web page.) You can uninstall gfortran again if you want. After you do, backupex should still work the same.

Next you need some information on how the computer thinks of you. In the terminal enter

id
The result will be something like
uid=UID(USERNAME) gid=GID(GROUP) ...
Write your UID and GID, as well as your USERNAME and GROUP down on a piece of paper. On a normal Ubuntu system, UID and GID are both 1000, and USERNAME and GROUP might be your first name. Now enter
echo $HOME
This is the actual location and name of your home folder. It is usually of the form /home/USERNAME. Whatever it is, write it down too. You can list the files in your home folder using
ls -l $HOME

Presumably you want to put the backups on some external USB disk. You need the full specification of this disk. On Ubuntu, the easiest way to find that is to right-click a file or subfolder on the USB disk and select "Properties" from the menu. The properties will have a line something like

Location:    /USB_DISK
Here /USB_DISK is typically something like /media/WHATEVER on Ubuntu. It must start with a /. "On the Desktop" does not qualify. Whatever it is, write down what /USB_DISK is on the piece of paper.

You may want to get an idea how much space there is on the USB disk, and how much you need to backup. In the terminal, enter

df
The USB disk will be a line of the form
/dev/sdBN    NNNNNNNNNNNN NNNNNNNNNNN NNNNNNNNNNNNNN NN%  /USB_DISK
where capitals can vary.

On a system that is not Ubuntu, or not even a graphical one, this df output is another way to figure out /USB_DISK, provided you know the rough size of the USB disk. Other helpful commands here might be mount -l and blkid. To check that you got it right, do

ls -a /USB_DISK
This should show the files on the USB disk, including hidden files. Make sure that you get it right, because if your backups end up on the same physical disk as the one that is being backed up, it does not provide much protection.

Note incidentally that you might want to remount the USB disk for various reasons. In particular, if the disk is sometimes called one thing, and other times something else, automatic backups become a mess. Fortunately, the latest version of Ubuntu, as of Feb. 2012, now seems to mount every disk with a unique name. But another reason you may want to remount the disk might be to ensure that it is owned by you, user USERNAME, rather than by user root. That cuts down on the required sudo prefixes when you want to keep an eye on the backups. If you decide to remount the disk, see the corresponding subsequent section for instructions. It allows you to create your own value for /USB_DISK.

The next step is to set up the backup script backXX, (i.e. backup, backhome, backfol, or backapt), that you put on the desktop. First move backupex and the script into a subfolder "bin" of your home folder using, in the terminal, (don't make typos here),

mkdir ../bin
mv -i backXX ../bin/
mv -i backupex ../bin/
mv -i backupex.f ../bin/
chmod u=rwx,go= $HOME/bin/backXX $HOME/bin/backupex
If the response to the first command is that bin already exists, that is fine too. The other lines must not create complaints. Then edit the script using
gedit $HOME/bin/backXX    (or: nano $HOME/bin/backXX or: pico $HOME/bin/backXX)
Whatever works. When using gedit, grasp a corner of the window that opens and stretch the window to the full width of the screen. Using nano or pico, no new window will open, and the mouse will not work: you need to move around with the cursor keys.

Either way, move down the script to the paragraph that starts with "# Key variables". In the second line it says

set backdisk="" # ...
Put whatever you wrote down for /USB_DISK in between the double quotes. For most scripts, this is the minimum needed to get things working.

However, if you are using script backfol, the fourth line contains

set fsroot="" # ...
You need to put the name of the disk or folder that is to be backed up inside the double quotes. Find this name the same way that you found the name /USB_DISK of the USB disk: by right-clicking a file or subfolder on the disk or folder to back up and looking at the location. It must start with a /.

If you want to use the backfol script to back up two different things, the two versions of the script need to have different names and different values for variable nam inside the script. It is recommended that in each script the variable nam be the same as the name of that script. Appending a 2 to the name of the second script and to the value of variable nam inside it is a simple way to do it, as sketched below.
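
For example, a second copy of the script could be set up something like this (adapt the names to taste):

cp -i $HOME/bin/backfol $HOME/bin/backfol2
chmod u=rwx,go= $HOME/bin/backfol2
gedit $HOME/bin/backfol2   (set nam="backfol2" and set fsroot to the second disk or folder)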

While the above script changes are normally the minimum to get things working, you may need to make other changes. For example, you might want to make changes to the backup command, to change exactly what is being backed up. In a second terminal, enter

man dar                                          (or: dar_static --help | more)
for an extensive description of the backup options. You may also need to make changes to the number of backups that are kept, and how frequently their numbers are weeded out, especially if the USB disk does not have that much space. Read down the script for the possible changes. You may want to have a first look at the changes that you can or cannot make after the backups have been initialized. Do not change things you do not understand.

For systems that are not Debian or Ubuntu, verify the operating system settings, like the find command. On some non-Linux Unix systems, like Sun Solaris, FreeBSD, etc., the long names of options in the backup command backcmd, preceded by --, are not supported. You will need to change these options to their short form, preceded by -. See man dar for details. Also change --ref, --key, and --key-ref further down in the script.
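
For reference, some common long/short equivalences in dar are listed below; double-check them against man dar on your system:

   --create  becomes  -c
   --fs-root becomes  -R
   --ref     becomes  -A
   --key     becomes  -K
   --key-ref becomes  -J
   --prune   becomes  -P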

After changing the script, save it and exit the editor. In nano or pico you do these things by pressing x while holding down Ctrl; then answer y on the question whether to save the file. (Unless you messed up editing the script, of course. Then answer n and edit the script again from scratch. Similarly, exit but do not save the file in gedit if you mess up.)

Now the folder of the backups must be initialized. In order to do so, in the terminal enter

sudo $HOME/bin/backXX init
There should be action on the USB disk now. Watch the light. Read the warnings before you select encryption of the backups. The script should terminate with a message that the backup folder has been created successfully. If not, edit the script again and check for typos. Then rerun the command above.

At this stage, you can do backups using, say

sudo $HOME/bin/backXX auto
But you probably do not want to do that yourself every day. You want the system to do it for you automatically. To set this up, in a terminal window do
sudo crontab -e
This will put you in a file, maybe using the nano editor. If so, use the cursor keys, not the mouse. If it puts you in gedit, stretch the window to the full width of the screen. Either way, go to the end of the file. At the end, add 2 lines something like
HOME=/home/USERNAME
  30 2 * * * /home/USERNAME/bin/backXX yes 1
Here for /home/USERNAME, substitute whatever you wrote down earlier for your $HOME folder. And for backXX substitute the name of the script that you created. The final line will produce daily backups. The backups will start at 2:30 am (2 hours and 30 minutes after midnight). (That assumes that your computer is powered up overnight. Otherwise use another time for the backups. If desired, you can set up the script so that it shuts down the computer after it finishes. See inside the script backXX for details.) With final argument 1 as shown, complete backups will be on Mondays and incremental ones on other days. To do complete backups on Tuesdays instead, use 2 for the final argument. And so on until 7 for Sundays. To do complete backups only about once a month (for example, to save backup disk space), set the final argument equal to auto. If you are running more than one script, put in a line for each, preferably giving each script a different time slot. (Obviously, only one script can shut down the computer. But in this case you may not want to have any script shut down the computer at all, since other scripts might conceivably still be running at that time. If you do need shutdown, start the longest running script last and let that script do the shutdown.)
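
For example, with both the backup and backhome scripts in use, the end of the crontab file might look something like the sketch below; pick times that suit you:

HOME=/home/USERNAME
  30 2 * * * /home/USERNAME/bin/backup yes 1
  30 4 * * * /home/USERNAME/bin/backhome yes auto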

Messages about the backups should appear on your desktop. Keep an eye on it. You can double-click file backXX_MM-DD-YY, or right-click it and open it with the text editor, to get an idea of what is going on. If a file backXX_FAILED shows up, something went wrong. To find out what, look in backXX_MM-DD-YY. If a file got changed right in the middle of the backup, there is little you can do about it. It will be backed up again next time. Ignore any grumbling about .gvfs. Otherwise fix the problem. Delete backXX_FAILED so that you can see the next warning.

In case of a power failure during the backup, after reboot you will see an orphaned backXX_running file. This file never got deleted since the backup never finished. Just delete backXX_running: the incomplete backup itself will be deleted the next time the script runs. (By default, a backXX_running file that is two or more days old will be deleted by the script itself. That is to prevent such an orphaned file from disabling backups forever. This behavior can be changed inside the script.) If you actually want to disable backups, in a terminal do

sudo $HOME/bin/backXX disable
To enable backups again, use enable for the final argument.

It is a very good idea to check that you are indeed backing up what you think you are backing up. The day after setting up, (or more generally, after the first backup has been done), look at the readme.txt on the backup disk, inside folder backXX. This document will tell you how you can restore single folders or files using dar_static in a terminal. Now if you append ' -e -v' at the end of the dar_static lines, (without the quotes but with the spaces), it will not actually restore the files, just show you what would happen. That allows you to verify that the important files are being backed up, and that you are able to restore them when needed. (Use the right-click Properties if you are unsure how to specify the folder or file that is to be imaginarily restored.)
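
Purely as an illustration, such a test might look like the line below. The exact dar_static lines to start from are the ones in readme.txt; the backup and folder names here are made up:

dar_static -x /USB_DISK/backXX/1/full -R temporary_folder -g home/USERNAME/Documents -e -v
Here the -e -v at the end turn it into a harmless "show only" run.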

Some special backups

The first backup the script does will by default be kept the maximum time. But if your computer is not yet completely set up at the time of the first backup, you might want to keep a slightly later backup for the maximum time. That can be done by running the later backup from a terminal as

sudo $HOME/bin/backXX max
where backXX is the script name.

As mentioned in an earlier section, I like to keep an occasional backup of my home computer on the backup disk of my work computer. (And vice-versa.) To do this, you must make sure that the backups do not conflict. In particular, the backups of the two computers must be in folders of different name. To achieve that, go into the script of one of the two computers, as in the "Basic installation" section, and under Key variables, change the value of the nam variable to something else. One good choice is to change the value of variable nam in the script of the work computer from backXX to backXXwork. (If you change nam after you have already initialized the backups, change the name of the backup folder on the work backup disk correspondingly from backXX to backXXwork. Also, if the variable marker in the work script contains the string $nam, replace $nam with the old value of nam, backXX. To be sure you got it right, you might want to try an incremental backup on the work computer using sudo $HOME/bin/backXX auto.)

Having done this, you can take the work backup disk home and mount it on the home computer. Then if /ALTERNATE_USB_DISK is the specification of the work backup disk on the home computer, backups of the home computer on the work backup disk must be initialized as

sudo $HOME/bin/backXX /ALTERNATE_USB_DISK init
without complaints. After that the home computer can be backed up to the work backup disk as
sudo $HOME/bin/backXX /ALTERNATE_USB_DISK auto
The work computer can similarly be backed up to the home backup disk; swap the terms home and work.

Making script changes after the fact

There are restrictions on what you can change in the script after you have already initialized the backup folder on the USB disk. (Unless, of course, you delete or rename that backup folder and start from scratch. This section assumes that you want to keep the backup folder and any backups already in it as is.)

If the name of your USB disk with the backup folder changes, you will need to change the variable backdisk in the script correspondingly.

Do not change the variable fsroot, which is the folder that is backed up. If you want to change what is being backed up, create and initialize another script, with a different value for variable nam. That will create backups kept in a different folder on the backup disk.

You can however make changes to the backcmd in the script to adjust exactly what is being backed up.

You may have to change the value of variable nam if there is a conflict in name with a different set of backups. An example was given in another section. To change nam after the backups have already been initialized, you will have to change the name of the backup folder on the USB disk correspondingly. Also, if variable marker in the script contains the string $nam, replace $nam with whatever the old value of nam was. You may also want to update the readme.txt file in the backup folder.
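
As a sketch, assuming the old value of nam was backXX and the new value is to be backXXnew:

gedit $HOME/bin/backXX      (change: set nam="backXXnew"; if marker contains $nam, write out backXX there)
sudo mv /USB_DISK/backXX /USB_DISK/backXXnew
The sudo may not be needed if you own the USB disk.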

If you want to turn off encryption, remove the stuff between the double quotes in variable marker and do a new backup with

sudo $HOME/bin/backXX new
The previous backups will remain encrypted and will still require the password to be used. Note that if you are using script backfol, doing this can be problematic. Instead you may want to start a new set of backups, as described in the next paragraph.

To turn on encryption, it is best to create a new set of backups. Rename the existing backup folder on the USB disk to something else. Go in the script, and put .$nam.bin inside the double quotes of variable marker, if it is not there already. Then initialize the new backup set as

sudo $HOME/bin/backXX init
Select encryption to encrypt the new backups.

The script variables n3, n4, n5, ... govern how many backups are kept and how widely spaced apart in time they are. The more nonblank values, the more backups are kept. The larger the values, the more widely spaced the backups are in time. You cannot change blank values of these parameters into nonblank values or vice-versa after initializing the backups. But if you later decide that you have too many backup sets, simply delete the excess subfolders 9/, 8/, 7/, 6/, ... in the backup folder and leave the corresponding n9, n8, n7, n6, ... alone. You can also increase the nonblank values of n3, n4, ... to spread future backups out further in time. You cannot normally decrease n3, n4, ... after the fact. (You can do so immediately after making a "max" backup.)

As noted, larger values of n3, n4, ... produce backups more widely spaced. If you want to see for given values precisely what backups are present at any stage, Fortran program count.f will do that. It requires gfortran or equivalent. Edit the program using gedit or equivalent, to change the values of n3, n4, ... to your liking. Then in a terminal

cd Desktop
gfortran count.f
./a.out
This will show the age of the backups in folders 1/, 2/, 3/, ... after each new full backup. Numbers are in weeks, or in months if you use auto in the crontab file of the Basic installation section. The program assumes that the same values of n3, n4, ... are used from the start and no "max" backups are done.

Various other parameters behind n3, n4, ..., n9 in the backXX script can be changed if there is a real need. Use your judgement while doing so.

More about encryption

If you choose to encrypt the backups, the script has to have a password. And since you are not around to provide it at backup time, the password needs to be kept somewhere. It is kept inside the folder being backed up. This means that a bad person with access to your computer can see the password. Like someone who steals your laptop, or has access to the computer some other way. That person may also be able to figure out your bank or paypal account name from your browser history and other traces. So it is a very bad idea to use the same password for the encryption as you use for your bank or paypal account. Or for other sensitive things. To be sure, the password is being kept in a "hidden" file, superficially encrypted. But that will not stop a knowledgeable or determined person. (The one exception is if the folder being backed up is itself also encrypted, and the bad person gets hold of the computer when it is turned off or is on screen lock.)

If you discover at the time you are trying to restore the entire lost folder that you forgot the password, you are out of luck. Wait a few days, and maybe it comes back. However, if you just lost an individual file and can still see the rest of the files in the folder being backed up, you can refresh your memory about the password. Just like the bad person could do it. A bit of thinking will suggest how to do that; it involves temporarily renaming the backup folder backXX.

Remounting the USB disk

There might be various reasons you would want to remount the USB disk on which the backups are stored: for example, to give it a fixed name so that automatic backups do not get confused, or to make sure that it is owned by you rather than by root, as discussed in the Basic installation section.

The instructions below are for a Linux system. It is further assumed that you have opened a terminal and know your UID and GID, and USERNAME and GROUP, as found in the Basic installation section. And that you know the current full specification of the USB disk, call it /USB_DISK, also found as in the Basic installation section.

To be safe, take out any USB sticks that might just confuse things. Leave only the USB disk that you want to put the backups on. And have a look at the contents of the USB disk, so that you can recognize the disk. In a graphical environment, double click the disk and under View, select "Show Hidden Files." Try to also look at it using a terminal:

ls -a /USB_DISK
If that does not show the same files, you have /USB_DISK wrong; try again.

The instructions below will remount /USB_DISK as /media/usbdisk. You could change the name "usbdisk" to something else, but spaces in the name tend to be a nuisance. "Backup_Disk" is better than "Backup Disk". I further dislike typing capitals.

The first step is to identify the UUID of the USB disk. In the terminal, allow root login by entering

sudo passwd root
and enter your password three times. Then enter
su   (you are now no longer user USERNAME but user root, test with whoami)
tcsh
df
Study the output of the final command carefully and find the line for the USB disk. It should be something like
/dev/sdBN    NNNNNNNNNNNN NNNNNNNNNNN NNNNNNNNNNNNNN NN%  /USB_DISK
where the capitals may vary. Write down what /dev/sdBN is. Next enter
blkid
and find the line with /dev/sdBN found above, something like:
/dev/sdBN: LABEL="..." UUID="SOMEUUID" TYPE="SOMETYPE"
Write down what SOMEUUID and SOMETYPE are. Don't make typos and keep O and 0 apart.

Now you first need to create the new USB disk location. In the terminal do

cp -i /etc/fstab /etc/fstab.save  (this will provide a sanity check later)
mkdir /media/usbdisk   (leave out /media here and below if this complains)
chown UID:GID /media/usbdisk (substituting whatever your UID and GID were)
ls -ld /media/usbdisk       (should show your USERNAME and GROUP as owner)
The /media part is specific to Debian or Ubuntu. Other systems might use /mnt. Even on Debian or Ubuntu, you do not really need the /media part, but if you do have it, it may put an icon for the disk on the Desktop. (At least it does if you use gnome-session-fallback and enable these icons using gnome-tweak-tool like I do.)

Now comes the real work. In the terminal, enter (don't make typos)

gedit /etc/fstab                 (or: nano /etc/fstab or: pico /etc/fstab)
Whatever works. When using gedit, grasp a corner of the window that opens and stretch the window to the full width of the screen. Using nano or pico, no new window will open, and the mouse will not work: you need to move around with the cursor keys. Go to the end of the file and add a line of the form
UUID=SOMEUUID /media/usbdisk auto defaults,uid=UID,gid=GID 0 0
Here SOMEUUID must be what you read off earlier, and the same for the UID and GID further down the line. If you changed /media/usbdisk into something else, do the same thing here. Carefully check for typos; especially in the UUID and with the commas. That takes much less time than restarting the computer and then discovering that the disk will not mount. Keep O apart from 0.

(Note: If SOMETYPE before was ntfs, and you are using a very old version of Debian or Ubuntu, you may want to install ntfs-3g using Synaptic or apt-get, and then replace auto above by ntfs-3g. I understand that nowadays ntfs and ntfs-3g are equivalent. As long as you were able to write to the disk, there should be no problem anyway.)

After changing /etc/fstab, save it and exit the editor. In nano or pico you do these things by pressing x while holding down Ctrl; then answer y on the question whether to save the file. (Unless you messed up editing /etc/fstab, of course. Then answer n and edit /etc/fstab again from scratch. Similarly, exit but do not save the file in gedit if you mess up.)

Now enter

diff /etc/fstab /etc/fstab.save
This will show what lines are different after editing /etc/fstab. It should show only the last line to be different. If any of the earlier lines have changed, your system might no longer be bootable. In that case immediately enter, without typos,
cp /etc/fstab.save /etc/fstab
and either give up or return to the gedit /etc/fstab stage above and try again.

If only the final line in /etc/fstab has been changed, and correctly, exit the su process,

exit                   (you are now user USERNAME again, test with whoami)
Restart the computer. After reboot, check that the USB disk is indeed correctly mounted and that you own it. In a terminal, enter
mount -l
df
ls -ld /media/usbdisk
ls -a /media/usbdisk

The second-to-last command should show that you own the disk, and the last should show the files on the USB disk. If they do not, in the terminal enter

sudo cp /etc/fstab.save /etc/fstab
sudo rmdir /media/usbdisk
with no typos. Restart the computer to restore the disk to what it was. Then ask someone more experienced for help. Or just leave the disk as it is now.

Additional warning when things go wrong

By default the backup scripts put messages on your desktop. But these messages may be obscured by open windows. This section has some ways to make the error messages more visible.

Put a mailbox picture on your Desktop

The steps below will put a picture of a mailbox on your desktop. When a backup error message shows up, the arm on the mailbox will be raised.

To enable this, in the startup applications add an application that runs

xbiff -f /home/USERNAME/Desktop/backXX_FAILED
Here /home/USERNAME should be your HOME folder as found in the Basic installation section, and backXX is backup, backhome, backfol, or backapt, depending on the script you run. At login, this will put a picture of a mailbox on your Desktop that you can drag to a visible place. If a message backXX_FAILED shows up on your desktop, the arm on the mailbox will be raised.

I have been unable to find a graphical "alarm clock" sort of program that will do the same thing in a neater way. That does not mean that they do not exist, just that the ones I looked at had no mechanism to look at an arbitrary file.

Get an interactive warning at login

This section tells you how you can be warned at login time that there are backup error messages. In particular, if there are error messages, a "Run Me" icon will be put on your desktop at login. Clicking this icon will give an interactive warning. (And it can also do other interactive tasks that you want to execute at login time, like mount encrypted disks or run backups.) Afterwards the Run Me icon will disappear.

To set this up, put scripts startup, startup2, and icon startup2.desktop on your Desktop, by right-clicking them, selecting Save As, and clicking Desktop or browsing down Home to Desktop. Preserve the names while doing so. You should already have a subfolder bin in your home folder from the script installation. Otherwise create one. Then in a terminal do:

tcsh
cd
mv -i Desktop/startup Desktop/startup2 bin/
chmod u=rwx,go= bin/startup bin/startup2
mkdir Desktop_saved
mv -i Desktop/startup2.desktop Desktop_saved/
chmod u=rwx,go= Desktop_saved/startup2.desktop

Go into Startup Programs (gnome-session-properties), click Add, then put Start Up behind "Name:" and /home/USERNAME/bin/startup behind "Command:". Here USERNAME is your username (from whoami). Press Save and Close.

That should do it. However if desired, you can add your own noninteractive commands that you want executed at login time. To do so, in the terminal do

gedit bin/startup    (or: nano bin/startup  or: pico bin/startup)
Add the noninteractive commands immediately behind the line
# Noninteractive commands to be executed:
Save startup and exit the editor.

If you have interactive commands to be executed at login time, do not put them in startup like that. Interactive commands are commands that require input, like passwords say, from the user. Put such commands in startup2 using

gedit bin/startup2    (or: nano bin/startup2  or: pico bin/startup2)
Add the interactive commands immediately behind the line
# Interactive commands to be executed:
Save startup2 and exit the editor. In addition, do
gedit bin/startup    (or: nano bin/startup  or: pico bin/startup)
Assuming that the interactive command has to be executed on every login, find the line
#goto do_startup2
and remove the # from it.

If the interactive command has to be executed only occasionally, you need to learn a bit about tcsh if statements. Then follow the general ideas of the example already there that checks whether there are ..._FAILED files. If you do not have time for that, just remove the # in startup as above and then, inside startup2, execute your INTERACTIVE_COMMAND as follows

echo -n "Do you want to execute INTERACTIVE_COMMAND? [y/N]: "
askcmd1:
set ans="$<" #"
if ("$ans" == "") set ans=n
if ("$ans" == n) goto endcmd1
if ("$ans" == y) goto docmd1
echo -n "\aPlease answer y for yes or n for no: "
goto askcmd1
docmd1:
INTERACTIVE_COMMAND
endcmd1:
If you have another interactive command like that, use similar lines for it, but change cmd1 into cmd2. If you log in and out a lot, you will grow to love learning tcsh if statements.

E-mail yourself the errors

Of course, the best warning would be to send yourself an e-mail message if a problem occurs. Nowadays almost everyone reads their e-mail at least once a day. That is ideal. The difficulty is that most people use an external mail server, like a company one, gmail, or whatever. That produces password problems.

What follows are two possible ways to deal with that. The first way applies only to a special set of users. The second way applies quite generally, but it has its own password problems.

Using internal mail

If you actually read internal mail, like in /var/spool/mail say, it is not difficult. This might apply to someone who uses fetchmail to get mail from a server and then puts it on the local machine. Or it may apply to someone who actually runs a mail server on the local machine.

For those few, the last time I looked, Debian had a working internal mail system by default. However, if you logged in to root, you did not get mail there. What I had to do was create a link to the nonroot user mailbox as

ln -s /var/mail/USERNAME /var/mail/root
where USERNAME was the nonroot user.

On Ubuntu, internal mail is not enabled. You need to install the mailutils package. In Synaptic, that will not install unless you first go into Preferences and enable "Apply changes in a terminal window." In particular, during the installation you will need to tab to an OK button and then use the cursor keys to make a selection from a menu. Root will get e-mail on Ubuntu.
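
If you prefer the terminal over Synaptic, something like the following should do it; expect to be asked a few mail configuration questions during the installation:

sudo apt-get install mailutils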

Then you can make the backup script send you local email if a problem occurs. To do so, save the script showstat1 on your desktop. Then in a terminal, following similar procedures as in the Basic installation section

cd
cd Desktop
mv showstat1 ../bin
chmod u=rwx,go= $HOME/bin/showstat1
gedit $HOME/bin/showstat1
                   (or: nano $HOME/bin/showstat1 or: pico $HOME/bin/showstat1)
In the script, change name@address into USERNAME as determined in the Basic installation section. Save and exit. Try
echo "Test message text" | $HOME/bin/showstat1 "Test subject"
and see whether it arrives. If it does
gedit $HOME/bin/backXX    (or: nano $HOME/bin/backXX or: pico $HOME/bin/backXX)
with backXX equal to backup, backhome, backfol, or backapt, depending on the script you are running. Go down the script until the line
set mailprog=""
and put $HOME/bin/showstat1 within the double quotes. Save and exit.

The above solution does not even apply to me. While I do use fetchmail, that is on another machine.

Using a mail server

The alternate method to send e-mail from the script is to use a mail server. But this mail server will demand a password before it accepts the e-mail. And if a script sends out the e-mail, then this password will be stored on your computer in a way readable to any bad person who manages to get access to your computer by hook or by crook. The same bad person might be able to figure out your bank account and that you use paypal from things like browser history, your documents, old e-mails, and such. So if you use the same password for your e-mail as for your bank or paypal account, the bad person will be much enriched. But you will be the poorer for it. And even if you only use the e-mail password for e-mail, you might not want someone bad going through your personal e-mail.

(I have been informed that mail readers like Mozilla Thunderbird will in fact leave your mail server password ready for anyone to read unless you select a master password. I know that the dial-up program wvdial does. It stores your password all over the machine in fact, in case someone might miss it. And so presumably do other programs that ask your password. Personally, I try to confine any plainly visible passwords to an encrypted disk, using symbolic links. Then if someone walks off with my laptop, they hopefully don't also walk off with my bank account. Then there are complete morons like scientific journals who demand a secure password, and then e-mail you back that password over unprotected plain e-mail. But the shortcomings of others are not a topic here.)

Anyway, if you have an account on an e-mail server for which you do not worry about the password, then there is a relatively easy solution. First install package sendemail, plus its suggested packages libio-socket-ssl-perl and libnet-ssleay-perl. (The latter two packages are not automatically installed; you must do so explicitly.)
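
Using apt-get, that would be something like

sudo apt-get install sendemail libio-socket-ssl-perl libnet-ssleay-perl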

Next save the script showstat2 on your desktop. Then in a terminal, following similar procedures as in the Basic installation section

cd
cd Desktop
mv showstat2 ../bin
chmod u=rwx,go= $HOME/bin/showstat2
chmod u=rwx,go= $HOME/bin
gedit $HOME/bin/showstat2
                   (or: nano $HOME/bin/showstat2 or: pico $HOME/bin/showstat2)
The example is set up to use a google gmail account, but it can be modified for any typical e-mail server, like the one of your company or ISP. In the script, change name@address into the e-mail address at which you want to receive e-mail about backup problems. Change smtp.gmail.com into the address of the e-mail server of your company or ISP, if not gmail, that will send out the e-mail. Change gmail_login_name and gmail_password into your login name and password for that server. Change gmail_login_name@gmail.com into your address on that server. Save and exit. Then try
echo "Test message text" | $HOME/bin/showstat2 "Test subject"
and see whether it arrives. If it does
gedit $HOME/bin/backXX    (or: nano $HOME/bin/backXX or: pico $HOME/bin/backXX)
with backXX equal to backup, backhome, backfol, or backapt, depending on the script you are running. Go down the script until the line
set mailprog=""
and put $HOME/bin/showstat2 within the double quotes. Save and exit.

Of course it can never hurt to move showstat2 to an encrypted location, if you have one. In particular, if you have an encrypted Private folder as described in a later section, put showstat2 inside Private and put $HOME/Private/showstat2 within the double quotes of mailprog above. Then if someone gets hold of your computer when it is turned off they are out of luck. Unless of course they can guess the password. But that is another matter. Remember to check the bin folder in HOME for hidden backup files of showstat2 that might be left (in the terminal do: ls -a $HOME/bin).

Running backups every 10 minutes

Suppose you do daily backups overnight. Then if the hard disk crashes, you can get back all files that existed last night. But what about the files you created today? They will be lost. If you are super loath to lose any files at all, you could consider doing backups much more frequently.

You would not want to do this for your entire system. But backing up just your home folder every twenty minutes or so might be more realistic. If you want to do so, it is a good idea to go back inside script backhome and find the line

set skip=""
Put inc between the double quotes. Also, with such frequent backups, you might want to, say, double the values of n3, n4, n5, n6, and n7 in the script. (Alternatively, if you also back up your complete system, including your home folder, with the backup script, you may want to keep only the latest of these frequent backups. That is done by putting a 2 within the double quotes of variable delete_backup.)
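
After these edits, the relevant lines in backhome would read something like the sketch below; the exact comments in the script will differ:

set skip="inc"
set delete_backup="2"   # only if you just want to keep the latest frequent backup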

To get backups to run every 20 minutes on weekdays, with a full backup over lunch, edit the crontab file as in the basic installation instructions. However, make the final lines:

HOME=/home/USERNAME
  10-59/20 8,9,10,11,13,14,15,16 * * 1-5 /home/USERNAME/bin/backXX yes auto
  10 12 * * 1-5 /home/USERNAME/bin/backXX yes new
The second line causes backups to run at 10, 30, and 50 minutes after the hour, for the hours 8, 9, 10, 11, 13, 14, 15, and 16, (i.e. basically 8 to 5, except for the 12-1 lunch hour), on Monday to Friday. The final line causes a full backup to be done during lunch starting at 12:10. To get backups every 10 minutes, replace 10-59/20 by 5-59/10 and 10 12 by 5 12. Every half hour is 15-59/30 and 15 12.
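
Spelled out, the every-10-minutes version of those two lines would be:

  5-59/10 8,9,10,11,13,14,15,16 * * 1-5 /home/USERNAME/bin/backXX yes auto
  5 12 * * 1-5 /home/USERNAME/bin/backXX yes new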

Note that you might get some complaints about files changing right while they were being backed up. And if 10 minutes are not enough to do the backup, you will get an earful.

There is no point trying to back up more frequently than every 10 minutes, because it will take you a lot more time to restore the backup. Just retype what you did in those 10 minutes.

Backing up encrypted files

Suppose that you travel and that on your portable you have sensitive documents. If someone steals your portable, they will be able to read those documents by simply starting the computer from a live disk. Not such a pleasant idea, is it? If you dial in to your Internet service provider using wvdial like I used to do, the thief can also read your password for your ISP. Not so great either, especially if you use the same password or something similar for paypal or your bank account. Mozilla Thunderbird will make your e-mail password available in readable form to the thief. In fact, essentially any application on your computer that asks for a password is likely to leave that password around in readable form. Then there are old e-mails that might contain sensitive information, or even passwords.

What can be done? Well, you can encrypt the files. Then they cannot be read without providing a password in some form. There are various ways to implement that, as discussed in the next subsections.

Note first that if the thief cannot read the files without a password, neither can you if you forget the password. Be careful. Write down the password and put it in a secure location.

Encrypting the entire system disk

If the entire system disk is encrypted, all files are protected against prying eyes when the computer is turned off. (To be picky, the files in /boot will still be readable.)

I have no personal experience with encryption of the entire system disk. But as far as I understand the concept, it should not affect backups. Backups can only be done if the system is fully started up, and at that time all files should be readable to root. And there should not be visible encrypted copies of files. (That is a problem with the ecryptfs encryption of the next couple of subsections.)

If anyone has better information, let me know and I will put it here.

Encrypting just your home folder

Encrypting just your home folder makes all files in it off-limits. Note that if you use wvdial, it will also put your dial-up password in system files, which would still be readable to prying eyes. A later subsection has some tricks to work around that.

If you want to encrypt your home folder, I strongly recommend that you use System Settings / User Accounts to create a user called Alt with administrator privileges. And do not forget to set a password for this user. The reason is that you can do certain things only easily when the user with the encrypted home folder is not logged in. And there is no need to encrypt the home folder of Alt if you do not put anything in there.

I am unable to get ecryptfs-migrate-home to work with Ubuntu 11.10 and gnome-session-fallback. I get fatal messages that the encrypted home folder is not properly setup. Apparently, that is a known Gnome 3 problem. Therefore I cannot say with confidence how to best back up your encrypted home folder.

I can however say with confidence that you should solidly back up your home folder before you encrypt it. I was unable to log in after encrypting my home folder. If this happens to you, log in as Alt instead. Then in a terminal do

su         (may need to do sudo passwd root first)
tcsh
cd /home
ls
mv -i USERNAME save_encrypted
mv -i USERNAME.!@#$% USERNAME
where USERNAME is your normal username and !@#$% can vary; use ls or Tab completion to find it. If you can log in normally now and all is well again, do
su
tcsh
cd /home
ls
rm -r 'save_encrypted'  (MUST use quotes)
cd .ecryptfs            (MUST NOT COMPLAIN)
pwd                     (MUST SAY /home/.ecryptfs)
ls ../USERNAME          (SHOULD SHOW YOUR NORMAL HOME FOLDER)
ls USERNAME             (SHOULD SHOW CRAP)
rm -r 'USERNAME'        (ONLY IF YOU MEET THE TESTS ABOVE, USE THE QUOTES)
Then give up and try the method of the next subsection.

From what I saw from the partially completed ecryptfs-migrate-home, my educated guess is that the readable home folder /home/USERNAME is stored in encrypted form in hidden folder /home/.ecryptfs/USERNAME. So if USERNAME is logged in, there are two copies of each file, a readable one in /home/USERNAME and an encrypted one in /home/.ecryptfs/USERNAME. When USERNAME logs off, the readable files disappear. Files popping up and disappearing is an obvious concern for backups. Having two different copies of the same file, one encrypted and one readable, is too.

If the /home/USERNAME home folder is backed up with the backhome script, the big concern is to ensure that USERNAME is normally logged in when the backups are done. If the files are not readable, nothing can be backed up. And in particular, backhome init must be run when the user is logged in and the files in the home folder are readable. This is essential. Also, it is essential that in the backhome script, variable nam is not blanked out. That is to ensure that an occasional case where USERNAME is in fact not logged in at backup time is correctly handled. When the need arises, backups should be restored logged on to USERNAME with the home folder in unencrypted state.

Now consider the case that instead the entire system is backed up with the backup script. Then I would think you would want to back up the encrypted version of the home folder. Dealing with nonencrypted files that may pop in and out of existence would be a big mess. Dealing with two versions of the same files, encrypted and unencrypted, would be an even bigger mess, at restoration time when you need problems least. To avoid backing up the nonencrypted files, put a line

   --prune home/USERNAME \
inside the backcmd alias in the script, like immediately before the line with --alter=no-case.

Now consider restoration time with this scheme. When the backups are used to restore the entire system, logged in as Alt above, with USERNAME logged off, only an empty folder /home/USERNAME will be created. But as far as I see, the only additional thing you would have to do now is

su   (may need to do a sudo passwd root first)
cd /home/USERNAME
ln -s /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt README.txt
ln -s /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop \
   Access-Your-Private-Data.desktop
chown -h USERNAME:GROUP *
where on Ubuntu GROUP is normally the same as USERNAME. (Use
ls -ld .
to check.) The files should now be back when USERNAME logs back in.

The bad thing about using the backup script this way is restoring a single file or so. I assume what you could do is have USERNAME log out. Then logged in as Alt, move the encrypted home folder out of the way:

su   (may need to do sudo passwd root first)
mv /home/.ecryptfs/USERNAME /home/.ecryptfs/USERNAME_org
Restore the /home/.ecryptfs/USERNAME of the desired date from backup following the corresponding section. Let USERNAME log in, get the desired files out of /home/USERNAME and put them somewhere else, then log out again. Move the original encrypted home folder back in place:
mv -i /home/.ecryptfs/USERNAME /home/.ecryptfs/USERNAME_restored
mv -i /home/.ecryptfs/USERNAME_org /home/.ecryptfs/USERNAME
When the user logs back in, the restored files can be moved back into the home folder.

If all is well, delete the restored encrypted home folder

rm -r '/home/.ecryptfs/USERNAME_restored'  (MUST use quotes)
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

The alternative I recommend is to do backhome script backups as described earlier in addition to the backup script backups. Then the backhome backups can be used to restore individual files or folders. This also provides an additional amount of security. Another section might also be relevant here.

I guess if USERNAME is always logged in, you could instead have the backup script backup the readable copies of the files. In that case, the prune line in the backup script would be

   --prune home/.ecryptfs/USERNAME \
Restoration of individual files or folders is possible logged on as USERNAME in the encrypted folder. Restoration of the entire system would produce an unencrypted home folder. With USERNAME logged off, you could temporarily take the restored readable files out of the folder into some other folder. Then you could try doing the ln -s stuff above, and see whether the user can properly login. If yes, logged in as USERNAME it should be possible to move the files back into the home folder, which would reencrypt them on the way. I think this scheme is messy and would not recommend it.

Let me know if you have additions or corrections to the above.

Encrypting just a folder Private

One approach that I did get to work on Ubuntu 11.10 with gnome is just having a single folder Private that is encrypted. You can then put all your sensitive documents and stuff in that folder. A thief can see your other documents, but not what is in folder Private.

There are two ways to do it. This section will cover the standard Ubuntu encryption method ecryptfs. The next section will discuss how to do the same using TrueCrypt. TrueCrypt allows you to use the encrypted files from both Linux and Windows.

If you want to use ecryptfs, I recommend that you use System Settings / User Accounts to create a user called Alt with administrator privileges. And do not forget to set a password for this user. The reason is that you can do certain things only easily when the user with the encrypted Private folder is not logged in. (Note that you could probably wait with creating Alt to when it is needed.)

To create a folder Private, install ecryptfs-utils and cryptsetup. Then in a terminal do from your user account

man ecryptfs-setup-private
ecryptfs-setup-private --no-fnek --noautoumount
Be careful: there is a u between noauto and mount. The above options are chosen for making backup and restoration easier. The --no-fnek option makes it possible for someone with access to your computer to see the files inside folder Private but not their contents. The --noautoumount keeps the files inside folder Private readable for the root user even after you log off. So they can still be backed up. That may of course be a security risk if someone gets access to the computer while it is still turned on and can guess the root password. When you turn off the computer, (as in shut down, not hibernate), the files are no longer readable.

You will be asked for both your login password and a password for the folder. Do not forget either password! Write them down and put them in a safe location. A thief cannot get at your files without the password, but neither can you.

Note that from now on, there are two copies of each file involved. There is the file that you see inside folder Private, but there is also an encrypted copy of the file in a hidden folder .Private. Do not delete hidden folder .Private or any of the files in it. That would be equivalent to deleting the files in the readable folder Private: you would lose them. The safe thing is to keep your hands off hidden folder .Private.

What I would do to back up in this case is as follows. Prune the Private folder from your regular backups. In particular, if you use the backhome script, add a line

   --prune Private \
in an appropriate place inside the backcmd alias, like immediately before the line with --alter=no-case. If you use the backup script instead, make that line
   --prune home/USERNAME/Private \
with USERNAME the user name being backed up.
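After the next backhome run, it is worth a quick check that the prune really took effect. The first line of the newest .log file on the USB disk records the dar command that was used (as discussed in the section on restoring files), so something like the following should show the --prune option in it (the file name is a placeholder, as elsewhere on this page):

head -1 /USB_LOC/backhome/1/MMDDYY_NNNNNNN.log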

In addition set up separate backups of the files in Private using the backfol script. In the script, the value of fsroot should be /home/USERNAME/Private, and nam should not be blanked out.

If you did not use option --noautoumount above, make sure to run the backfol script at a time when the user is normally logged in. Otherwise the script cannot see the files. If that is a problem, one possibility is to have the user him or herself run the backup immediately upon login using the command

/home/USERNAME/bin/backfol auto
To make that easier, you could add the above command to the startup2 script discussed in another section. (This must be an interactive application, or the error messages would get lost.) Then the user only needs to double-click an icon on the desktop. Either way, the user should not log out before the backup is done. And occasionally a file may be reported as changing during the backup. That is why I recommend --noautoumount if possible. In principle however, it is possible to not do backfol backups; see below for more.

Use the backfol backups to restore individual files or folders, logged in as USERNAME, as per instructions in the corresponding section.

To restore the entire folder however, log in as Alt above, with USERNAME logged off, and follow the appropriate section. In doing so, restoring the backup or backhome backups will create an empty folder Private. Then do

cd /home/USERNAME/Private
ln -s /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt README.txt
ln -s /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop \
   Access-Your-Private-Data.desktop
chown -h USERNAME:GROUP *
where on Ubuntu GROUP is normally the same as USERNAME (use id USERNAME to check.) On login to USERNAME the files restored from backup should be back.
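As a quick check that the links were created and are owned by the right user (the details of the output will vary):

ls -la /home/USERNAME/Private
Both README.txt and Access-Your-Private-Data.desktop should be listed as links owned by USERNAME.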

In the absence of backfol backups, I guess you might also restore individual files or folders in Private from the backup or backhome backups. It is just a bit more complicated. First, consider the case that you used the --no-fnek option as I recommended above. In that case, just follow the instructions in the appropriate section to get the backed-up hidden folder .Private into temporary_folder. Then just move the files that you want to restore out of temporary_folder into the appropriate places in .Private (not Private). The procedure is unchanged, except that .Private is hidden. (Use ls -a to see hidden files in the terminal. Graphically, there is an option in the view menu.)
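As an illustration, assume the --no-fnek option was used, that a hypothetical file Documents/Resume.doc inside Private is to be brought back, and that you restored into temporary_folder inside /home/USERNAME as per the backhome instructions. The move might then look like:

mv -i temporary_folder/.Private/Documents/Resume.doc /home/USERNAME/.Private/Documents/
On the next login as USERNAME (or after refreshing the Private folder), the readable Resume.doc should be back inside Private/Documents.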

If you did not use the --no-fnek option, the above will not work, because you will not be able to make sense out of the filenames in .Private. Instead, logged in as Alt, with USERNAME logged off, restore the entire home/USERNAME/.Private (backup script) or .Private (backhome script) in temporary_folder as per instructions in the corresponding section. Move the restored .Private out of temporary_folder to /home/USERNAME/.Private_restored. Then, without typos,

cd /home/USERNAME
mv -i .Private .Private_org
ls .Private     (must say: No such file or directory)
mv -i .Private_restored .Private
Log in as USERNAME and get the desired restored files out of the backup Private and put them somewhere else. Log off USERNAME and return to Alt, then
su
cd /home/USERNAME
mv -i .Private .Private_restored
ls .Private     (must say: No such file or directory)
mv -i .Private_org .Private
Log in as USERNAME and put the restored files back in the normal Private. If all is well, get rid of the unused restored files:
sudo rm -r '/home/USERNAME/.Private_restored'  (MUST use quotes)
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

Encrypting just a folder Private, but using TrueCrypt

A different way to create an encrypted folder Private is to use TrueCrypt. This is what I do myself. The advantage of TrueCrypt is that you can access the files in Private from both Linux and Windows. Sometimes I get stuck with a MS Word document that Open/LibreOffice simply does not handle (especially the equations).

To set up TrueCrypt encryption, download TrueCrypt from its web site, put the contents on the desktop using say Archive Manager, then in a terminal, logged on as USERNAME, do

cd Desktop
sudo ./truecrypt-...setup...
rehash
whoami    (will tell you your USERNAME)
truecrypt
Select Create Volume / Create encrypted file container / Standard volume. Make the file container /home/USERNAME/.Private.tc with USERNAME your username. Next, next. Select a suitable size, big enough to hold your documents, and a password that is secure and that you will not forget. Write down the password and put it in a safe location. Select FAT type. Move your mouse randomly before clicking Format. After creation of the container, exit TrueCrypt and in the terminal do
cd
pwd            (should say /home/USERNAME)
mkdir Private
chmod u=rwx,go= Private
Make sure to keep your hands off (hidden) container .Private.tc. Do not delete it! That would be like deleting all your files in Private.

If you want to share the files with Windows, it may be best to move .Private.tc from /home/USERNAME to a suitable location on the Windows disk, so that you can find it from Windows. When doing this from the graphical environment, first turn on View / Hidden Files. (A better approach might be to create .Private.tc from Windows in the first place. That allows ntfs to be used, for one.)

Next in the terminal, mount the device

cd
sudo truecrypt /LOCATION/.Private.tc Private
where /LOCATION is the location where you moved .Private.tc. (From the graphical environment, with View / Hidden Files turned on, you can right-click .Private.tc and select Properties. This will show the current location. Or if you left .Private.tc in your home folder, you can simply leave out /LOCATION/.) Enter your sudo password and then the container password, and the volume should now be mounted. To check, in the terminal enter
truecrypt -t -l
This should say
1: /LOCATION/.Private.tc /dev/mapper/truecrypt1 /home/USERNAME/Private

Create a Windows-readable test document inside the subfolder Private of your home folder, like an MS-Word one using Open/LibreOffice Writer. Restart the computer into Windows and install TrueCrypt in Windows. Then mount .Private.tc as some folder according to the Windows instructions and check that you can see the test document.

Back in Linux, any files that you put inside the subfolder Private of your home folder will really be placed in the container .Private.tc and be secure. But that is only true as long as .Private.tc is mounted. Each time you start up the computer into Linux, you need to remount .Private.tc on login. You can do that from a terminal with the sudo truecrypt command as described above.

But of course, it would be simpler to add the needed command as a Startup Application. Then you do not have to type it in manually. And more importantly, you will not forget to do it. But the command is interactive, since it needs passwords, and that does not seem to work from Startup Applications. Therefore I would recommend that you follow the instructions of another section to set up login warnings. After following the instructions there, in a terminal do

gedit bin/startup    (or: nano bin/startup  or: pico bin/startup)
Find the line:
# then add conditional goto do_startup2 commands below this line:
and immediately behind it put the lines
cd
truecrypt -t -l Private >& /dev/null
if ($status) goto do_startup2
Save startup and exit. Also do
gedit bin/startup2    (or: nano bin/startup2  or: pico bin/startup2)
Find the line:
# Interactive commands to be executed:
and immediately behind it put the lines
cd
truecrypt -t -l Private >& /dev/null
if ($status) sudo truecrypt /LOCATION/.Private.tc Private
Here /LOCATION/ is as before, or blank if you left .Private.tc in your home folder. Save and exit.

After this, when you log in and Private is not mounted, there will be a "Run Me" icon on the desktop that you can simply double-click to mount the folder. The icon will then disappear.

The only proper way to back up the files in Private is separately using the backfol script. Set variable fsroot in the backfol script equal to /home/USERNAME/Private. Leave nam as it is, do not blank it out. TrueCrypt volumes remain readable after logout, so that is not a concern. Make sure Private is mounted when you run backfol init.
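For example, assuming backfol lives in your bin folder as elsewhere on this page, the first-time initialization and a subsequent backup would be run as:

/home/USERNAME/bin/backfol init
/home/USERNAME/bin/backfol auto
Run init only once, with Private mounted; after that, auto does the actual backups.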

Of course, you will also use the backup or backhome script to back up the rest of your home folder. You will need to modify these so that they do not back up .Private.tc. The container .Private.tc might be as big as 1000 MB, but is then probably almost completely empty. You do not want to back up all that unused empty space. Moreover, if you make a tiny change to a single file in the Private folder, an incremental backup of .Private.tc would be forced to back up the entire 1000 MB container again from scratch. To avoid this, for the backhome script, add a line

   --prune Private --exclude .Private.tc \
in an appropriate place inside the backcmd alias, like immediately before the line with --alter=no-case. To do the same for the backup script, make that line
   --prune '"home/USERNAME/Private"' --exclude .Private.tc \
where USERNAME is your username.
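Just to illustrate where these options end up, here is a much simplified sketch of the sort of dar command the backhome script builds. This is not the actual script; the real backcmd alias has more options, and the archive name is a placeholder:

dar_static -c '/USB_LOC/backhome/1/MMDDYY_NNNNNN0' \
   --fs-root /home/USERNAME \
   --prune Private --exclude .Private.tc \
   --noconf
The --prune keeps the mounted folder Private out of the archive, while --exclude skips the big container file .Private.tc itself.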

Folder Private or any files in it will always need to be restored from the backfol script backups. Make sure that Private is mounted before doing so. In a terminal do

truecrypt -t -l
If Private is not mounted, do so as described above. (If .Private.tc got lost completely, first recreate a new empty .Private.tc and Private as described above.) After Private is mounted, you can follow the appropriate instructions in the appropriate section and in the readme.txt file in the backfol backup folder. In fact, it is a very good idea to edit file readme.txt as soon as you have done backfol init, and add a warning that Private must be mounted at restoration time.

For a multiuser machine, the ecryptfs approaches of the previous subsections seem much more appropriate. Though I am sure you could set something up if you are determined.

Securing files with passwords

As noted, various programs will ask you for your password, carefully concealing it when you type it in. Then they will take that same password and put it in files that are plainly readable for anyone with physical access to your computer. Like the thief who steals it.

What to do? Well, you can secure these files by moving them into your encrypted Private folder. (If your entire home folder is encrypted, anything in your home folder is already secured. But files outside your home folder are not unless the entire system disk is encrypted.)

Check the documentation of any program on your computer (not the Internet) that has asked you for your password. You might also be able to find some of these passwords using searches like

find "$HOME" -type f -exec grep -F -H -m 1 'PASSWORD' '{}' ';'
sudo find /var -type f -exec grep -F -H -m 1 'PASSWORD' '{}' ';'
sudo find /usr/var -type f -exec grep -F -H -m 1 'PASSWORD' '{}' ';'
sudo find /etc -type f -exec grep -F -H -m 1 'PASSWORD' '{}' ';'
(If there is a ! in PASSWORD, type it as \! and if there is a single quote, type '"'"' for the single quote. That is quote, double quote, quote, double quote, quote.) Any file with PASSWORD in it will be listed as
/LOCATION/NAME:....
where NAME is the part after the final slash before the first colon. To secure this file if it is in your home folder and your home folder is not encrypted, but Private is, do
mv -i "/LOCATION/NAME" "$HOME/Private/SOMENAME"
ln -s "$HOME/Private/SOMENAME" "/LOCATION/NAME" 
where you can take SOMENAME to be the same as NAME if there is no conflict. Otherwise add a 2 or so.

To secure such a file if it is not in your home folder, do

sudo mv -i "/LOCATION/NAME" "$HOME/Private/SOMENAME"
sudo ln -s "$HOME/Private/SOMENAME" "/LOCATION/NAME" 
If you do not yet have a folder Private, (because your entire home folder is encrypted), first do
mkdir $HOME/Private
chmod u=rwx,go= $HOME/Private

Incidentally, if the find commands above find files where PASSWORD is not really your password, then there is a very serious problem with your password.

If you use wvdial, you will find your password in file .wvdialrc in your HOME folder, as well as in system files /etc/ppp/pap-secrets and /etc/ppp/chap-secrets. That is in case a thief might have forgotten one of the file names, and would have to do one of these slow and inconvenient searches above for the string "password=".

Another security risk is that mlocate (or an equivalent like locate or slocate) keeps a list of all file names on your system in readable form. This is a potential security hazard because you will be unable to deny the existence of files in encrypted folders. Some countries, like the USA, might use strong-arm techniques to force you to provide the password. The file with the filenames is /var/lib/mlocate/mlocate.db. I guess you could move this file also into your Private folder. But I do not see a reason to have mlocate around in the first place. So I uninstalled it. I never use the locate command and I dislike having some program slow down my system by constantly searching my disk to update 10+ MB worth of file names. Windows does something similar too; I always disabled it when I still used Windows fairly regularly. In the rare (nonexistent) cases that I do not have at least a general idea where a file is, I allow the system to take its time doing a needed search of every nook and cranny in the computer at that time. But I do not put the results in a file for any unauthorized person to data-mine.
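If you decide to do the same, uninstalling mlocate on Debian or Ubuntu would be something like the following. The second command just cleans up the database file in case it is left behind:

sudo apt-get remove mlocate
sudo rm -f /var/lib/mlocate/mlocate.db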

Securing the swap disk

Encrypting your home folder or a folder Private does not necessarily ensure that a thief cannot read the files in those folders. Readable copies of the documents may end up on the so-called swap disk. Maybe this is a more uncertain thing for a thief, and more inconvenient to find, but it is a liability.

Fortunately there is a simple solution. (One that actually works for me without any problems.) Having already installed ecryptfs-utils and cryptsetup, just run

sudo ecryptfs-setup-swap
Note that after this, hibernate to disk will no longer work on your laptop. Suspend will still work, but if you do not use the laptop for long times, you will need to shut it down completely. (That is probably safer anyway.)
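To verify that the swap is indeed encrypted after the next restart, you can list the active swap areas. The swap device should then show up as an encrypted mapping, typically with a name like /dev/mapper/cryptswap1 (the exact name may differ):

swapon -s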

If, with TrueCrypt, you also look at the documents from Windows, you also need to encrypt the Windows swap (or paging) file. See the TrueCrypt documentation for that.

Terminal typing shortcuts and help

When using the terminal, you can often cut down on typing and typos by pressing the Tab key once or twice. This will try to complete the command or filename you are typing in. If the completion is ambiguous, it will show you the possibilities. (In old versions of tcsh, you need to press Ctrl+d to see the possible completions, which is a pain.) It is a great help with those long filenames.

The cursor keys will work, but the mouse does not. However, you can still select a piece of text with the mouse and then select Copy from the Edit menu. If you then select Paste, the copied text will be inserted at the cursor, (not where you click), just like if you typed it in. If you copy a UUID that way, make sure you get all of it. Insert it at the cursor in say /etc/fstab using Paste from the Edit menu.

To abort something that you are doing in the terminal, usually you can press c (think cancel) while holding down Ctrl.

If you want help with a command, try prefixing man to the command, like say

man dar_static
Use the Return or Space key to scroll. Often, the command itself may also produce brief instructions, like
dar_static -h | more
Some programs use --help instead of -h.

Using 'ls' (without the quotes) shows the nonhidden files in the current folder, 'ls -a' shows all files, including the hidden ones, and 'ls -al' shows all files including additional information such as which user owns the file and what permission (read, write, execute) that user, that user's group, and other users have for the file.

To view the contents of a file, assuming it is a readable file, do 'more FILE' where FILE is the name or location and name of the file. Press 'h' for help in more. To search the current folder and its subfolders for a file named FILENAME, use 'find . -name "FILENAME"'. To search the entire computer, use 'sudo find / -name "FILENAME"', but that will take a long time. If you do not know the exact name, learn about wildcards using 'man -S7 glob'.

Also of interest, 'whoami', or 'id', show you what user the system thinks you are, 'ps -u USERNAME' shows what processes user USERNAME has running, 'pwd' shows the current location, 'echo $HOME' shows the home folder location, 'cd' changes location, 'exit' exits the terminal or ends being the root user after su, 'du -k' shows the disk storage being used by the files in the current folder, 'df' shows the amount of space on the disks, 'sudo mount -l' shows mounted devices.

More advanced commands: 'chmod' changes protection of a file or folder and 'chown' changes its owner, 'ln -s' makes logical links, 'umount' unmounts disks, and 'mount' mounts them. Be careful with these commands.

Resetting your password

If you forgot your password, normally you do not want to use backups to recover. One possible exception that I can imagine is if your entire system is encrypted, and you do not remember the password(s) even after waiting a few days, and cannot guess it with some trial or error, and your backups are not encrypted or you do remember the password for those. In that special case you might indeed be forced to use the instructions on restoring the system disk.

In most cases a password can simply be reset. To do so, restart the computer and from the boot menu select Ubuntu in recovery mode. Go into the root shell mode. That puts you in a simple terminal. Refresh your memory about your username by entering

ls /home
Your username USERNAME will be listed somewhere. Then enter
passwd USERNAME
Type in what you want your new password to be, twice (it will be invisible), then
exit
and restart the computer. That is it.

If there is a password on root and you do not remember that either, you will need to reset that password using a live disk. In a subsection of the final section there is info on using live disks. It explains how you can mount the hard disk inside the computer under some name /SYSTEM_DISK. And how to become root in a terminal. Having done so, in the terminal

cp -i /SYSTEM_DISK/etc/shadow /SYSTEM_DISK/etc/shadow.save
gedit -s /SYSTEM_DISK/etc/shadow   (or use nano or pico instead of gedit -s)
Find the line starting with root: and take out the stuff between the first and second colon in the line. That will make the password of root blank. Restart the computer into root shell mode, and make absolutely sure to immediately set a secure new root password with the 'passwd' command before doing your own. (Some old unix systems might use file passwd instead of shadow.)
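For illustration, the root line might look something like

root:$6$Abc123$VeryLongEncryptedHashHere:15432:0:99999:7:::
and after taking out the stuff between the first and second colon it becomes
root::15432:0:99999:7:::
The hash and numbers on your system will of course differ.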

Note that a bad person with physical access to your computer, like the person who steals your laptop, can do all these things too.

The above procedures may however run into problems when encryption is used. In that case you may really need to restore relevant parts from backup, assuming that your backups are not encrypted or you do remember the password for those.

Note that I would always try to keep the current encrypted files even if the password is lost. I would move (mv -i) /home/.ecryptfs/USERNAME or /home/USERNAME/.Private to somewhere else. A copy of the password may still be found. And even if it is not, there is information in the number of encrypted files, their sizes, and, if you used the --no-fnek option, also in their file names.

If you have an encrypted home folder, the old password is needed to decrypt the home folder. All files will be unusable unless you can recover the password somehow. Similar considerations apply if not your entire home folder, but just a folder Private, is encrypted. Then the files in Private will be unusable.

One possible exception to this exists. When you set up the encrypted folder, you chose or received a folder password FOLDERPASSWORD and were told to write it down and put it in a safe place. If you still have this FOLDERPASSWORD, it is enough to recover the Private folder. To do so, log in with your NEWLOGINPASSWORD. In a terminal

cd
cd .ecryptfs
mv -i wrapped-passphrase wrapped-passphrase.old
printf "%s\n%s" 'FOLDERPASSWORD' 'NEWLOGINPASSWORD' | \
   ecryptfs-wrap-passphrase wrapped-passphrase
Note: type any ! as \! and any ' as '"'"' in the passwords. That is quote, double quote, quote, double quote, quote. Logout and login again and the folder should be recovered. Or just double click the link inside folder Private, enter your new password in the terminal that opens, normally, and then refresh the Private folder.
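If you want to double-check that the new wrapped-passphrase really unwraps with your new login password, you could try the following. It should print FOLDERPASSWORD, in plain text on the screen, so make sure nobody is watching:

ecryptfs-unwrap-passphrase wrapped-passphrase
(type NEWLOGINPASSWORD when prompted for the passphrase)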

Note also that without your old password, your keyring, if any, will no longer work; you need to delete it and reset all passwords in it. Much better not to forget your password in the first place...

Fixing boot problems

Sometimes the boot, (i.e. computer start up), procedures can become messed up. Typically that happens when you do some disk partitioning. Or when the files in /boot become messed up. Or /etc/fstab. In Ubuntu, the computer will then no longer start up but may just sit there with a "grub>" prompt. Or it may not even get that far. Restoring backups may not be the right answer here.

If your computer does not start up from the hard disk inside the computer, you will need to start it up from some sort of "live disk". A subsection of a later section has info on how to set your computer so that it can boot from a live disk. If you did not yet do so, that is always the first thing to do.

Next, if you clobbered disk partitions while partitioning the disk, or running some disk defragmentation utility or something like that, you may want to try a disk utility like TestDisk to get it back. In that case it is important not to keep messing around with the disk after the partitions have been clobbered. Doing more partitioning or trying to restore backups may destroy the information that could be used to recover the partitions. Run the utility immediately after the problem arises. There is quite extensive documentation with multiple examples on the TestDisk web page.

Note that restoring backups does not fix partitions. In fact, if the partitions are messed up, you will first need to restore them or create new ones before you can restore backups. If in doubt, there is a subsection of a later section that explains how you can take a closer look at your hard disk using a live disk.

Note also that partition managers like gparted have data recovery capabilities. If you have important files that are not in your backups, you should try to recover them before restoring backups. Restoring backups will overwrite what is currently on the disk.

If the problem is that files needed during the startup have become corrupted or deleted by mistake, it may also prevent the computer from starting up normally. But in that case the partitions should be fine. So restoring the appropriate up-to-date backups should fix things. Ubuntu and Debian use the "grub boot loader", which uses files in folders /boot and /etc. There are other boot loaders like lilo; see their documentation on the web. Any boot will need a correct file /etc/fstab. To restore the boot files requires backups made with the backup script. Scripts like backhome or backfol do not back up system folders like /boot or /etc.

Since the system will not start up, to restore the backup script backups you need to use a live disk following the procedures in the section on restoring the system disk. However, in the subsection where the backups are restored, where it says to move * into saveorg, move just boot and/or etc instead of *. And in the dar_static commands where you restore the backups, append

 -g boot
and/or
 -g etc
with the leading space as shown, at the end of the lines. Do not forget these -g parts or you may create a mess. Try restarting your computer (the live disk should eject itself during the restart) to see whether it starts normally.
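For reference, the restore command for the full backup might then look something like the line below, with the incremental backups handled the same way but with --no-warn added, and with the --key option appended if the backups are encrypted. This is a simplified sketch; the exact commands are in the section on restoring the system disk:

dar_static -x '/USB_LOC/backup/N/MMDDYY_NNNNNN0' --verbose -g boot -g etc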

If you do not have backups made with the backup script, you cannot do that. But there may be ways to recreate the boot files. The easiest solution may be to use a helper live disk like Ubuntu's Boot Repair or Super Grub Disk. Unfortunately, this author does not have experience with them. See also the Ubuntu community documentation on grub.

Sometimes however, all that it takes to set things right is to rebuild grub. If you know that the boot files have become corrupted while the computer is still started up, you could try to rebuild grub as

sudo update-grub
sudo grub-install /dev/sda   (assuming /dev/sda is your hard disk)
sudo grub-install --recheck /dev/sda
maybe after reinstalling package grub-pc.

But of course, normally you only notice something is wrong when the computer no longer wants to boot up. Then you need to run grub-install from a live disk. Instructions can be found in the section on restoring the entire system disk. Follow the subsection that explains how to start from a live disk and mount the normal system disk as /SYSTEM_DISK. Skip the subsections on finding and restoring backups, but do follow the subsection on making the disk bootable. If you want to try this, your live disk should have the appropriate version of grub-install. At the time of writing, Feb 2012, the Ubuntu 11.10 installation/live disk has grub-install. (If your live disk does not have grub, or not the right version, the chroot method discussed in the subsection may allow you to run grub-install from the system on the hard disk.) Note that these instructions do not account for RAID disks or systems that are completely encrypted. For those, have a look at the Ubuntu community documentation on grub.
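A rough sketch of that chroot method, assuming the system partition is mounted at /SYSTEM_DISK and the hard disk is /dev/sda (run as root; details vary, and the subsection mentioned above takes precedence over this sketch):

mount --bind /dev /SYSTEM_DISK/dev
mount --bind /proc /SYSTEM_DISK/proc
mount --bind /sys /SYSTEM_DISK/sys
chroot /SYSTEM_DISK
update-grub
grub-install /dev/sda
exit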

Restoring folders or files

This section explains how you can restore individual files or entire folders from the backups you made. The instructions assume that the computer system itself is operational. See the next section if your system is no longer operational. Open a terminal. There is none in the menus on the Ubuntu 11.10 Unity, so press t while holding down Ctrl and Alt. Or browse down Computer / File System / usr / bin and double-click gnome-terminal or x-terminal-emulator or xterm. (To abort something you are doing in the terminal, try pressing c while holding Ctrl.) In the terminal do

whoami               (will list your USERNAME)
su                   (may need to do 'sudo passwd root' first)
whoami               (must say root)
tcsh                 (provides a friendlier user environment)

In a graphical environment, find the USB disk with the backups to restore. (If you are not in a graphical environment, or cannot find the disk for some reason, there are some hints on finding the disk in a subsection of the next section.) Then find the folder backXX (i.e. backup, backhome, backfol, or backapt) on this disk that contains the backups. Right click the folder and select properties. The value of the location of backXX will be called /USB_LOC from now on. Write down what it is. It should start with a /; "On the desktop" does not qualify. If you do not have a graphical environment, the subsection mentioned above shows how you can use the find command in the terminal to find /USB_LOC.

Next check the presence of the backups. The most recent backups should be in a subfolder 1 of backXX. If you want to restore an older backup than can be found in 1 for some reason, check subfolders 2, 3, ... for whatever date you want. For example, you might need to restore a file that has been missing for quite some time. You should only restore backups from a single subfolder. In the various commands below, the chosen subfolder will be indicated as /N/ in which N is the number of the subfolder. In particular /N/ is /1/ if you want the latest backup. Write down what /N/ is.

Note that the backup file names start with the date MMDDYY (month day year) that they were made. If more than one backup was made on the same day, you can check the times they were made in the terminal as

ls -l /USB_LOC/backXX/N/
or if needed as
stat -c %y /USB_LOC/backXX/N/MMDDYY_NNNNNNN.log

Check the .log files in subfolder N to see whether there were any problems during the backups. Ignore grumblings about .gvfs. Also, in any such .log file, reading down the first line, you will find a part

... /USB_LOC/backXX/1/MMDDYY_NNNNNNN --fs-root /FSROOT --noconf ...
Write down what /FSROOT is. This is the folder which has been backed up, with its subfolders. You need that information below.

In folder backXX, there is also a text document called readme.txt with restoration instructions. These instructions may be more specific than the ones here. However, they were made during backXX init, and something might have changed since then. For example, if you moved the backups to a different USB disk, the value of /USB_LOC will have changed from what it was when the readme.txt file was created. In that case, go with the instructions on this web page.

In a nongraphical environment, things like the ones above can be done using commands like

ls /USB_LOC/backXX/
ls /USB_LOC/backXX/N/
more /USB_LOC/backXX/N/MMDDYY_NNNNNNN.log
more /USB_LOC/backXX/readme.txt
with /USB_LOC, backXX, and /N/ as found above, and MMDDYY_NNNNNNN as listed by the preceding ls command.

Now you will want to list at least the files in the full backup, as a sanity check that you can access the backups and that the expected files are there. Do so as

dar_static -l '/USB_LOC/backXX/N/MMDDYY_NNNNNN0'
(That is -l, not -1). While typing those long filenames, try pressing the Tab key once or twice; it might partially complete the filename for you. The quotes are only needed if USB_LOC contains spaces. Note that the full backup file name is typed out up to, but not including, the dot following the 0. If you encrypted the backups, you need to add
 --key ':PASSWORD'
with the shown leading space, at the end of the dar_static line. In case PASSWORD contains exclamation marks, put a backslash in front of each, so type ! as \! within PASSWORD. Similarly type a single quote ' as '"'"' within PASSWORD. That is quote, double quote, quote, double quote, quote.

The dar_static command above will probably list a lot of files. If you want to check for the presence of a specific file, say a Microsoft Word file named Resume.doc, do so as

dar_static -l '/USB_LOC/backXX/N/MMDDYY_NNNNNN0' | grep -F 'Resume.doc'
If you do not remember upper or lower case, change -F into -iF. You can also specify only part of the name. If you want to check what files are in an incremental backup set, use
dar_static -l '/USB_LOC/backXX/N/MMDDYY_NNNNNNN' -as
For an incremental backup, the final N is a digit greater than 0. Note that the next 'digit' after 9 is A, then B, etcetera, possibly all the way to Z. (But never further than that.) You will see that incremental backups contain many fewer files than full ones. That is because files that are unchanged since the last backup are not included.

Next, things will depend on whether you want to restore the entire folder /FSROOT (because it has been completely lost or irreparably messed up) or just restore one or more individual files or individual subfolders. In the former case, proceed with the next subsection, in the latter case proceed with the subsection after that.

Restoring the entire folder /FSROOT

If the folder /FSROOT has disappeared completely, you will first need to create an empty one. To see whether it is gone, try to create a new one. In the terminal do
mkdir '/FSROOT'
If it complains
mkdir: cannot create directory `/FSROOT/': File exists
folder /FSROOT still exists. Skip forward to the part where you move to that folder with cd.

If it complains

mkdir: cannot create directory `/FSROOT/': No such file or directory
then the path /FSROOT consists of multiple pieces and you need to create the parent folder first. For example, if /FSROOT was /home/USERNAME, then the /home folder is gone too, and you have to mkdir '/home' first, then try again mkdir '/FSROOT'.

After mkdir succeeds, you will need to set the proper owner and protection. The correct commands to do so can be found in file oldfsroot.txt in the backXX backup folder (along with readme.txt):

more /USB_LOC/backXX/oldfsroot.txt
But if that file got lost for some reason, in a second terminal, without using su, do
id
It will produce something like
uid=UID(USERNAME) gid=GID(GROUP) ...
Then in the first terminal enter
chown UID:GID '/FSROOT'
chmod u=rwx,go=rx '/FSROOT'
making the appropriate substitutions. You can kill the second terminal if any. What is behind go= can vary, but the above is the default for your home folder on Ubuntu. The r allows anyone to see what files you have, which you might not want on a multi-user computer. But on a secure single-user computer it may be OK. The author prefers u=rwx,go=x anyway.
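As a concrete example, if the id command printed uid=1000(jane) gid=1000(jane), with jane a hypothetical username, the two commands would be:

chown 1000:1000 '/FSROOT'
chmod u=rwx,go=rx '/FSROOT'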

(In case you also had to create the /home folder above, do 'chmod u=rwx,go=rx /home'. The home folder should be owned by root, so that is OK as is.)

You should now have a folder /FSROOT. In the terminal, move to that folder:

cd '/FSROOT'
pwd         (MUST say /FSROOT)
ls
The second command must show that your location is indeed /FSROOT. The third will list the files that are still left in the folder, if any.

If there are still files left in the folder /FSROOT, you need to move them out of the way so that you do not lose them. (You might find you still want some.) Do this as

mkdir saveorg
mv -i * saveorg/
ls
ls saveorg
The second command will complain. But the third should show that all files are out of the way, and the fourth that they are safely in saveorg.

Now restore the backed up files as

dar_static -x '/USB_LOC/backXX/N/MMDDYY_NNNNNN0' --verbose
dar_static -x '/USB_LOC/backXX/N/MMDDYY_NNNNNN1' --verbose --no-warn
dar_static -x '/USB_LOC/backXX/N/MMDDYY_NNNNNN2' --verbose --no-warn
and so on until there are no more incremental backups, or until you have reached the incremental backup you want if not the latest. Note the increasing final digit. Add the password again if encrypted.

You are done. Note that saveorg might conceivably contain versions of files more recent than the backup. If so, you may want to get them out of saveorg and into their original folder. (Don't get confused, the same folder names are inside saveorg as outside it. Open a left-hand folder window in which you go into saveorg. Open a right-hand folder window in which you do not go into saveorg. Then move the files from left to right.)

When sure that all is well, get rid of saveorg,

sudo rm -r '/FSROOT/saveorg'   (MUST use quotes)
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

Restoring individual files or folders

If you want to get an individual file or folder out of the backup set, or a few of them, you need to be able to specify which one. On computers, complete file and folder specifications take the form /LOCATION/NAME, where NAME is the name of the file or folder and /LOCATION its complete location.

Consider a simple example. Suppose your resume is a Microsoft Word file Resume.doc in your Documents folder. Then if you right-click Resume.doc in Ubuntu and select properties, it will list the NAME as Resume.doc and the /LOCATION as /home/USERNAME/Documents. So /LOCATION/NAME is /home/USERNAME/Documents/Resume.doc.

So far so good. But if Resume.doc is messed up and you want to get the last good version from your backups, you need to specify a LOCATION/NAME relative to /FSROOT. More specifically, you must omit /FSROOT/ from /LOCATION/NAME, where /FSROOT is what you wrote down earlier. For the example /home/USERNAME/Documents/Resume.doc, if you use the backhome script backups, /FSROOT will be /home/USERNAME, and the relative LOCATION/NAME will therefore be Documents/Resume.doc. If you used the backup script however, /FSROOT is just /, and you get your resume out of backup as home/USERNAME/Documents/Resume.doc.

Note in particular that a relative LOCATION/NAME never starts with a /.

(If you forget the exact filename, you could get the entire Documents folder out of backup. In that case, LOCATION/NAME would be Documents for the backhome script, and home/USERNAME/Documents for the backup one. Note that if you wanted to get your entire home folder for the backhome script, LOCATION/NAME would be blank.)

Now it is time to get to it. In the terminal, move to the /FSROOT folder

cd '/FSROOT'
pwd           (MUST say /FSROOT)

Make a temporary folder to extract the backups in without clobbering anything that is already there:

mkdir temporary_folder
cd temporary_folder
pwd                    (MUST say /FSROOT/temporary_folder)
The final command should produce /FSROOT/temporary_folder or there is something wrong that needs to be corrected right now.

Now get the desired file or folder out of backup:

chmod a+rx .             (note the final point)
dar_static -x '/USB_LOC/backXX/1/MMDDYY_NNNNNN0' -g 'LOCATION/NAME' --verbose
dar_static -x '/USB_LOC/backXX/1/MMDDYY_NNNNNN1' -g 'LOCATION/NAME' \
   --verbose --no-warn
dar_static -x '/USB_LOC/backXX/1/MMDDYY_NNNNNN2' -g 'LOCATION/NAME' \
   --verbose --no-warn
and so on until there are no more incremental backups, or until you have reached the incremental backup you want if not the latest. In particular if you want to restore a deleted file, you must stop before the backup in which the file is deleted. Note the increasing final digit. Add the password again if encrypted. Also, if you are retrieving all of /FSROOT, LOCATION/NAME will be empty; in that special case you must leave out the complete
 -g 'LOCATION/NAME'

Finally, you must move the file or folder you want out of temporary_folder and into its original location. Do that as

mv -i 'LOCATION/NAME' '../LOCATION'
This command might say that the file or folder already exists. In that case, answer n to the question whether to overwrite the original and do
mv -i 'LOCATION/NAME' '../LOCATION/NAME_restored'
Then afterwards you may want to use your graphical environment to figure out which of the two versions you want to keep or delete. If you were not sure about LOCATION/NAME before and restored more than you wanted, this is the time to find out the correct LOCATION/NAME. Pressing the Tab key while typing the first LOCATION/NAME in the mv command will be a great help here; it will show the possibilities. You could also open a graphical folder window in which you enter the temporary_folder inside /FSROOT to see what is there. (If you want to move the file or files completely graphically, do not get confused. The same folder names exist inside temporary_folder as outside it. Put a second folder window to the right of the first one, and in that folder window, do not go into temporary_folder. Then move files from left to right.)

When sure that all is well again, in a terminal do

sudo rm -r '/FSROOT/temporary_folder'   (MUST use quotes)
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

Restoring the entire system disk

This section discusses the cases in which the entire system disk needs to be restored from backup, typically because the hard disk went physically bad or the system got messed up beyond easy repair.

If your system is OK, but only a single folder, (which might well be your entire home folder for that matter), is messed up, this section does not apply; in that case read the section on restoring individual files or folders.

Typically, a bad system exists if you are no longer able to start up the computer at all. If the only problem is that you have forgotten your password, do not use this section. Instead see the section for resetting passwords. Another case in which this section does not apply is if just the boot (i.e. computer startup) procedures got messed up. Typically that happens after a partitioning of the hard disk, or if the files in /boot have become messed up. Then if you start up the computer, it may just sit there stupidly not knowing what to do with a "grub>" prompt for Ubuntu. Or it may not even get that far. In that case, restoring backups may not be needed or even be helpful; have a look at the section on fixing boot problems. In particular, if you killed off your partitions while partitioning, you may want to try an application like TestDisk mentioned in that section first before touching your disk and making such a recovery impossible.

Otherwise, proceed with this section. The rest of this section consists of subsections. Read from the start and follow the instructions on which subsections to use or skip.

Note first however that these instructions do not allow for RAID systems or systems that are completely encrypted. For those the simplest approach might be to follow the instructions below that assume you have no backup script backups, even if you do. Then when you have a working system back, you can restore selected parts, like user home folders, following the instructions of the previous section. Presumably, if your live disk will mount your RAID disks as a RAID and you follow the Ubuntu community documentation on grub, you should be able to restore the entire system using procedures similar to the ones here. Or if you are able to mount your encrypted system partitions using your live disk, you may be able to follow the instructions here. But this author has no experience. The Ubuntu web pages may have more precise information, but it seems to be scattered around.

Putting in a new hard disk

If your hard disk did not go bad physically, skip this subsection and proceed with the next one. If you can still run Windows correctly from the hard disk, it is not physically bad. If you just messed up files, or partitioning, the disk is not physically bad. If you are not sure whether your disk is physically bad, for now skip this section. You can always come back here if it becomes obvious the disk is really bad.

A hard disk that goes physically bad typically starts behaving erratically even though you have not done anything unusual. It is likely to make weird and unusual noises. Or it may just stop responding altogether. There is often a clear reason why the disk went bad, like a very close lightning strike or an environmental failure in the room that the computer is in. (But poor power quality can also wear them down over time. If you have a UPS, this is less likely. On the other hand, if you do not have a UPS and live in Tallahassee, this is almost sure to happen.)

If your hard disk is physically bad, you will of course first need to buy a new one and put it in the computer. Do not install it in an electrostatic environment. And in any case, keep your fingers well away from the metal on connectors. On the other hand, touching the metal computer or disk case itself is a good idea to reduce electrostatic risks.

Read the instructions that come with the disk. If you have only one hard disk, it is the "master". Write down how things were configured (connectors, position, screws, etc) on the bad hard disk before taking it out and putting in the new one.

Then if your computer was a Windows one on which you installed Linux as a second operating system, you will need to reinstall Windows following the computer manufacturer's instructions. After that, continue with this discussion.

Booting from live or installation disks

What you need in order to do the recoveries described in this section, besides backups, is a "live disk," and maybe also an "installation disk." A live disk is a CD or DVD (or USB stick, for that matter) with a working Linux operating system on it. An installation disk is a CD or DVD (or USB stick) that can install a fresh version of Ubuntu (or whatever operating system you use) on a computer.

For a Debian or Ubuntu system, you really want a live disk with grub and gparted on it. For systems that use lilo as boot loader instead of grub, you want lilo on the live disk. You will also need the documentation on how to use lilo, in case the hard disk needs to be made bootable again. The present author has no knowledge about lilo, and little about grub, for that matter.

You can find plenty of live disks for free on the Internet, but you may already have one. My Ubuntu 11.10 installation CD will in fact double as a live CD. When it starts up, simply select "Try Ubuntu" instead of "Install Ubuntu". Bingo, a live disk. And it has grub and gparted.

If you need either disk, using some functional computer you can download an ".iso" image from Ubuntu or elsewhere and "burn" it to a CD or DVD. (Try right-clicking the downloaded .iso image). You can also order installation disks for a fee.

The next step is to ensure the computer can boot from live and installation disks. If a live or installation disk is in the DVD drive (or USB slot) of your computer while it is starting up, the CD or DVD (or USB stick) is supposed to become the active system disk. That means that the system disk on the hard disk inside your computer is not active. So you can make the needed changes to it.

Note that it says that the CD or DVD or USB stick is supposed to become the system disk. Most computers seem to come from the factory set so that they will not. You may already have solved that problem when you installed Ubuntu. If not, you need to start up the computer and somewhere in the beginning press some key. Typically the key is F2 or F12, and is shown on the screen shortly after power up. You will probably have to press it very quickly at that point. It should open text menus. Search through those menus for something called a boot order or startup device order. Change this boot order so that the computer first tries to start up from the DVD drive, or USB stick, before it tries the hard disk. Save and exit the menus. See the documentation of your computer, if accessible, for more.

Starting up a live disk recovery session

This subsection assumes that you have a live disk and have enabled startup from such a disk as described in the previous subsection. Now the first step is to take any unneeded USB sticks and other stuff out of the computer. These will just confuse things.

Then put the live disk in the DVD drive (or USB slot on the computer itself) and restart the computer. The live disk will then become the acting system disk. Note that starting up from a CD or DVD is a slow process. Give it time. If you see occasional action of the DVD or USB light, the process is probably still working.

When started up, the acting Linux system disk is the live disk. It will not be as fast as usual. The point is however that the normal Linux system disk, the one on the hard disk inside the computer, is now not the acting system disk. So you can make changes to it.

Now open a terminal. Try the menus, try pressing t while holding Ctrl+Alt, or browse down Computer / File System / usr / bin and double-click gnome-terminal, x-terminal-emulator, or xterm. (If there is no graphical environment, you are already in a terminal.) In the opened terminal, do

su         (may need to do sudo passwd root first, making up some password)
whoami     (MUST say root)
tcsh       (ignore it if this complains; tcsh is not really needed, just nice)
gparted &  (to have a first look at the hard disk)

The final command above should open a new window. That assumes that your live disk has gparted. If not, you can get equivalent textual information from commands like 'fdisk -l' or 'parted -l' (without the quotes.) This discussion will however assume gparted. The gparted window will probably give you a look at '/dev/sda', normally your hard disk. There is a drop-down box to see other devices, like maybe your USB disk. You can also use the menu to see the other devices.

You now want to find your hard disk among the drop-down box devices. Check the sizes of the various devices to get a clue what would be your hard disk and what your USB disk, etcetera. The size of a disk determines how expensive it is, so it will always be stated when you buy the disk or entire computer. The stated number may not be exactly what you see here, however. (Roughly speaking 1 GB = 1,000 MB = 1,000,000 kB = 1,000,000,000 B, where B stands for byte.) In the simplest case the hard disk is /dev/sda, but there is no guarantee. (And if you have a RAID system, there will be multiple hard disks working together as a unit.) Looking at the manufacturer, under View / Device Information, might also help clarify things.

The listed contents also give a clue as to which device is the hard disk. A physical hard disk is subdivided into partitions, really subdisks. They vary greatly with hardware, what is on the disk, computer manufacturer, how big the disk is, etcetera. But just to give you an idea, here is what gparted shows for my own physical hard disk, which is /dev/sda:

partition      type      size      flags
-------------------------------------------
/dev/sda1      fat16     40 MB   boot, diag  
/dev/sda2      ntfs     260 MB
/dev/sda3      ntfs      60 GB
/dev/sda4    extended
  /dev/sda5    ext4     160 GB
  /dev/sda6 linux-swap    9 GB
/dev/sda1 is the primary "active partition", which has the boot flag set. It is very small at 40 MB. It has a simple (sub)disk format called fat16. Next /dev/sda2 is a primary partition on which Dell put a backup copy of Windows. At 260 MB, it is probably about as small as it can be to hold the backup. The ntfs format is the standard (sub)disk format that Microsoft Windows uses nowadays. Next /dev/sda3 is the primary partition that holds the actual Windows "disk". All my Windows documents and all Windows programs are in partition /dev/sda3. Note how much bigger this partition is than the previous two: 60 GB is equal to 60,000 MB. It is again ntfs as expected.

The remaining space on my hard disk is taken up by an "extended" partition /dev/sda4. This extended partition is in turn subdivided in logical partitions (subpartitions) /dev/sda5 and /dev/sda6. Logical partition /dev/sda5 is my linux system (sub)disk. It has all my linux files and all linux programs. This is by far the biggest partition at 160 GB. The disk format is ext4, which is the most modern linux disk format as of Feb. 2012. Older versions of linux might use ext3 or ext2. Note that it is not unusual to move parts of the linux disk to other partitions. For example, folder /home is often made a separate partition. Fortunately, the default is to put everything together in the same partition as it is in my case above. That is by far the simplest setup.

Finally, the swap (sub)disk /dev/sda6 is scratch space for the operating system to use. (If encrypted, it will list as format unknown instead of linux-swap.)

For another, more ambiguous, clue as to what the system disk is, you can find the folder /USB_LOC/backup that holds the backups made with the backup script, if any. (How to do that is discussed in the next subsection.) Then in the terminal do

more /USB_LOC/backup/oldmount.txt
The first line might be something like
/dev/sdaN on / type extN
That shows that the system disk used to be /dev/sdaN, because the solitary slash means the system disk, at the time the backups were initialized. If nothing happened to the system disk since then, it should still be /dev/sdaN. (But replacing the disk, or partitioning it, or putting another operating system on it, might change that.)

Having identified your hard disk, the next thing is to identify what is the Linux partition on the hard disk. (Or what are the Linux partitions if more than one partition is used.) Normally speaking, Linux partitions use ext2, ext3, or ext4 format. There are other linux disk formats than that, but then you probably already know what yours is. Otherwise try 'man mount' in the terminal.
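If your live disk has the blkid command, it gives a quick textual overview of the partition formats; run it as root in the terminal:

blkid
Each partition is then listed with a TYPE, like ext4, ntfs, or swap. The exact output format varies a bit between versions.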

Of course, your disk may not yet have a linux partition. If you just put a brand new disk in your computer, and only installed Windows on it so far, if anything, there should be no linux partition. You will need to create one. Jump to the subsection that discusses how to do that, then come back here as needed.

Always remember: do not fix things that are not broke. If the problem is just that large amounts of system files got deleted by mistake, or that an installation made a mess of your system, your linux partition itself should still be perfectly fine. Do not change it. Also, if the partition got lost, you might want to attempt data recovery, as various partition managers like gparted implement, first. Consult the documentation of those programs. In gparted, just select help. See also the gparted help on recovering partition tables if somehow the partition information on the disk got wiped out. And in any case, before touching your partitions, first check the integrity of your backups, as discussed in the next subsection. Then you know where you stand. If you do need to create a new linux partition, skip to the mentioned subsection.

If you have some partition, say /dev/sdaN where N is some number, that looks like the Linux system partition, see whether gparted has a mount point listed. If it is not mounted, try mounting it yourself as

mkdir /sdaNmount
mount /dev/sdaN /sdaNmount
making the appropriate substitutions for your case. Enter 'man mount' for more on mounting.

Now that the partition is mounted, you should be able to see what is on it without gparted. In the terminal, enter

df
That should produce, among others, a line something like
/dev/sdaN  NNNNNNNNNNN NNNNNNNNNNNNNN NNNNNNNNNNNN NN% /SYSTEM_DISK
What you really want to know is the final field /SYSTEM_DISK. If you mounted the disk as above, /SYSTEM_DISK should be /sdaNmount. Now you must make sure that the found value of /SYSTEM_DISK is indeed the correct Linux system disk. Check the total and used disk area against what you know of the linux system disk. (If you have backup script backups, you can see what the corresponding numbers were at the time the backups were initialized using 'more /USB_LOC/backup/olddf.txt'.) Check what files are on it:
ls /SYSTEM_DISK   (with /SYSTEM_DISK as above)
What you must be seeing here are typical Linux system folders like bin, etc, lib, and sbin. One exception would be if you just created the partition using partitioning, in which case no files at all should be listed. The other more or less conceivable exception would be if you mistakenly deleted large amounts of system files, which requires sudo or su to do so, including these four folders.

If the partition is not the linux system one, unmount it as

umount /SYSTEM_DISK
and try a different partition or even different device. If there is ambiguity, mount all likely partitions and compare their contents. Then unmount, like in 'umount /sdaNmount', the ones that are not the correct /SYSTEM_DISK. Make sure that the correct /SYSTEM_DISK stays mounted; do a refresh using the gparted menu as a test.

In case /SYSTEM_DISK has the normal system folders mentioned above but not a folder called boot, you may have a separate boot partition. In particular, totally encrypted Debian and Ubuntu computers will have one. (But not all flavors of unix have a boot folder. If you have backups made with the backup script, 'more /USB_LOC/backup/oldslash.txt' lists what was in / at the time the backups were initialized.) If you have a separate boot partition, you need to figure out what partition it is. It will not be very big. Try mounting a few likely candidates and see what is on there. For Debian or Ubuntu, you expect to see a subfolder grub in such a partition, and System.map..., memtest..., vm... etcetera files. When you have found the partition, you need to umount it if mounted, and then remount it as

mkdir /SYSTEM_DISK/boot
mount /dev/sdaM /SYSTEM_DISK/boot
where M is the partition number of the boot partition.

If you have a separate home partition (which contains the user home folders, including your own if you do not log in to root), mount it similarly; an example is given just below. (Note that some linux systems do not put user files in a home folder. So the absence of home in /SYSTEM_DISK does not necessarily mean that there is a separate home partition for these systems. If you have backups made with the backup script, use again 'more /USB_LOC/backup/oldslash.txt' to see what was in / at the time the backups were initialized.) Some servers also use a separate var partition; if so, mount it too. Do not worry about tmp, just do

mkdir /SYSTEM_DISK/tmp
if it is not there already.
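As an illustration of mounting such an additional partition, a separate home partition, hypothetically /dev/sdaK, would be mounted much like the boot one above:

mkdir /SYSTEM_DISK/home     (only if it does not exist yet)
mount /dev/sdaK /SYSTEM_DISK/home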

If the system disk looks OK, but it simply does not want to boot, at this stage you may want to consider trying the section on fixing boot problems before resorting to more drastic procedures. Do not fix what is not broke. Of course, if you just put in a brand new hard disk, or if you know that the system does not boot because critical system files besides boot ones got wiped out, that does not apply. On the other hand, if the partitions seem corrupt or missing, or the problem was something like partitioning or running a disk defragmentation utility, do look first at the section on fixing boot problems. The rest of this section will assume that the system disk must indeed be restored from backup to be made bootable.

In that case, if you do not have backups made with the backup script, but just ones made with the backhome or backfol scripts, there seems to be little option other than creating a new system disk as described in a later subsection. If the system will not start up normally, restoring backhome or backfol backups will not change that. The next two subsections will assume that you do have backups made with the backup script and want to restore them to the mounted /SYSTEM_DISK.

Finding the backups

To restore the backups, you need to find the USB (or other) disk with the backups. This may be complicated by the fact that a live disk operating system may behave a bit differently from your regular operating system.

Start simple. In a graphical environment, look around: the USB disk is likely an icon on the desktop. Or try the "Places" menu item. Otherwise look in Computer. Or look at your home folder; the USB disk might be in the side panel. Or try browsing down Computer / File System, and look in folder media or mnt. If you find the disk graphically, open it. Right-click a file or subfolder on the disk and select Properties. The string listed as Location is what you want; it will be indicated as /USB_DISK from now on. Write down what it is.

If the above did not work, in the terminal try

df
In the output, look for a line something like
/dev/sdXN  NNNNNNNNNNN NNNNNNNNNNNNNN NNNNNNNNNNNN NN% /USB_DISK
with the right amount of total and used space to be the USB disk. If you think you found it, write down what /USB_DISK is. (In a graphical environment, you should now be able to browse down from Computer / File System down the parts of /USB_DISK above to find it.)

If the df list above does not seem to have the USB disk, the disk may not be mounted. If you did not already in the previous subsection, open a terminal and run gparted,

su         (may need to do sudo passwd root first, making up some password)
whoami     (MUST say root)
gparted &
(If you do not have gparted, try fdisk -l or parted -l for a noninteractive listing.) In gparted you can examine the various devices, such as /dev/sda, /dev/sdb, ..., using a drop-down box or the menu. The hard disk is typically /dev/sda and the USB disk might be /dev/sdb or /dev/sdc or so. If you find what seems to be the USB disk based on, say, disk space and manufacturer, see whether a mount point is listed. If not, the problem is that the disk is not mounted. Note the device specification at the start of the line; it is normally of the form /dev/sdXN, where X is a letter and N is a number. Mount the disk as
mkdir /sdXNmount
mount /dev/sdXN /sdXNmount
making the appropriate substitutions for the capitals. At this stage, /sdXNmount is your tentative value for /USB_DISK. (For specially formatted disks, you might need additional mount options; if so, you need to consult the documentation of the disk. Very old live disks may not support the Windows ntfs format.)
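For instance, if the USB disk happens to be formatted as Windows ntfs and the plain mount command above does not work, then on a live disk that includes the ntfs-3g driver something along these lines may do it (just a suggestion; whether the driver is present depends on the live disk):
mount -t ntfs-3g /dev/sdXN /sdXNmount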

If the USB disk is not in gparted (or fdisk -l or parted -l), then there is a fundamental hardware issue beyond the scope of this discussion. Try unplugging the USB disk for a few seconds and then plugging it back in. And make sure it is plugged directly into the computer, not into some USB hub that the live disk may not recognize. Turn off its power for a few seconds. Try a different and/or more recent live disk. Or copy the backups to a different disk or USB stick that is recognized using a different computer.

Having a tentative value for /USB_DISK, you need to check that you got it right. In the terminal do

ls /USB_DISK
This must show the files and subfolders in the top folder of the USB disk. If not, return to "Start simple" above and try again. Next in the terminal do
find /USB_DISK -type d -name backXX
where backXX is backup, or more generally the name of the folder with the backups. The above command should come back with a line of the form
/USB_LOC/backXX
Here /USB_LOC is the same as /USB_DISK unless the folder with backups was put in a subfolder on the disk; in that case the name of that subfolder will be appended behind a /. Whatever it is, write down what /USB_LOC is. (If you get two different backup folders on the find command, figure out which is the right one; 'ls /USB_LOC/backXX' will list their contents. If you do not get anything, the backups are not on the USB disk you found or they are for some reason in a folder of a different name than you think. Use the graphical environment or the ls command to find them. Or change backXX in the find command into 'back*', with the quotes, to search for all folders with names that start with back. Or leave the entire -name backXX out to see all folders on the disk.)

Checking and restoring the backups

At this stage, you should check the presence of the backups. The most recent backups should be in a subfolder 1 of folder backup (or whatever the folder name backXX was, but this discussion will for simplicity assume the default name backup). If for some reason you want to restore an older backup than the ones in 1, check subfolders 2, 3, ... for whatever date you want. For example, you might need to restore an older backup to get one from a time before a bad installation or virus infection happened. You should only restore backups from a single subfolder.

The discussion here will assume the backups to be restored are in subfolder 1. If that is not correct, everywhere below replace /1/ by /2/ or /3/ or ... according to the desired folder. For example, to list the files in the subfolder 1, in the terminal use

ls /USB_LOC/backup/1/
To list the files in subfolder 3 instead, you would replace the final /1/ by /3/.

Note from the ls command above that the backup file names start with the date MMDDYY (month day year) on which they were made. If more than one backup was made on the same day, you can check the times they were made in the terminal as

ls -l /USB_LOC/backup/1/
or if needed as
stat -c %y /USB_LOC/backup/1/MMDDYY_NNNNNNN.log
where MMDDYY_NNNNNNN is a file name as listed by the earlier ls command.

Check the .log files in the subfolder to see whether there were any problems during the backups. Ignore grumblings about .gvfs. In the terminal, you can do this as

more /USB_LOC/backup/1/MMDDYY_NNNNNNN.log
You should also examine the readme.txt file in the backup folder,
more /USB_LOC/backup/readme.txt
This file has brief recovery instructions. They may not be sufficient and may be outdated, but they may sometimes also be more specific than the ones here. There are some more .txt files in the backup folder that might also come in handy under some conditions. You can list the folder contents as
ls /USB_LOC/backup/

At this stage, having figured out /USB_LOC and /SYSTEM_DISK, in the terminal do, making the appropriate substitutions,

cd '/SYSTEM_DISK'
pwd   (MUST say /SYSTEM_DISK, *not* /)
ls    (MUST show bin, etc, lib, sbin, ... unless you partitioned it yourself)
cp '/USB_LOC/backup/dar_static' .     (note the final point)
chmod a+rx ./dar_static               (note the point before the /)
The quotes are only needed if SYSTEM_DISK or USB_LOC contain spaces.

Now you will want to list at least the files in the full backup, as a sanity check that you can access the backups and that the expected files are there. Do so as

./dar_static -l '/USB_LOC/backup/1/MMDDYY_NNNNNN0'
(That is -l, not -1). While typing those long filenames, try pressing the Tab key once or twice; it might partially complete the filename for you. The quotes are only needed if USB_LOC contains spaces. Note that the full backup file name is typed out to, but not including, the point following the 0. If you encrypted the backups, you need to add
 --key ':PASSWORD'
with the shown leading space, at the end of the dar_static line. In case PASSWORD contains exclamation marks, put a backslash in front of each, so type ! as \! within PASSWORD. Similarly type a single quote ' as '"'"' within PASSWORD. That is quote, double quote, quote, double quote, quote.
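As a purely made-up illustration: if PASSWORD were the hypothetical string ab!cd'ef, then following these rules you would add
 --key ':ab\!cd'"'"'ef'
at the end of the dar_static line.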

The dar_static command above shows the files in the full backup set that are to be restored. If that looks OK, (like a whole lot of files), it is time to start the actual restoration. Move the files listed by the ls command, if any, to a safe place, to prevent them from being overwritten by the files from the backup. Do this as

ls
mkdir saveorg
mv -i * saveorg/
mv saveorg/dar_static .    (the previous command also moved dar_static; put it back)
ls
ls saveorg
The third command will complain that it cannot move saveorg into itself; that is expected. The fifth command should show that, apart from dar_static and saveorg itself, all files are out of the way, and the sixth that they are safely in saveorg. (If you are really short on space on SYSTEM_DISK, consider moving the existing files to the USB disk by replacing saveorg above by /USB_DISK/saveorg.)

Now restore the backed up system as

./dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN0' --verbose
./dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN1' --verbose --no-warn
./dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN2' --verbose --no-warn
and so on until there are no more incremental backups in folder 1, or until you reach the backup of the desired date MMDDYY if that is not the latest. Note the increasing final digit. The 'digit' following 9 is A, then B, etcetera. Remember to change /1/ into /2/ or /3/ or ... if you wanted to restore a backup from an earlier set. And if the backups are encrypted, again add the password part at the end of each dar_static line. Then do
mkdir var/cache/apt/archives/partial

If your hard disk partitions are unchanged since the backups were made, you should be done. (That would not apply if you created a new system disk or used partitioning.) The computer is restored to the exact state it was in when the backups were made. Restart the computer. The live disk should automatically eject itself during the restart. If it does not, or it is a USB stick, take it out manually and restart again. Under the stated conditions, the computer should boot normally from the hard disk. Wait a bit until you are sure that all is well and then clean up the garbage as

sudo rm -r '/saveorg'
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

If the computer did not restart correctly, apparently the partitioning did change since the backups. Then continue with the next subsection.

Making the disk bootable

While at this point the system files are back, the system may not yet be bootable (i.e. able to start up normally) because the etc/fstab file and the files in folder boot may not be updated for the current hard disk configuration.

Note that the live disk has its own file /etc/fstab and folder /boot. Do not touch these; in the commands below there should be no leading / in the file specifications.

First in the terminal do

cd /SYSTEM_DISK             (just to be sure, you should already be there)
ls                          (must show files bin, etc, lib, sbin, ...)
mkdir saverecov
cp -ir boot etc saverecov/  (keeps copies in case of a problem)
gparted &                   (opens up a gparted window, if you did not already)
(Or use a substitute like fdisk -l in a second terminal window. This discussion will assume gparted.) In the gparted window, the system partition on the hard disk, typically /dev/sdaN, should be shown mounted as /SYSTEM_DISK and will normally be of type extN. The swap disk, typically also /dev/sdaN but with a different N, will be of type linux-swap, unless it is encrypted.

If you just created the system partition SYSTEM_DISK using an installation disk, you must now check the value of N in extN for the system partition versus the one in etc/fstab. In the terminal, do

more etc/fstab
and look for the line, not starting with #, that contains '... / extN ...'. If the value of N in this line is lower than the value indicated in the gparted window for the system disk partition, (like ext3 in etc/fstab and ext4 in gparted), you are likely in trouble. Your backed-up operating system may not support the extN system of the new system disk. (Ext3 has been in the Linux kernels, or at least some of them, since about 2001, ext4 since about 2009.) Your options are now (1) to hope for the best, (2) to recreate the system disk using an older installation disk, or (3) follow the manual partitioning procedure of the final subsection. In the latter two cases, you will then need to repeat the steps of restoring the backups. (For systems that are not Debian and Ubuntu ones, a different disk format than extN might conceivably be used. In any case, the disk format listed by gparted must be the same as in etc/fstab.)

If you used an installation disk as above and are still following that route, updating fstab may be easy. The installation should have created a correct fstab, which is now in saveorg. Just move it into place as

ls saveorg/etc/fstab   (if 'no such file or directory', skip the next 2 lines)
mv -i etc/fstab etc/fstab_restored
cp -i saveorg/etc/fstab etc/fstab
If you did partition yourself, do not do the above; you will need to stick with the fstab from backup.

Next in the terminal do

cp -i etc/fstab etc/fstab_save  (keep a copy of the original)
gedit -s etc/fstab              (or: nano etc/fstab  or: pico etc/fstab)
Whatever works. (The -s for gedit seems to be needed on the live disk.) When using gedit, grasp a corner of the window that opens and stretch the window to the full width of the screen. Using nano or pico, no new window will open, and the mouse will not work: you need to move around with the cursor keys. There will be lines in the file, probably something like,
UUID=SOMEUUID / ext4 errors=remount-ro 0 1
UUID=SOMEUUID none swap sw 0 0
(These lines cannot start with #; ignore all lines that do start with #.) They are for the system and swap disk, respectively. The two SOMEUUID specifications will have to be checked. The correct values can be found using gparted in the properties of the system and swap partitions of the hard disk. (In a terminal, blkid should also list them.) Note that sometimes UUID=SOMEUUID is replaced by something like /dev/sdaN. In that case, check that the values of N are correct. If a UUID or N is incorrect, you will need to change it. That should not happen if you got etc/fstab from an installation disk. As far as I know, at least. Be sure not to make mistakes, especially with those UUID values; you might spend hours hopelessly trying to boot the computer due to a small mistake. Note that you can select the UUID value in gparted and then right-click it to copy it. Or even drag the thing. Make sure you select all of it. If no UUID can be found, replace UUID=SOMEUUID in etc/fstab with /dev/sdaN, with N the partition number of the system and swap partitions, respectively. Keep the letter O apart from the digit 0.
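As a purely fabricated illustration of what you are matching up (the UUID values below are made up; use the ones that gparted or blkid actually reports for your partitions), blkid output might look like
/dev/sda5: UUID="2f6a1b3c-0d4e-4f5a-9b8c-7d6e5f4a3b2c" TYPE="ext4"
/dev/sda6: UUID="9e8d7c6b-5a4f-3e2d-1c0b-a9f8e7d6c5b4" TYPE="swap"
in which case the two etc/fstab lines above would have to read
UUID=2f6a1b3c-0d4e-4f5a-9b8c-7d6e5f4a3b2c / ext4 errors=remount-ro 0 1
UUID=9e8d7c6b-5a4f-3e2d-1c0b-a9f8e7d6c5b4 none swap sw 0 0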

If the extN format of the system disk in fstab is different from what gparted says, change that also. If you made changes, save fstab. Exit. In nano or pico, you press x while holding Ctrl to do so. As a sanity check, compare with the original:

diff etc/fstab etc/fstab_save
That will show the lines that differ between the changed and the original fstab. Lines starting with < are from etc/fstab (the changed file), lines starting with > are from etc/fstab_save (the saved original).
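Just as an illustration (with a made-up UUID again), if only the system disk line was changed, the diff output might look something like
2c2
< UUID=2f6a1b3c-0d4e-4f5a-9b8c-7d6e5f4a3b2c / ext4 errors=remount-ro 0 1
---
> UUID=SOMEOLDUUID / ext4 errors=remount-ro 0 1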

If you have an encrypted swap disk, also check and update /etc/crypttab in a similar way.

Now comes the final hairy part. You may need to update the boot files. This discussion will assume the grub boot loader used by Debian and Ubuntu. The first thing to note is that there are two versions, (as of Mar 2012), a "legacy" version and a current version. No, the current version is not backward compatible, what gave you that idea? The older version uses a configuration file boot/grub/menu.lst and the newer boot/grub/grub.cfg.

Even though formally you are not supposed to do this, I suggest that you now check boot/grub/grub.cfg and boot/grub/menu.lst for sanity. First see which ones you have

ls boot/grub/grub.cfg
ls boot/grub/menu.lst

If you have boot/grub/grub.cfg, edit it

gedit -s boot/grub/grub.cfg
(or: nano boot/grub/grub.cfg or: pico boot/grub/grub.cfg). There will be menu items with UUID values and device numbers in them. Check the UUID values against the correct ones from gparted. (The device numbers, like in msdosN, may seem to be wrong too, but apparently the system will boot as long as the UUIDs are right. So do not change them unless really needed.) If you need to make corrections, first exit the editor without making changes. Then do
cp -i boot/grub/grub.cfg boot/grub/grub.cfg_save
chmod u=rw boot/grub/grub.cfg
gedit -s boot/grub/grub.cfg   (or nano or pico instead of gedit -s)
The same conditions apply as when editing etc/fstab. Don't make mistakes. Cutting and pasting UUID values from gparted is recommended again. As a check, afterwards do
diff boot/grub/grub.cfg boot/grub/grub.cfg_save
That will show the lines that are different in the files.
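As a rough, made-up illustration of the kind of lines to look for inside a menu entry (exact wording differs between grub versions, and the UUID shown is fabricated):
search --no-floppy --fs-uuid --set=root 2f6a1b3c-0d4e-4f5a-9b8c-7d6e5f4a3b2c
linux /boot/vmlinuz-VERSION root=UUID=2f6a1b3c-0d4e-4f5a-9b8c-7d6e5f4a3b2c ro quiet splash
It is the UUID values in lines like these that must match the system partition.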

If you have boot/grub/menu.lst, edit it

cp -i boot/grub/menu.lst boot/grub/menu.lst_save
gedit -s boot/grub/menu.lst
(or: nano boot/grub/menu.lst or: pico boot/grub/menu.lst). There will be menu items with UUID values and device numbers in them. Check the UUID values. (The device numbers may be wrong too, but apparently the system will boot as long as the UUIDs are right. So do not change them unless really needed.) If you need to make corrections, the same conditions apply as when editing etc/fstab. Don't make mistakes. Cutting and pasting UUID values from gparted is recommended again. As a check, afterwards do
diff boot/grub/menu.lst boot/grub/menu.lst_save
That will show the lines that are different in the files.

In the spirit of trying to keep things as simple as possible, try restarting the computer now without the live disk. The live disk should eject itself during the restart. If it does not, or it is a USB stick, take out the live disk manually and restart again. If the computer starts normally from the hard disk, you are done. Wait a bit until you are sure that all is well and then clean up the garbage you created while restoring the backups as

sudo rm -r '/saveorg' '/saverecov'
Be sure to use the quotes as shown. The rm -r command can do lots of damage on typos or accidentally hitting space or Enter.

Right. If it does not boot up, restart from the live disk. Remount the system disk as before and do

cd '/SYSTEM_DISK'
pwd    (must say /SYSTEM_DISK, *not* /)
ls     (must show bin, etc, lib, sbin, ... and saveorg and saverecov)
Then do, assuming the complete main hard disk is /dev/sda as usual,
grub-install --boot-directory=/SYSTEM_DISK/boot /dev/sda
grub-install --boot-directory=/SYSTEM_DISK/boot --recheck /dev/sda
Do not add a partition number behind sda. (The --boot-directory option of grub 1.99 and later points at the boot folder itself, hence /SYSTEM_DISK/boot; older versions of grub use --root-directory=/SYSTEM_DISK instead.) (Incidentally, I have seen on a web page that the equivalent command for systems using lilo instead of grub is
lilo -r /SYSTEM_DISK
Take that for what it is worth.) Now repeat the checks on menu.lst and grub.cfg. (At this time you may have both, even if you had only one before.) Try restarting the computer again. If it boots, you are done as before.

Right. If you used an installation disk before, you may want to try partitioning manually as described in the final subsection. If it still does not boot and you cannot find errors, you may be trying to restore an old version of Linux. Try using a live disk from the time that you first installed your system with grub. Rerun grub-install, fix up boot/grub/menu.lst and/or boot/grub/grub.cfg, and try restarting. If it boots, you are done as before.

If it does not boot, you may be trying to use a new version of grub on the live disk to make an Ubuntu system disk with an old version of grub bootable. For some reason that did not work when I tried it. However, my hard disk also had a new version of Linux on it. (I wanted to keep the old version of Linux around for those "That is no longer supported" problems.) When I ran grub-install as above for the new version, both old and new versions became bootable. (At least they did after I corrected the UUID values as described above.)

Right. If that does not work, the next thing to try is to run update-grub directly from the hard disk, the same way you would do it if the computer was booted up normally instead of from a live disk. There is a method, the chroot method, that allows you to do this. (If you have a separate /boot partition, do remember to have it mounted on /SYSTEM_DISK/boot as discussed earlier.) Now in the terminal do

mount -B /dev /SYSTEM_DISK/dev
mkdir /SYSTEM_DISK/dev/pts
mount -B /dev/pts /SYSTEM_DISK/dev/pts
mount -B /proc /SYSTEM_DISK/proc
mount -B /sys /SYSTEM_DISK/sys
cd /
chroot /SYSTEM_DISK   (might need to add /usr/bin/tcsh, /bin/bash, or /bin/sh)
update-grub                      (recreates menu.lst or grub.cfg)
grub-install /dev/sda            (or whatever the system hard disk is.)
grub-install --recheck /dev/sda  (or whatever the system hard disk is.)
exit                             (get out of there)
cd /SYSTEM_DISK
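If you want to clean up the bind mounts made above before restarting (optional, since a restart gets rid of them anyway), undoing them in reverse order should do it:
umount /SYSTEM_DISK/sys
umount /SYSTEM_DISK/proc
umount /SYSTEM_DISK/dev/pts
umount /SYSTEM_DISK/dev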
Try restarting the computer now.

Some further possible ways to produce a bootable system can be found in another section.

In the worst case scenario, you would have to stick with a newly created operating system, restoring only the user files of the old system. (If the old operating system was very old, this may have become unavoidable at some time anyway.) Use gparted to get rid of the messed-up system and swap partitions. Use an installation disk to install a current Ubuntu operating system in the resulting unused space. Use the same username as before. Login to the new operating system, install dar_static and tcsh, and in a terminal do

su    (may need to do sudo passwd root first)
tcsh
cd /
ls
mkdir saveorg
mv -i home root saveorg/
ls
ls saveorg
dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN0' -g home -g root -v
dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN1' -g home -g root -v -w
dar_static -x '/USB_LOC/backup/1/MMDDYY_NNNNNN2' -g home -g root -v -w
and so on until there are no more incremental backups in folder 1. Restart the computer. That should be it, although you will have to reinstall your various packages using Ubuntu Software Center, Synaptic, or apt-get. Note that on a system that is not Ubuntu or Debian, user directories might be in a different place than home.
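For the apt-get route, reinstalling a package goes along these lines, where PACKAGENAME stands for whatever package you are missing:
sudo apt-get update
sudo apt-get install PACKAGENAME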

That closes this section. The final two subsections below are the ones referred to from earlier in this section.

Creating a new system disk 1: the installation disk approach

Read this section if you want to create a new system disk partition.

You first need to make a decision: how to do it. There are two ways: using an installation disk or using partitioning.

Using an installation disk is surely the simplest, most user-friendly way to initialize a new Linux system disk. The idea is to use a standard installation disk to install a new Ubuntu or whatever operating system. If your only experience with computers is running applications like browsers, e-mail, and Open/LibreOffice, this approach is highly recommended. And if you have no backups made with the backup script, but just ones made with the backhome or backfol script, you have no choice; you will have to follow the installation disk method. If you do have backup script backups, you can still choose this approach: the newly installed operating system can be pushed out of the way and the backup script backups put in its place.

Creating a new system disk using partitioning is much quicker. But it is definitely not for newbies. While not unusually difficult, you can do lots of damage doing something incorrectly. If you have backup script backups and you want to use partitioning, skip now to the next subsection.

Note that there are in fact some potential pitfalls in the installation disk approach if you have backup script backups. Most importantly, there can be a problem if you use a new installation disk to restore an old version of linux. In that case the installation disk might conceivably create a disk format that the old backed-up operating system cannot handle. (The so-called ext3 disk format has been in the Linux kernel, or at least some versions of it, since about 2001, and ext4 since about 2009.) The solution to that may be to dig up an older installation disk. Or follow the partitioning approach: during partitioning you can specify the disk format manually.

The above does not apply if you do not have backup script backups. In that case, if you still have your old installation disk but it is years old, you may want to get a newer one, as discussed in the second subsection. If your Linux version is no longer supported, you are going to get into trouble installing updates and packages.

A less likely problem is if you have old and new linux versions on the same hard disk. I found that the Ubuntu 11.10 installation insisted on creating a new so-called swap disk, even though there was already a perfectly good swap disk for the old system.

To create a new system disk using an installation disk is probably something you have already done before when you first installed linux on your computer. Restart the computer. The live disk should eject itself from the DVD drive. Remove it and instead put in the installation disk, if different. That should then start up the installation process. Follow the instructions on the screen, (and on the Ubuntu web site, if needed,) and what you remember of the earlier installation. Without a good reason otherwise, you may want to put "everything in one partition" when asked. Keep it simple. It will take some time to install the new system, so take a book.

After that, if you have backups made with the backup script, you will need to restart the live session. You should now have a linux system disk to play around with. The idea will be to shove the installed system on it out of the way and put the backups in their place. Return to the subsection on starting a live session and take it from there.

Without backup script backups, restart into the newly installed operating system. (The installation CD or DVD should eject itself right before the restart.) Now install package dar_static (and preferably tcsh) using Ubuntu Software Center, Synaptic, or whatever. Then follow the section on restoring individual files and folders to restore the latest backups you have. (You will follow the instructions in that section for restoring the entire /FSROOT.) After that, install the various other packages that you use. If you previously used an outdated version of Linux, hopefully the package installations will recognize any outdated versions of configuration files that you might have and take appropriate action. But there will probably be some issues where things do not work the same as they used to. Not much to be done about that. In any case, you are done with this section.

Creating a new system disk 2: the partitioning approach

Partitioning is the process of subdividing a physical hard disk into "partitions", which are essentially subdisks. Using gparted, it is not exactly rocket science. However, it is definitely not for newbies. While not really that difficult, you can do lots of damage when you do something wrong.

First some basic ideas. As noted, a system hard disk is subdivided into partitions, really subdisks. They vary with hardware, what is on the disk, computer manufacturer, how big the disk is, etcetera. Consider again the arbitrary example given earlier of my own system disk:

partition      type      size      flags
-------------------------------------------
/dev/sda1      fat16     40 MB   boot, diag
/dev/sda2      ntfs     260 MB
/dev/sda3      ntfs      60 GB
/dev/sda4    extended
  /dev/sda5    ext4     160 GB
  /dev/sda6 linux-swap    9 GB
/dev/sda1 is the primary "active partition" that has the boot flag set, /dev/sda2 is a primary partition on which Dell put a backup copy of Windows, and /dev/sda3 is the primary partition that holds the actual Windows "disk". All my Windows documents and all Windows programs are in partition /dev/sda3. These first three partitions were already on the disk I got from Dell. During installation of Linux, I made Windows disk partition /dev/sda3 smaller to create space for the extended partition /dev/sda4. Then I used that extended partition to put in the Linux "disk" and its swap disk, in logical partitions (subpartitions) /dev/sda5 and /dev/sda6 respectively. So partition /dev/sda5 has all my Linux documents and all Linux programs. The swap disk /dev/sda6 is scratch space for the operating system to use. (If encrypted, it will list as type unknown instead of linux-swap.)

Partitioning is the process of creating or changing such subdivisions. For a reasonably computer literate person, it is not a big deal to, starting with a pure Windows disk, make the Windows partition smaller, then create an extended partition in the freed-up space with subpartitions for the Linux disk and swap space. I do it whenever I install Linux on my newest Windows PC and so far never a problem.

Some general rules for your main hard disk first. There can be up to four primary partitions. Only one primary partition out of these four can be an extended one. But that extended partition can be chopped up into as many subpartitions (logical partitions) as you could reasonably need. Note further that in my example above, almost all space is used by the actual Windows system disk (third ntfs partition) and the actual Linux system disk (ext4 partition). You do not need that much boot space or swap space or whatever. (Typically, swap space is taken to be comparable to the RAM memory in the machine. Like twice as much. For a portable to be able to hibernate, the swap space must exceed RAM.) Also remember, 1 GB equals about 1000 MB, (depending on who you talk with), so a GB (gigabyte) is a lot more disk space than a MB (megabyte).
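For example, for a machine with 4 GB (roughly 4000 MB) of RAM, the "twice as much" rule of thumb would suggest a swap partition of roughly 8 GB, while hibernation on a portable would want the swap to be at least somewhat more than the 4 GB of RAM.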

The discussion here will assume that the partitioning starts with a disk with Windows already on it. The intent is now to shrink the Windows partition. That will create empty, unallocated, space. In that space, an extended partition can then be created with a Linux disk and a swap disk similar to the example above. This is presumably a fairly common case. For other conditions significant modifications to the instructions below may have to be made. There are many good tutorials on partitioning on the web; this document will not even try to cover the general case. The discussion will assume that gparted is used. You could do similar things with fdisk or parted, but it is more awkward.

Following the instructions at the start of the earlier section, you should already be started up and have gparted running from an su terminal. In gparted always remember not to create a new partition table; that would erase the entire disk and Windows would be gone. (If you do by mistake, immediately consult the section on fixing boot problems.) Note that inside gparted you can access additional help information.

Gparted will probably show the hard disk /dev/sda; otherwise use the drop-down box. To check that /dev/sda is the main hard disk on your system, check the size of the disk and its manufacturer. As already noted, the size will be listed in the specifications of the computer or disk as you bought it. You also expect to see a partition with the boot flag set on a disk with Windows, although that is archaic.

Also it should have a big ntfs (Windows) partition filling most of the disk. Right click that big partition. If it is mounted for some reason, select "unmount". After that is done, right-click it again and select "Resize/Move". A window shows up; use it to reduce the size of the partition. Do leave some space for new Windows files of course. You might still have to use Windows in the future for something. Leave alignment as is unless you have good reason. Trying to squeeze the last MB out of the disk is not a good reason. The freed-up space will show up as unallocated free space.

Right-click this unallocated space, select "new" and turn all of it into an extended partition. Right-click the extended partition and turn part of it into a logical, Linux-swap partition. Select the size of this partition as twice your system RAM. (To find the amount of MB of system RAM, in the terminal enter

free -m
The total amount of MB of RAM will be listed behind "Mem:".) Finally right-click the remaining free space and turn all of it into a logical Linux ext4 or ext3 or ext2 partition. (To be safe, you may not want to go to a higher ext level than the original hard disk. To find that level, consult files oldfstab.txt, oldmount.txt and oldblkid.txt in the backup folder. Using ext2 should certainly be safe, but supposedly ext3 and ext4 are slightly better. Ext3 has been in the Linux kernel, or at least some versions of it, since about 2001, ext4 since about 2009.) If all is OK, select Edit / "Apply all operations" to actually make the changes.

At this point you can return to where you left off in the restoration process.


Applies to software obtained Feb. 2012