Author: David Dougall
Tcpdump
I find myself using the tcpdump network packet analyzer more and more as time goes on. If there is a communication breakdown between a server and a client machine, I break out the packet sniffer and find out exactly what is happening on the wire, so I can home in on which machine needs my help. For instance, is the problem on the server or the client? The captured traffic tells me for sure.
I could easily include Ethereal in this category as well. Many times I will run tcpdump -w to capture the output to a file and then import the resulting file into Ethereal on my local machine to get a graphical view.
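For example, a capture session might look something like the following; the interface, port, and file names here are only placeholders:

tcpdump -i eth0 -w /tmp/nfs.pcap port 2049   # on the server: capture NFS traffic to a file
scp server:/tmp/nfs.pcap .                   # pull the capture back to my workstation
ethereal nfs.pcap                            # browse it graphically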
GNU Bash and Perl
I was taught that good system administrators are lazy, and I would have to work a lot harder to do my job without the ability to use quick and simple bash scripting. Many tasks are simple enough that a short bash script will suffice.
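As a rough illustration of the kind of task I mean, a loop like the following (with placeholder host names) is often all that is needed:

#!/bin/bash
# Quick check of root-filesystem usage on a handful of servers.
for host in web1 web2 mail1; do
    echo "== $host =="
    ssh "$host" df -h /
done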
For anything more than a few lines I pull out Perl. The beauty of Perl is that it is simple to write simple scripts, but it has such a depth of modules and supporting components that it can do amazingly complex projects as well.
Tar
I include tar as a critical utility for several reasons. For starters, all of the system images we use are stored and extracted with tar. Whenever I make a significant change to a server, I make a complete image of the server on a network filesystem using the options czflp before proceeding. The c option creates a new archive, z compresses the data using gzip, f specifies the archive name, l tells tar to stick to the local file system, and p tells tar to preserve the permissions as-is when copying to the new archive.
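As a sketch, the imaging step looks something like this; the image path is only an example, and note that on newer versions of GNU tar the single-letter l option has changed meaning, with --one-file-system as the equivalent:

cd /
tar czflp /net/images/server1.tar.gz .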
When I need to replicate a server, I use tar as well. I take the image that was created as described previously and extract it with the xzfp options. Then I simply change the hostname and IP address, if needed, and reboot. Voilà, complete mirror-image servers. I use this method when I want a cluster of identical machines for file or mail servers.
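Restoring the image onto a new machine is the mirror operation; here /mnt stands in for wherever the new root filesystem happens to be mounted:

cd /mnt
tar xzfp /net/images/server1.tar.gz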
The second main use of tar is in copying data around. If I need to move whole directories with good performance while preserving all permissions and attributes, I use a two-stage tar pipeline:
tar cflp - . | (cd $dest; tar xfp -)
This creates a tar archive and sends it to stdout. The second stage creates a subshell that moves to the destination directory and extracts the archive from stdin. This same principle can be used over the network as well. This is immensely useful for creating backups of directories when making major changes, or copying data to a more accessible location to manipulate it.
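For instance, to take a quick backup of a directory before a major change (the paths are purely illustrative):

cd /home/projects
dest=/backup/projects
mkdir -p "$dest"
tar cflp - . | (cd "$dest"; tar xfp -)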
SSH
If SSH provided only a secure terminal connection between machines, it would still be useful, but the program offers so much more. It tunnels X sessions automatically and can forward any other port or protocol you need, and over slow links its compression option is helpful. Its ability to use keys, rather than passwords, for authentication means that you can use SSH in unattended scripts that run between machines.
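A few one-liners cover the features just mentioned; the host names and ports are placeholders:

ssh -X remotehost                        # forward X11 back to the local display
ssh -C -L 8080:localhost:80 remotehost   # compressed tunnel from local port 8080 to the remote web server
ssh-keygen -t rsa                        # generate a key pair for password-less logins
ssh-copy-id remotehost                   # install the public key on the remote host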
Recently I have found key-based authentication useful in conjunction with the tar options discussed above. If I need to copy an entire directory structure between machines, I could tar it all up, transfer the tar file, and then extract it, or I could make a copy on a shared disk, but I have found that the following is much more elegant:
tar cf - . | ssh remotehost "cd /dest/dir; tar xf -"
This way, I don’t have to create any intermediate copies. Even though encryption may slow the transfer slightly, this approach is often faster than creating and shipping a huge tar file, since the whole job is done with a single command.
ln
ln might seem like too simple a command to be of much consequence, but I find the ability to create symbolic links invaluable in system administration. I may not use it daily, but when I do, there is nothing that can take its place.
I use ln to provide backward compatibility for tools, and to present packages in well-known locations while letting the application live in a different location that is more conducive to archiving. I also use it to manage different versions of applications: I can keep multiple versions installed and link only to the one currently in use.
For instance, I may have /usr/local/bin/java as a symlink to /opt/java/bin/java. The /usr/local/bin/java path is in a well-known location, but the /opt/java directory is simpler to manage. Further, /opt/java is usually itself a link to /opt/java-1.4.3 or some other versioned directory. This way I can install and test new versions, but the well-known path will not change until I move the link.
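Sketched as commands, with the version numbers only as examples, the layout looks like this:

ln -s /opt/java-1.4.3 /opt/java
ln -s /opt/java/bin/java /usr/local/bin/java
# Once a new release has been tested, repoint one link and every machine follows:
ln -sfn /opt/java-1.5.0 /opt/java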
Find
Need to find a specific file without a locate database? Need to run a command on every file in a directory tree? Need to gather a list of files based on last modification time? find helps me do all this and more. With find, I can reset access permissions on an entire directory structure, and I can determine which files have changed after performing an update.
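A few representative commands, with placeholder paths, for the uses just described:

find / -name httpd.conf                       # locate a file without a locate database
find /var/www -type d -exec chmod 755 {} \;   # reset permissions on every directory in a tree
find /usr/local -mtime -1                     # list files modified within the last day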
Rsync
Certain files on different servers need to be different, such as the config files used by different applications, but many files should be the same throughout a server farm. I have been using rsync for several years to keep these files synchronized. A cron job runs nightly, connects to an rsync server, and brings everything back into sync.
I have also used rsync to provide local copies of shared areas. For instance, /usr/local is stored on the rsync server. If I wish to install an application that needs to go on all machines, I simply install it once and copy the changed files to the rsync server.
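The nightly job boils down to a single command; the host rsyncserver and the module name usrlocal are assumptions here, not anything standard:

rsync -a --delete rsync://rsyncserver/usrlocal/ /usr/local/
# or, as the crontab entry that runs it at 2 a.m. each night:
# 0 2 * * * rsync -a --delete rsync://rsyncserver/usrlocal/ /usr/local/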
MySQL and PHP
This last category is not a tool per se, but more of a concept or framework. I find it useful to keep the data my scripts need in a database. That way I have a central repository of information, but the scripts can run from whichever machine needs them. I have implemented database-driven Web interfaces for almost all aspects of my job, and use them to store user information, machine information, printer information and logs, application licenses, and so on.
My job would be infinitely more tedious and error-prone without the Web-based interfaces that link to these databases. The other benefit of Web interfaces is that responsibility for maintaining the data can be delegated to other people. For instance, one of the first projects I built a database and Web front end for was our Machine Administration Database. It provided a Web interface that allows other computer support personnel in the college to manage their own entries in the DNS and DHCP servers without having to pass every change through me. The Web page is smart enough to track who can change what, and it does all the necessary sanity checking. A Perl script on the back end then reads the database, creates the named.conf and dhcpd.conf files, and sends a hangup (HUP) signal to the associated processes.
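As a very rough sketch of the back-end idea, written in shell rather than the Perl actually used, and with a completely hypothetical hosts table and database name:

mysql -N -B -e 'SELECT name, mac, ip FROM hosts' machinedb |
while read name mac ip; do
    printf 'host %s { hardware ethernet %s; fixed-address %s; }\n' \
        "$name" "$mac" "$ip"
done > /etc/dhcpd.hosts.new
mv /etc/dhcpd.hosts.new /etc/dhcpd.hosts   # a file included from dhcpd.conf
/etc/init.d/dhcpd restart                  # pick up the new entries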
In general, if there is a situation where more than one person might need to perform a task or coordinate a configuration, a database with a managed front end can benefit all involved.
Let us know about your most valuable utilities and how you use them. There need not be 10 of them, nor do they need to be in order, and if we publish your work, we’ll pay you $100.