Author: Steve Suehring
SSH, the Secure Shell, lets users remotely log in to,
administer, or transfer files between computers using an encrypted
transport mechanism. Running on every major operating system, SSH
provides a more secure connection method than traditional telnet or the
much-maligned “r commands” (rlogin, rcp, rsh). SSH includes provisions
for key-based authentication that doesn’t require a password, which
opens the door for some innovative remote access applications.
SSH works on a client/server model. A user runs SSH client software to
connect to a server running SSH software that listens on a TCP
port. Like telnet, SSH gives users a command-shell type interface into
the computer. Unlike telnet, SSH encrypts the login credentials and all
of the data flying over the wire. SSH and its related protocols, such
as Secure Copy Protocol (SCP) and Secure File Transfer Protocol (SFTP),
offer more secure alternatives to their unencrypted brethren.
SSH currently operates under a number of widely supported but de facto
standards. However, there is an IETF secsh working group tasked with
creating RFC-based standards for SSH and related components.
While SSH gives a command-shell interface into a computer, it still
requires that an administrator manually provide login credentials, such
as a username and password, in order to work with the computer.
Actually, SSH offers many options for authentication, one of which is
key-based authentication, which uses cryptographic keys to establish a
trust relationship between client and server. Key-based authentication
can require a password or can operate without a password on the key. It
is this passwordless key-based authentication that is of interest for
automation. When I refer to key-based authentication throughout this
article, I am specifically referring to passwordless key-based
authentication, also known as a null-passphrase key.
Key-based authentication is helpful for automating file transfers for
things like backups, password synchronization, and file system
synchronization, and for monitoring a remote server. Using a monitoring
package such as Nagios or even a
small shell script, you can monitor a remote computer for things like
processor utilization and disk space usage.
With the power of key-based authentication comes risk. You need to
determine whether the advantages of key-based authentication (passworded
or passwordless) are worth the risk.
Key-based authentication should not be used in place of manually
providing credentials for an interactive SSH session. Null-passphrase
key-based authentication creates a security hole between the computers
that have the key-based trust — a hole that would allow an attacker to
log in between those computers without a username or password. To
mitigate these risks I recommend that any computers that have a trust
relationship established between them listen on a separate IP address
and allow only SSH connections between the trusted computers. In
addition, the root user should not be allowed to log in directly using
SSH, much less using null-passphrase key-based authentication.
Finally, attach options such as no-pty and no-port-forwarding to the
keys themselves (in the server’s authorized_keys file) to limit what a
session authenticated with that key can do.
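As a sketch of what that hardening can look like on the server, an
sshd_config excerpt and a restricted authorized_keys entry might resemble
the following; the addresses are placeholders and the key data is
abbreviated:
# /etc/ssh/sshd_config (excerpt)
ListenAddress 192.168.1.10
PermitRootLogin no
# ~/.ssh/authorized_keys (all on one line; key data abbreviated)
no-pty,no-port-forwarding,from="192.168.1.181" ssh-dss AAAAB3Nza... kuser@client
The from= option additionally limits the key to connections coming from
the trusted client address.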
How do you set up key-based authentication between two computers? I’ll
illustrate using two Linux computers running OpenSSH, but the process is substantially
similar on other operating systems, including Windows, and using other
implementations of SSH as well.
The first step to performing key-based authentication is to decide which
users can authenticate using this mechanism. I find creating a special
user only for key-based authentication adds a little extra security.
For this demonstration, I’ll create a user called “kuser” on both the
client and server to be used for key-based authentication.
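Creating that account takes only a couple of commands, run as root on
both the client and the server (the exact tools vary by distribution):
useradd -m kuser
passwd kuser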
Generate a key pair while logged in as the kuser user on the client
computer using the ssh-keygen
command:
ssh-keygen -b 1024 -f identity -P '' -t dsa
This ssh-keygen
command creates a 1,024-bit (-b 1024) key
pair called identity (-f identity) using the DSA algorithm (-t dsa).
The private key is created with a null-passphrase (-P ''), which is
important for automating the login process.
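When the command completes, two files should appear in the current
directory: identity (the private key) and identity.pub (the public key).
A quick listing confirms it:
ls -l identity identity.pub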
Next, transfer the public key to the server. You can do this through a
variety of means. I’ll use SCP:
scp identity.pub kuser@<server>:~/identity-<client address>.pub
This command transfers the file called identity.pub from the local client
computer by logging in as kuser on the remote server, and creates a file
on the server called identity-<client address>.pub. I recommend using a
naming convention such as identity-<client address>.pub
to prevent possibly overwriting any identity.pub file already on the
server. For example, transferring the file from a client at
192.168.1.181 to a server located at 192.168.1.10 using SCP might look
like:
scp identity.pub kuser@192.168.1.10:~/identity-192.168.1.181.pub
You’re prompted for the password for the kuser user on the remote
host, and the file is transferred.
With the public key transferred to the server, log in to the server as
the kuser user using SSH. You may be asked if you’d like to continue
connecting to the server since the authenticity can’t be verified. This
is normal behavior the first time you connect to the SSH server. It’s
wise to verify the key fingerprint you receive, but in reality people
usually just type “yes” to continue connecting.
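If you do want to verify the fingerprint, you can print it from the
server’s own host key and compare it with what the client reports; the
host key path may vary by distribution:
ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub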
Once on the server, use the command mkdir .ssh
to create a
.ssh directory within kuser’s home directory, if one doesn’t already
exist, and place the contents of the identity-<client address>.pub file
into a file called .ssh/authorized_keys:
cat identity-192.168.1.181.pub >> .ssh/authorized_keys
This command copies the contents of the public key from the client into
a special file on the server side. Some Linux implementations create
the file with the proper permission; others don’t, so it’s a good idea
to verify the permission of this file on the server. It should be u+r
or u+rw at most. If necessary, change the permission of the
authorized_keys file on the server:
chmod 400 .ssh/authorized_keys
Run ls -l
in the .ssh directory to double-check the
permissions for the authorized_keys file, then log out of the server.
Now it’s time to test the implementation. As the kuser user on the
client, enter the command ssh -i identity 192.168.1.10 (substituting
your server’s address).
You should be logged in automatically without being prompted for a
password. Congratulations!
If you were unable to log in, you might have made any of several common
errors:
Some SSH implementations expect that the file
containing the authorized public keys should be called authorized_keys2
instead of authorized_keys. Copy or rename ~/.ssh/authorized_keys to
~/.ssh/authorized_keys2 to resolve this issue should you encounter it.
Generating the key pair on the wrong computer (it belongs on the client)
causes the implementation to fail.
Incorrect permissions or ownership on the private key file also cause
failures. This file should be owned by the user, and the user should be
the only account that can access the private identity file. The public
key can be read by anyone who’d like to read it — it’s public! For
example, if you have the wrong permissions on the private key file,
you’ll receive an error stating something like “Permissions 0666 for
‘identity’ are too open.” (A fix is shown just below.)
Not transferring the public key to the
server or not creating an authorized_keys file on the server is
another potential issue.
Pointing SSH at the wrong location for the private key is a
possible problem as well. Remember, the private key is just a file. When
you specify -i identity on the SSH command line you might need to
specify the full path to that file. For example, if the file is
located in /home/kuser/privatekeys/ you might need to use the command
ssh -i /home/kuser/privatekeys/identity 192.168.1.10.
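For the permissions error quoted above, tightening the private key’s
permissions on the client clears it up:
chmod 600 /home/kuser/identity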
Doing something useful with the automated login process
So far you’ve just been creating the infrastructure within which you can
automate some administrative tasks. What can you automate? You’ll start
with a quick command to check the load average on the remote computer.
A high load average might indicate a runaway process or other resource
shortages.
The Linux uptime
command tells you the current time, how
long the computer has been operational, the number of users logged in,
and the load average for the last 1, 5, and 15 minutes. The output of
uptime resembles this:
12:18:49 up 2:35, 6 users, load average: 0.00, 0.00, 0.00
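With the key-based trust in place, you can run this command on the
remote server without an interactive login, straight from the client
(assuming the identity file sits in kuser’s home directory):
ssh -i /home/kuser/identity 192.168.1.10 uptime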
Assume that you want to find out the load average on the remote computer
over the last minute and, if it’s greater than a certain value, send an
email message letting the administrator know that something is affecting
the load. Assuming that no additional protections have been placed on
the computer (such as the GrSecurity kernel patch), you can
write a script to take advantage of the automated SSH login along with
some other shell commands.
The uptime command usually reads from the file /proc/loadavg. Rather
than trying to parse the formatted output of the uptime command, you
can usually just read from /proc/loadavg directly:
cat /proc/loadavg | awk '{ print $1 }'
The full command should be placed in a file called check_uptime.sh
located in kuser’s home directory on the remote server, which might
look like:
#!/bin/sh
/bin/cat /proc/loadavg | awk '{ print $1 }'
Don’t forget to give the file permissions to execute (chmod 700
check_uptime.sh) on the remote server.
On the local computer, you can invoke the remote shell script from a
local Perl script — for example:
#!/usr/bin/perl
$load = `ssh -i /home/kuser/identity 192.168.1.10 ./check_uptime.sh`;
chomp $load;
if ($load > 0.50) {
    print "Load is $load\n";
}
exit;
When run as local user kuser, this script automatically logs in to
the remote server at 192.168.1.10 and runs the command
check_uptime.sh. The only time this script produces output
is when the load average output is above 0.50. If the script is run from
a cron job, an email will be sent automatically showing the output of
the cron job.
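A crontab entry for kuser on the client along those lines might look
like the following, checking every five minutes; the script name and
location are just an assumed example, and the script must be executable:
*/5 * * * * /home/kuser/check_load.pl
Cron mails any output from the job to its owner, which is what turns the
script’s output into the alert email.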
Obvious improvements to the script might be to accept a list of
computers to check or an email address of a different person. If you
want that kind of functionality, however, I’d recommend taking advantage
of the Nagios monitoring framework and running one of the Nagios
plugins, rather than reinventing the wheel by enhancing this script.
In addition to what we’ve done in this simple example, you can use
key-based SSH authentication to automatically check other statistics as
well. Since scp can utilize key-based authentication, you can
automatically transfer files between computers; for example, you could
tar directories on a computer and send them to another computer
automatically every night for off-site backup. Alternatively, you could
use rsync over SSH to keep filesystems synchronized between remote
locations.
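As rough sketches of those two ideas, with placeholder directory and
archive names, a nightly job might run something like:
tar czf - /home/kuser/data | ssh -i /home/kuser/identity 192.168.1.10 'cat > data-backup.tar.gz'
rsync -az -e 'ssh -i /home/kuser/identity' /home/kuser/data/ kuser@192.168.1.10:data-mirror/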
Wrapping it all up
Being able to automatically log in between computers using a secure
protocol such as SSH has many benefits. It helps to automate the
administration of computers and can also assist in monitoring and
backups. However, using key-based authentication over SSH is not without
security risks, which you should take great care to mitigate if you
implement this method of authentication.
Steve Suehring, an independent consultant for security projects of all sizes, is Advocacy Editor for LinuxWorld Magazine and is
currently writing a book on Linux and Open Source security.