At its simplest, tunneling is wrapping data or packets of one protocol inside packets of a different protocol. When used in security contexts, the term is usually more specific to the practice of wrapping data or packets from an insecure protocol inside encrypted packets. In this section, we'll see how Stunnel, an SSL-wrapper utility, can be used to wrap transactions from various applications with encrypted SSL tunnels.
Many network applications have the virtues of simplicity (with regard to their use of network resources) and usefulness, but lack security features such as encryption and strong or even adequately protected authentication. Web services were previously in this category, until Netscape Communications invented the Secure Sockets Layer (SSL) in 1994.
SSL successfully grafted transparent but well-implemented encryption functionality onto the HTTP experience without adding significant complexity for end users. SSL also added the capability to authenticate clients and servers alike with X.509 digital certificates (though in the case of client authentication, this feature is underutilized). Since Netscape wanted SSL to become an Internet standard, they released enough of its details so that free SSL libraries could be created, and indeed they were: Eric A. Young's SSLeay was one of the most successful, and its direct descendant OpenSSL is still being maintained and developed today.
Besides its obvious relevance to web security, OpenSSL has led to the creation of Stunnel, one of the most versatile and useful security tools in the open source repertoire. Stunnel makes it possible to encrypt connections involving virtually any single-port TCP service in SSL tunnels, without any modifications to the service itself. By "single-port TCP service," I mean a service that listens for connections on a single TCP port without subsequently using additional ports for other functions.
HTTP, which listens and conducts all of its business on a single port (usually TCP 80), is such a service. Rsync, Syslog-ng, MySQL, and yes, even Telnet are too: all of these can be run in encrypted Stunnel SSL wrappers.
FTP, which listens on TCP 21 for data connections but uses connections to additional random ports for data transfers, is not such a service. Anything that uses Remote Procedure Call (RPC) is also disqualified, since RPC uses the Portmapper service to assign random ports dynamically for RPC connections. NFS and NIS/NIS+ are common RPC services; accordingly, neither will work with Stunnel.
Stunnel relies on OpenSSL for all its cryptographic functions. Therefore, to use Stunnel, you must first obtain and install OpenSSL on each host on which you intend to use Stunnel. The current versions of most Linux distributions now include binary packages for OpenSSL v.0.9.6 or later. Your distribution's base OpenSSL package will probably suffice, but if you have trouble building Stunnel, try installing the openssl-devel package (or your distribution's equivalent).
If you plan to use Stunnel with client-side certificates (i.e., certificate-based authentication), you should obtain and install the latest OpenSSL source code (available at http://www.openssl.org) rather than relying on binary packages. To compile OpenSSL, uncompress and untar the source tarball, change your working directory to the source's root directory, and run the config script. I recommend passing three arguments to this script:
--prefix, to specify the base installation directory (I use /usr/local)
--openssldir, to specify OpenSSL's home directory (/usr/local/ssl is a popular choice)
shared, to tell OpenSSL to build and install its shared libraries, which are used by both Stunnel and OpenSSH
For example, using my recommended paths, the configuration command would be as follows:
[root openssl-0.9.6c]# ./config --prefix=/usr/local \
--openssldir=/usr/local/ssl shared
For the remainder of this section, I'll refer to OpenSSL's home as /usr/local/ssl, though you may use whatever you like.
If config runs without returning errors, run make, followed optionally by make test and then by make install. You are now ready to create a local Certificate Authority and start generating certificates.
Stunnel uses two types of certificates: server certificates and client certificates. Any time Stunnel runs in daemon mode (i.e., without the -c flag), it must use a server certificate. Binary distributions of Stunnel often include a pregenerated stunnel.pem file, but this is for testing purposes only!
You'll therefore need to generate at least one server certificate, and if you wish to use client certificates, you'll need to generate them too. Either way, you'll need a Certificate Authority (CA).
Perhaps you think of CAs strictly as commercial entities like VeriSign and Thawte, who create and sign web-server certificates for a fee; indeed, X.509 certificates from such companies will work with OpenSSL and Stunnel. When users (or their web browsers) need to verify the authenticity of a web server's certificate, a "neutral third party" like a commercial CA is often necessary.
However, it's far more likely that any certificate verification you do with Stunnel will involve the server authenticating clients, not the other way around. This threat model doesn't really need a third-party CA: in the scenarios in which you'd most likely deploy Stunnel, the server is at greater risk from unauthorized users than users are from a phony server. To the extent that users do need to be concerned with server authentication, a signature from your organization's CA rather than from a neutral third party is probably sufficient. These are some of the situations in which it makes sense to run your own Certificate Authority.
If all this seems a bit confusing, Figure 5-1 shows how clients, servers, and CAs in SSL relationships use certificates.
Figure 5-1 illustrates several important aspects of SSL (and of public-key infrastructures in general). First, you can see the distinction between public certificates and private keys. In public-key cryptography, each party has two keys: one public and one private. SSL is based on public-key cryptography; in SSL's parlance, a signed public key is called a certificate, and a private key is simply called a key. (If you're completely new to public-key cryptography, see Section 4.3.1.)
As Figure 5-1 shows, certificates are freely shared, even CA certificates. Keys, on the other hand, are not: each key is held only by its owner and must be carefully protected for its corresponding certificate to have meaning as a unique and verifiable credential.
Another important point shown in Figure 5-1 is that Certificate Authorities do not directly participate in SSL transactions. In day-to-day SSL activities, CAs do little more than sign new certificates. So important is the trustworthiness of these signatures that the less contact your CA has with other networked systems, the better.
It's not only possible but desirable for a CA to be disconnected from the network altogether, accepting new signing requests and exporting new signatures manually (e.g., via floppy disks or CD-ROMs). This minimizes the chance of your CA's signing key being copied and misused: the moment a CA's signing key is compromised, all certificates signed by it become untrustworthy. For this reason, your main Intranet fileserver is a terrible place to host a CA; any publicly accessible server is absolutely out of the question.
When a host "verifies a certificate," it does so using a locally stored copy of the CA's "CA certificate," which, like any certificate, is not sensitive in and of itself. It is important, however, that any certificate copied from one host to another is done over a secure channel to prevent tampering. While certificate confidentiality isn't important, certificate authenticity is of the utmost importance, especially CA-certificate authenticity (since it's used to determine the authenticity/validity of other certificates).
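To make the verification step concrete, here's a sketch using the openssl command line. The throwaway CA certificate and all filenames are illustrative, and the -subj flag merely skips the interactive Distinguished Name prompts:

```shell
# Create a throwaway, self-signed CA certificate non-interactively
# (illustrative only; a real CA certificate comes from CA.pl/CA.sh):
openssl req -new -x509 -nodes -days 30 \
  -subj "/C=ES/O=Example CA/CN=Test CA" \
  -keyout cakey.pem -out cacert.pem

# A host "verifies a certificate" against its locally stored copy of
# the CA certificate roughly like this (here, for brevity, the CA
# certificate is verified against itself):
openssl verify -CAfile cacert.pem cacert.pem
```

If the signature checks out, openssl verify should report the certificate as OK.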
Anybody can create their own Certificate Authority using OpenSSL on their platform of choice: it compiles and runs not only on Linux and other Unices, but also on Windows, VMS, and other operating systems. All examples in this chapter will, of course, show OpenSSL running on Linux. Also, given the importance and sensitivity of CA activities, you should be logged in as root when performing CA functions, and all CA files and directories should be owned by root and set to mode 0600 or 0700.
First, install OpenSSL as described earlier under "OpenSSL." In OpenSSL's home directory (e.g., /usr/local/ssl), you'll find a directory named misc/ that contains several scripts. One of them, CA, can be used to automatically set up a CA directory hierarchy complete with index files and a CA certificate (and key). Depending on which version of OpenSSL you have, CA may be provided as a shell script (CA.sh), a Perl script (CA.pl), or both.
Before you use it, however, you should tweak both it and the file openssl.cnf (located at the root of your OpenSSL home directory) to reflect your needs and environment. First, in CA.sh, edit the variables at the beginning of the script as you see fit. One noteworthy variable is DAYS, which sets the default lifetime of new certificates. I usually leave this to its default value of -days 365, but your needs may differ.
One variable that I always change, however, is CA_TOP, which sets the name of new CA directory trees. By default, this is set to ./demoCA, but I prefer to name mine ./localCA or simply ./CA. The leading ./ is handy: it causes the script to create the new CA with your working directory as its root. There's nothing to stop you from making this an absolute path, though: you'll just need to change the script if you want to run it again to create another CA; otherwise, you'll copy over older CAs. (Multiple CAs can be created on the same host, each with its own directory tree.)
In openssl.cnf, there are still more variables to set, which determine default settings for your certificates (Example 5-1). These are less important, since most of them may be changed when you actually create certificates, but one in particular, default_bits, is most easily changed in openssl.cnf. This setting determines the strength of your certificate's key, which is used to sign other certificates and, in the case of SSL clients and servers (but not of CAs), to negotiate SSL session keys and authenticate SSL sessions.
By default, default_bits is set to 1024. Recent advances in the factoring of large numbers have made 2048 a safer choice, though a computationally expensive one (but only during certificate actions such as generating, signing, and verifying signatures, and during SSL session startup; it has no effect on the speed of actual data transfers). The CA script reads openssl.cnf, so if you want your CA certificate to be stronger or weaker than 1024 bits, change openssl.cnf before running CA.pl or CA.sh (see Example 5-1).
# these are the only important lines in this sample...
dir           = ./CA
default_bits  = 2048
# ...changing these saves typing when generating new certificates
countryName_default            = ES
stateOrProvinceName_default    = Andalucia
localityName_default           = Sevilla
0.organizationName_default     = Mesòn Milwaukee
organizationalUnitName_default =
commonName_default             =
emailAddress_default           =
# I don't use unstructuredName, so I comment it out:
# unstructuredName = An optional company name
Now, change your working directory to the one in which you wish to locate your CA hierarchy. Popular choices are /root and the OpenSSL home directory itself, which again is often /usr/local/ssl. From this directory, run one of the following commands:
[root ssl]# /usr/local/ssl/misc/CA.pl -newca
[root ssl]# /usr/local/ssl/misc/CA.sh -newca
In either case, replace /usr/local/ssl with your OpenSSL home directory if different.
The script will prompt you for an existing CA certificate to use (Example 5-2); simply press Return to generate a new one. You'll next be prompted for a passphrase for your new CA key. This passphrase is extremely important: anyone who knows this and has access to your CA key can sign certificates that are verifiably valid for your domain. Choose as long and complex a passphrase as is feasible for you. Whitespace and punctuation marks are allowed.
[root@tamarin ssl]# /usr/local/ssl/misc/CA.pl -newca
CA certificate filename (or enter to create)

Making CA certificate ...
Using configuration from /usr/local/ssl/openssl.cnf
Generating a 2048 bit RSA private key
........++++++
....++++++
writing new private key to './CA/private/cakey.pem'
Enter PEM pass phrase: *************
Verifying password - Enter PEM pass phrase: *************
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [ES]:
State or Province Name (full name) [Andalucia]:
Locality Name (eg, city) [Sevilla]:
Organization Name (eg, company) [Mesòn Milwaukee]:
Organizational Unit Name (eg, section) [ ]:
Common Name (eg, YOUR name) [ ]:Mick's Certificate Authority
Email Address [ ]:firstname.lastname@example.org
By default, the CA.pl and CA.sh scripts create a CA certificate called cacert.pem in the root of the CA filesystem hierarchy (e.g., /usr/local/ssl/CA/cacert.pem) and a CA key called cakey.pem in the CA filesystem's private/ directory (e.g., /usr/local/ssl/CA/private/cakey.pem). The CA certificate must be copied to any host that will verify certificates signed by your CA, but make sure the CA key is never copied out of private/ and is owned and readable only by root.
Now you're ready to create and sign your own certificates. Technically, any host running OpenSSL may generate certificates, regardless of whether it's a CA. In practice, however, the CA is the logical place to do this, since you won't have to worry about the integrity of certificates created elsewhere and transmitted over potentially untrustworthy bandwidth. In other words, it's a lot easier to feel good about signing a locally generated certificate than about signing one that was emailed to the CA over the Internet.
For Stunnel use, you'll need certificates for each host that will act as a server. If you plan to use SSL client-certificate authentication, you'll also need a certificate for each client system. Stunnel supports two types of client-certificate authentication: you can restrict connections to clients with certificates signed by a trusted CA, or you can allow only certificates of which the server has a local copy. Either type of authentication uses the same type of client certificate.
There's usually no difference between server certificates and client certificates. The exception is that server certificates must have unencrypted (i.e., non-password-protected) keys since they're used by automated processes, whereas it's often desirable to encrypt (password-protect) client certificates. If a client certificate's key is encrypted with a strong passphrase, the risk of that key's being copied or stolen is mitigated to a modest degree.
On the other hand, if you think the application you'll be tunneling through Stunnel has adequate authentication controls of its own, or if the client Stunnel process will be used by an automated process, unencrypted client keys may be justified. Just remember that any time you create client certificates without passphrases, their usefulness in authenticating users is practically nil.
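If you change your mind later, OpenSSL can add a passphrase to an existing unencrypted key after the fact. This is a sketch with illustrative filenames; the -passout pass:example flag is only there to make the example non-interactive (in real use, omit it and let openssl prompt you for the passphrase):

```shell
# Generate an unencrypted RSA key (as "openssl req -nodes" would),
# then wrap it with a 3DES passphrase. Only the key file changes;
# the certificate itself stays as-is.
openssl genrsa -out skillet_key.pem 2048
openssl rsa -des3 -in skillet_key.pem -out skillet_key_enc.pem \
  -passout pass:example
```

Remember to delete the original unencrypted key file once you've verified the encrypted copy works.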
Before you start generating host certificates, copy the openssl.cnf file from the OpenSSL home directory to your CA directory, and optionally edit it to reflect any differences between your CA certificate and subsequent certificates (e.g., you may have set default_bits to 2048 for your CA certificate but wish to use 1024-bit certificates for server or client certificates). At the very least, I recommend you set the variable dir in this copy of openssl.cnf to the absolute path of the CA, e.g. /usr/local/ssl/CA.
Now let's generate a certificate. We'll start with a server certificate for an Stunnel server named "elfiero":
Change your working directory to the CA directory you created earlier (e.g., /usr/local/ssl/CA).
Create a new signing request (which is actually a certificate) and key with this command:
bash-# openssl req -nodes -new -keyout elfiero_key.pem \
-out elfiero_req.pem -days 365 -config ./openssl.cnf
The -nodes flag specifies that the new certificate should be unencrypted. Automated processes will be using it, so it isn't feasible to encrypt it with a password that must be entered every time it's used. -keyout specifies the name you want the new key to have, and -out specifies a name for the new request/certificate. (The filenames passed to -keyout and -out are both arbitrary: you can name them whatever you like.) -days specifies how many days the certificate will be valid, and it's optional since it's also set in openssl.cnf. Another flag you can include is -newkey rsa:[bits], where [bits] is the size of the new certificate's RSA key, e.g., 1024 or 2048.
After you enter this command, you will be prompted to enter new values or accept default values for the certificate's "Distinguished Name" parameters (Country Name, Locality Name, etc.), as in Example 5-2. Note that each certificate's Distinguished Name must be unique: if you try to create a certificate with all the same DN parameters as a previous certificate created by your CA, the action will fail with an error. Only one DN field needs to differ from certificate to certificate, however; the fields I tend to change are Email Address and Organizational Unit Name.
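Incidentally, if you script certificate creation, you can supply the entire Distinguished Name on the command line with the -subj flag instead of answering the prompts. The DN values and filenames below are purely illustrative:

```shell
# Non-interactive request generation; adjust the DN fields to keep
# each certificate's DN unique within your CA:
openssl req -nodes -new \
  -subj "/C=ES/ST=Andalucia/L=Sevilla/O=Meson Milwaukee/OU=kitchen/CN=elfiero" \
  -keyout elfiero_key.pem -out elfiero_req.pem
```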
Now, sign the certificate with this command:
bash-# openssl ca -config ./openssl.cnf -policy policy_anything \
-out elfiero_pubcert.pem -infiles elfiero_req.pem
Again, you can call the output file specified by -out anything you want. After entering this command, you'll be prompted for the CA key's passphrase, and after you enter this, you'll be presented with the new certificate's details and asked to verify your intention to sign it.
Open the new key (e.g., elfiero_key.pem) in a text editor, add a blank line to the bottom of the file, and save it.
This step isn't strictly necessary for recent versions of Stunnel, which isn't as fussy about certificate file formatting as it used to be, but I still add the blank line, since it's one less thing that can cause problems (e.g., in case the local Stunnel build is older than I thought).
Open the new signed certificate (e.g., elfiero_pubcert.pem) and delete everything above but not including the line -----BEGIN CERTIFICATE-----. Add a blank line to the bottom of the file and save it. Again, the blank line may not be necessary, but it doesn't hurt.
Concatenate the key and the signed certificate into a single file, like this:
bash-# cat ./elfiero_key.pem ./elfiero_pubcert.pem > ./elfiero_cert.pem
That's it! You now have a signed public certificate you can share, named elfiero_pubcert.pem, and a combined certificate and key named elfiero_cert.pem that you can use as elfiero's Stunnel server certificate.
Creating certificates for Stunnel client systems, which again is optional, is no different than creating server certificates. Omit the -nodes flag in Step 2 if you wish to password-protect your client certificate's key. Unfortunately, doing so buys you little security when using Stunnel. Although you'll need to enter the correct passphrase to start an Stunnel client daemon using a password-protected certificate, after the daemon starts, any local user on your client machine can use the resulting tunnel. (Authentication required by the application being tunneled, however, will still apply.)
Iptables has a new match module, owner, that can help restrict local users' access to local network daemons. If your Stunnel client machine's kernel has Iptables support, you can add rules to its INPUT and OUTPUT chains that restrict access to Stunnel's local listening port (e.g., localhost:ssync) to a specific Group ID or User ID via the Iptables options --gid-owner and --uid-owner, respectively. However, the owner module, which provides these options, is still experimental and must be enabled in a custom kernel build: the module's name is ipt_owner.o, listed as "Owner Match Support (EXPERIMENTAL)" in the kernel-configuration script. Linux in a Nutshell by Siever et al. (O'Reilly) includes documentation on Iptables in general and the owner match module specifically.
Once you've created at least one server certificate, you're ready to set up an Stunnel server. Like OpenSSL, Stunnel has become a standard package in most Linux distributions. Even more than OpenSSL, however, Stunnel's stability varies greatly from release to release, so I recommend you build Stunnel from source.
If you do choose to stick with your distribution's binary package, make sure you get the very latest one, i.e., from your distribution's update or errata web site if available (see Chapter 3). In either case, I strongly recommend that you not bother with any version of Stunnel prior to 3.2: I've experienced errors and even segmentation faults with earlier versions when using Stunnel's client-certificate verification features.
To build Stunnel, you need to have OpenSSL installed, since you also need it to run Stunnel. However, unless you installed OpenSSL from source, you probably also require your distribution's openssl-devel package, since most basic openssl packages don't include header files and other components required for building (as opposed to simply running) SSL applications.
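Before running Stunnel's configure script, you can quickly check whether the headers and libraries it needs are in place. The paths below are the typical locations mentioned in this section, not guaranteed ones; adjust them for your distribution:

```shell
# Report which build prerequisites are present (typical paths):
for f in /usr/include/openssl/ssl.h /usr/include/tcpd.h /usr/lib/libwrap.a; do
  if [ -f "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```

If ssl.h is missing, install openssl-devel (or build OpenSSL from source); if tcpd.h or libwrap.a is missing, install your distribution's TCPwrappers development package.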
What are "TCPwrappers-Style Access Controls," and How Do You Use Them?
I haven't yet covered TCPwrappers, a popular tool for adding logging and access controls to services run from inetd, mainly because inetd is of limited usefulness on a bastion host (I explain why elsewhere in this book).
But TCPwrappers has an access-control mechanism that restricts incoming connections based on remote clients' IP addresses, which is a handy way to augment application security. This mechanism, which I refer to in the book as "TCPwrappers-style Access Controls," is supported by Stunnel and many other standalone services, via TCPwrappers' libwrap.a library.
This mechanism uses two files, /etc/hosts.allow and /etc/hosts.deny. Whenever a client host attempts to connect to a service protected by this mechanism, the remote host's IP address is first compared against /etc/hosts.allow. If it matches any line in hosts.allow, the connection is passed. If the IP matches no line in hosts.allow, /etc/hosts.deny is then parsed, and if the IP matches any line in it, the connection is dropped. If the client IP matches neither file, the connection is passed.
Because this "default allow" behavior isn't a very secure approach, most people implement a "default deny" policy by keeping only one line in /etc/hosts.deny:

ALL: ALL
In this way access is controlled by /etc/hosts.allow: any combination of service and IP address not listed in hosts.allow will be denied.
In the simplest usage, each line in hosts.allow (and hosts.deny) consists of two fields:
daemon1 [daemon2 etc.] : host1 [host2 etc.]
where the first field is a space- or comma-delimited list of daemon names to match and the second field (preceded by a colon) is a space- or comma-delimited list of host IP addresses.
A daemon's name is usually determined from the value of argv[0] passed from the shell in which the daemon is invoked. In the case of Stunnel, it's determined either from a -N option passed to Stunnel at startup or from a combination of the name of the daemon being tunneled and the name of the host to which Stunnel is connecting. The wildcard ALL may also be used.
The host IP(s) may be expressed as an IP address or part of an IP address: for example, 10.200. will match all IP addresses in the range 10.200.0.1 through 10.200.254.254. The wildcard ALL may also be used.
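Putting these pieces together, a small hosts.allow in the classic two-field format might look like this (the daemon names and networks are, of course, examples):

```
# /etc/hosts.allow -- illustrative "default deny" setup, with
# "ALL: ALL" as the sole line in /etc/hosts.deny
sshd  : 10.200.
ssync : 10.200. 192.168.1.
```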
On Red Hat (and any other system on which tcpd has been compiled with PROCESS_OPTIONS), a third field is also used, preceded by another colon, whose most popular settings are ALLOW and DENY. This obviates the need for a /etc/hosts.deny file: a single /etc/hosts.allow file may be used to include both ALLOW and DENY rules.
See the manpages hosts_access(5) and hosts_options(5) for more information.
Once OpenSSL and its headers are in place, get the latest source code from http://www.stunnel.org and unpack the source tarball (in /usr/src or wherever else you like to build things). Change your working directory to the source's root.
Stunnel has a configure script, and several of its options are worth at least considering:
--with-tcp-wrappers
Tells configure that you want to compile in support for TCPwrappers-style access controls (using /etc/hosts.allow; Stunnel has a "deny by default" policy and therefore doesn't use /etc/hosts.deny). This requires the files /usr/lib/libwrap.a and /usr/include/tcpd.h to be present. On Red Hat systems, these are provided by the package tcpwrappers; SuSE includes these in its tcpd package; on Debian, they're provided by the package libwrap0-dev.
--with-pem-dir=[path]
Specifies the default path you'd like Stunnel to use to look for stunnel.pem, the default name for Stunnel's server certificate. This can be overridden at runtime with the -p option. I recommend a default setting of /etc/stunnel. (You'll need to create this directory; make sure it's owned by root:root and its permissions are 0700.)
--with-cert-file=[path]
Specifies the full path (including filename) to the file you'd like Stunnel to parse by default when looking for CA certificates to verify other hosts' client or server certificates. Can be overridden at runtime with the -A option. The specified file should be a text file containing one or more CA certificates (without CA keys) concatenated together. Personally, I prefer to keep CA certificates separate; see the next option, --with-cert-dir.
--with-cert-dir=[path]
Specifies the full path and name of the directory you'd like Stunnel to scan by default when looking for individual CA-certificate files to verify other certificates (this is sort of a "plural version" of the previous flag). Can be overridden at runtime with the -a option.
The configure script accepts other flags as well, including the customary --prefix= et al.; enter ./configure --help for a full list of them.
If this script runs without errors (which are usually caused by the absence of OpenSSL, OpenSSL's headers, or libwrap), enter make && make install. Stunnel is now installed!
And now, at long last, we come to the heart of the matter: actually running Stunnel and tunneling things over it. Before I give a detailed explanation of Stunnel options, I'm going to walk through a brief example session (for those of you who have been patiently waiting for me to get to the point and can wait no more).
Suppose you have two servers, skillet and elfiero. elfiero is an Rsync server, and you'd like to tunnel Rsync sessions from skillet to elfiero. The simplest usage of Rsync, as shown in Chapter 9, is rsync hostname::, which asks the host named hostname for a list of its anonymous modules (shares). Your goal in this example will be to run this command successfully over an Stunnel session.
First, you'll need to have Rsync installed, configured, and running in daemon mode on elfiero. (Let's assume you've followed my advice in Chapter 9 on how to do this, and that the Rsync daemon elfiero has subsequently become so stable and secure as to be the envy of your local Rsync users' group.)
Next, you'll need to make sure some things are in place on elfiero for Stunnel to run as a daemon. The most important of these is a signed server certificate formatted as described earlier in "Generating and signing certificates." In this example, your certificate is named elfiero_cert.pem and has been copied into the directory /etc/stunnel.
You also need to make some minor changes to existing files on the server: in /etc/services, you want an entry for the port on which Stunnel will listen for remote connections, so that log entries and command lines will be more human-readable. For our example, this is the line to add to /etc/services:
ssyncd 273/tcp # Secure Rsync daemon
(The "real" rsync daemon is listening on TCP 873, of course, so I like to use an Stunnel port that's similar.)
In addition, for purposes of our example, let's also assume that Stunnel on the server was compiled with libwrap support; so add this line to /etc/hosts.allow:

ssync: ALL
On a Red Hat system, the hosts.allow entry would instead look like this:
ssync: ALL: ALLOW
Once the server certificate is in place and you've prepared /etc/services and /etc/hosts.allow, you can fire up Stunnel, telling it to listen on the ssyncd port (TCP 273), to forward connections to the local rsync port, to use the server certificate /etc/stunnel/elfiero_cert.pem, and to use ssync as the TCPwrappers service name (Example 5-3).
[root@elfiero etc]# stunnel -d ssyncd -r localhost:rsync -p \
/etc/stunnel/elfiero_cert.pem -N ssync
And now for the client system, skillet. For now, you're not planning on using client certificates or having the client verify server certificates, so there's less to do here. Add one line to /etc/services, and add one entry to /etc/hosts.allow. (Even that last step is necessary only if the Stunnel build on skillet was compiled with libwrap support.)
For consistency's sake, the line you add to /etc/services should be identical to the one you added on elfiero:
ssyncd 273/tcp # Secure Rsync daemon
Optimally, the Stunnel listener on skillet should listen on TCP 873, the Rsync port, so that local Rsync clients can use the default port when connecting through the tunnel. If the client system is already running an Rsync daemon of its own on TCP 873, however, you can add another line to /etc/services to define an Stunnel forwarding-port:
ssync 272/tcp # Secure Rsync forwarder
Assuming the Stunnel package on skillet was compiled with libwrap, you also need to add this line to /etc/hosts.allow:

ssync: ALL
Or, for the Red Hat/PROCESS_OPTIONS version of libwrap:
ssync: ALL: ALLOW
Now you can invoke Stunnel in client mode, telling it to listen for local connections on the rsync port (TCP 873), to forward them to the ssyncd port (TCP 273) on elfiero, and to use the TCPwrappers service name ssync (Example 5-4).
[root@skillet etc]# stunnel -c -d rsync -r elfiero:ssyncd -N ssync
(If all the unexplained flags in Examples 5-3 and 5-4 are making you nervous, don't worry: I'll cover them in my usual verbosity in the next section.)
Finally, you've arrived at the payoff: it's time to invoke rsync. Normally, the Rsync command to poll elfiero directly for its module list would look like this:
[schmoe@skillet ~]$ rsync elfiero::
In fact, nothing you've done so far would prevent this from working. (Preventing nontunneled access to the server is beyond the scope of this example.)
But you're cooler than that: you're instead going to connect to a local process that will transparently forward your command over an encrypted session to elfiero, and elfiero's reply will come back over the same encrypted channel. Example 5-5 shows what that exchange looks like (note that you don't need to be root to run the client application).
[schmoe@skillet ~]$ rsync localhost::
toolz           Free software for organizing your skillet recipes
recipes         Donuts, hush-puppies, tempura, corn dogs, pork rinds, etc.
images          Pictures of Great American Fry-Cooks in frisky poses
medical         Addresses of angioplasty providers
It worked! Now your friends with accounts on skillet can download elfiero's unhealthy recipes with cryptographic impunity, safe from the prying eyes of the American Medical Association.
By the way, if you had to use a nonstandard Rsync port for the client's Stunnel listener (e.g., by passing stunnel the option -d ssync rather than -d rsync), Example 5-5 would instead look like Example 5-6.
[schmoe@skillet ~]$ rsync --port=272 localhost::
toolz      Free software for organizing your skillet recipes
recipes    Donuts, hush-puppies, tempura, corn dogs, pork rinds, etc.
images     Pictures of Great American Fry-Cooks in frisky poses
Which is to say, the rsync command can connect to any port, but if it isn't 873, you must specify it with the --port= option. Note that since rsync doesn't parse /etc/services, you must express the port as a number, not as a service name.
That's the quick start. Now, let's roll up our sleeves, analyze what we just did, and discuss some additional things you can do with Stunnel.
As we just saw, Stunnel uses a single binary, stunnel, that can run in two different modes: client mode and daemon mode (the latter is also called "server mode"). They work similarly, except for one main difference: in client mode Stunnel listens for unencrypted connections (e.g., from the local machine) and forwards them through an encrypted SSL connection to a remote machine running Stunnel; in daemon mode, Stunnel listens for encrypted SSL connections (e.g., from remote Stunnel processes) and then decrypts and forwards those sessions to a local process. The options used in Examples 5-3 and 5-4 were therefore very similar; it's how they were used that differed.
Here's a breakdown of the options used in the stunnel commands in Examples 5-3 and 5-4:
The -d option specifies on which IP and port stunnel should listen for connections. hostIP, a local IP address or resolvable hostname, is usually unnecessary except, for example, when the local system has more than one IP address and you don't want stunnel listening on all of them. daemonport can be either a TCP port number or a service name listed in /etc/services. In daemon mode, this option is usually used to specify the port on which to listen for incoming forwarded (remote) connections. In client mode, it's the port on which to listen for incoming local connections (i.e., connections to forward). In either case, if you wish to run stunnel as a nonprivileged user, you'll need to specify a port greater than 1023; only root processes may listen on ports 0 through 1023.
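For instance, a nonprivileged user on skillet could run the client-mode listener on the loopback address and an unprivileged port (the port number 8873 here is arbitrary, chosen purely for illustration):

```shell
# Hypothetical variant of Example 5-4: listen only on loopback, on an
# unprivileged port, so stunnel need not run as root
stunnel -c -d 127.0.0.1:8873 -r elfiero:ssyncd -N ssync
```

Local clients would then connect with rsync --port=8873 localhost:: rather than to the default rsync port.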
The -p option overrides the default host-certificate path determined when stunnel was compiled, usually ./stunnel.pem. It's necessary in client mode only when you need to present a client certificate to the servers you connect to, but a certificate is always needed in daemon mode.
If you wish to use a certificate in either mode, I recommend you use the -p option rather than trusting the default path to find your certificate file. This avoids confusion, not to mention the possibility of accidentally using a generic sample stunnel.pem file of the sort that's included with Windows binaries of Stunnel (you never want to use a server certificate that other hosts may have too).
The -r option specifies to which port at which remote address Stunnel should tunnel (forward) connections. In daemon mode, this is usually a process on the local system, and since the default value of remoteIP is localhost, usually it's sufficient to specify the port (by services name or by number). In client mode, this is usually a port on a remote host, in which case remoteIP should be specified as the IP address or resolvable name of the remote host.
The -c flag tells stunnel to run in client mode and to interpret all other flags and options (e.g., -d and -r) accordingly. Without this flag, daemon mode is assumed.
The -N option specifies a service name for stunnel to pass in calls to libwrap (i.e., to match against the entries in /etc/hosts.allow). While stunnel's default TCPwrapper service names are easily predicted (see the stunnel(8) manpage for details), specifying one explicitly via -N makes things simpler.
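As a recap of how these options combine, here is an annotated daemon-mode sketch along the lines of the server side of our example (the certificate path is illustrative, not a required value):

```shell
# Annotated daemon-mode sketch:
#   -d ssyncd   listen for SSL connections on the ssyncd port (TCP 273)
#   -p ...      server certificate, always required in daemon mode
#   -r rsync    forward decrypted sessions to the local rsync port (TCP 873)
#   -N ssync    service name checked against /etc/hosts.allow via libwrap
stunnel -d ssyncd -p /etc/stunnel/elfiero_cert.pem -r rsync -N ssync
```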
If all that didn't clarify our skillet-to-elfiero example, Figure 5-2 might. It illustrates in a more graphical form how the two Stunnel daemons function (client and server).
Hopefully, this diagram is self-explanatory at this point. However, I should point out one detail in particular in Figure 5-2: the rsync --daemon --address=127.0.0.1 command on the server shows one method for making a service accessible only via Stunnel. Since this command binds Rsync only to the loopback interface, it listens only for local connections, so only local users and processes can connect to it directly.
Not all services, of course, allow you to specify or restrict which local IPs they listen on. In cases where they don't, you can use some combination of hosts.allow, iptables, and certificate-based authentication (see Section 5.1.3 later in this chapter).
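For a service that can't bind selectively, an iptables rule on the server can achieve much the same effect; this sketch (the rule and port are illustrative, assuming Rsync's cleartext listener on TCP 873) drops connections arriving on any interface other than loopback:

```shell
# Drop packets addressed to TCP 873 that did not arrive via loopback,
# leaving only the Stunnel daemon's forwarded (local) connections
iptables -A INPUT -p tcp --dport 873 ! -i lo -j DROP
```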
The skillet-elfiero example showed Stunnel run in daemon mode on the server. In addition to client and daemon mode, Stunnel can also run in Inetd mode. In this mode, the server's inetd process starts the Stunnel daemon (and the service Stunnel is brokering) each time it receives a connection on the specified port. Details on how to do this are given by the Stunnel FAQ (http://www.stunnel.org/faq/) and in the stunnel(8) manpage.
I'm not going to go into further depth on running Stunnel in Inetd mode here: I've already stated my bias against using Inetd on bastion hosts. Lest you think it's just me, here's a quote from the Stunnel FAQ:
Running in daemon mode is much preferred to running in inetd mode. Why?
- SSL needs to be initialized for every connection.
- No session cache is possible.
- Inetd mode requires forking, which causes additional overhead. Daemon mode will not fork if you have stunnel compiled with threads.
Rather than starting Stunnel from inetd.conf, a much better way to serve Inetd-style daemons, such as in.telnetd and in.talkd, over Stunnel is to have the Stunnel daemon start them itself, using the -l option.
For example, if you wanted to create your own secure Telnet service on elfiero, you could use the method described in the previous section. However, Linux's in.telnetd daemon really isn't designed to run as a standalone daemon except for debugging purposes. It would make better sense to run Stunnel like this:
[root@elfiero etc]# stunnel -d telnets -p /etc/stunnel/elfiero_cert.pem -l /usr/sbin/in.telnetd
(Suppose, for the purposes of this example, that on each host you've already added an entry for the telnets service to /etc/hosts.allow.)
On the client system, you could either run a telnets-capable Telnet client (they do exist), or you could run Stunnel in client mode like this (see Example 5-7):
[root@skillet /root]# stunnel -c -d telnets -r elfiero:telnets
You could then use the stock Linux telnet command to connect to the client host's local Stunnel forwarder:
[schmoe@skillet ~]$ telnet localhost telnets
Sparing you the familiar Telnet session that ensues, what happens in this example is the following:
1. Your telnet process connects to the local client-mode Stunnel process listening on TCP port 992.
2. This client-mode Stunnel process opens an encrypted SSL tunnel to the daemon-mode Stunnel process listening on TCP port 992 on the remote system.
3. Once the tunnel is established, the remote (daemon-mode) Stunnel process starts its local in.telnetd daemon.
4. The client-mode Stunnel process then forwards your Telnet session through the tunnel, and the remote Stunnel daemon hands the Telnet packets to the in.telnetd service it started.
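If you want to confirm that the daemon-mode side really is speaking SSL, OpenSSL's s_client tool makes a handy test client; this is a sketch, assuming the server invocation shown above:

```shell
# Open an SSL connection to the Stunnel-wrapped telnets port and print
# the server's certificate details; close stdin (Ctrl-D) to disconnect
openssl s_client -connect elfiero:992
```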
By the way, if I haven't made this clear yet, the client and server Stunnel processes may use different listening ports. Again, just make sure that on each host:
You choose a port not already being listened on by some other process.
The client daemon sends to the same port on which the server daemon is listening (i.e., the port specified in the client's -r setting matches the one in the server's -d setting).
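For example, nothing requires either side to use the standard telnets port; the following pair of commands (port 10992 is an arbitrary choice for illustration) works just as well:

```shell
# On the server: listen for SSL on TCP 10992, starting in.telnetd per connection
stunnel -d 10992 -p /etc/stunnel/elfiero_cert.pem -l /usr/sbin/in.telnetd

# On the client: local telnet clients still connect to the standard telnets
# port, but the tunnel is sent to the server's nonstandard listener
stunnel -c -d telnets -r elfiero:10992
```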
Using Stunnel to forward otherwise insecure applications through encrypted SSL tunnels is good. Using Stunnel with some measure of X.509 digital certificate authentication is even better.
The bad news is that finding clear and consistent documentation on this can be difficult. The good news is that using it actually isn't that difficult, and the following guidelines and procedures (combined with the OpenSSL material we've already covered) should get you started with a minimum of pain.
There are several ways you can use X.509 certificate authentication with Stunnel, specified by its -v option. The -v option can be set to one of four values:
0: Require no certificate authentication (the default)
1: If the remote host presents a certificate, check its signature
2: Accept connections only from hosts that present certificates signed by a trusted CA
3: Accept connections only from hosts that present certificates that are both cached locally (i.e., known) and signed by a trusted CA
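On the command line, a daemon-mode invocation requiring CA-signed client certificates looks like this sketch (the certificate paths are illustrative; -a names the directory holding trusted CA certificates):

```shell
# Daemon-mode sketch: -v 2 rejects clients whose certificates are not
# signed by a CA certificate found (by hash) in the -a directory
stunnel -d ssyncd -p /etc/stunnel/elfiero_cert.pem -v 2 -a /etc/stunnel -N ssync -r rsync
```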
Since SSL uses a peer-to-peer model for authentication (i.e., as far as SSL is concerned, there are no "client certificates" or "server certificates"; they're all just "certificates"), a Stunnel process can require certificate authentication whether it's run in daemon mode or client mode. In other words, not only can Stunnel servers require clients to present valid certificates; clients can check server certificates too!
In practical terms, this is probably most useful in HTTPS scenarios (e.g., e-commerce: if you're about to send your credit card information to a merchant's web server, it's good to know they're not an imposter). I can't think of nearly as many Stunnel uses for clients authenticating servers. However, I have tested it, and it works no differently from the other way around. Having said all that, the following examples will both involve servers authenticating clients.
Let's return to our original Rsync-forwarding scenario with skillet and elfiero. To review, skillet is the client, and it has an /etc/services entry mapping the service name ssyncd to TCP port 273. So does the server elfiero. Both hosts also have a line in /etc/hosts.allow giving all hosts access to the service ssync. Finally, Rsync is running on elfiero, invoked by the command rsync --daemon --address=127.0.0.1.
In this example, you want elfiero to accept connections only from clients with certificates signed by your organization's Certificate Authority. skillet, therefore, needs its own certificate: you'll need to create one using the procedure from "Generating and signing certificates" earlier in this chapter. We'll call the resulting files skillet_cert.pem (the combined cert/key for skillet to use) and skillet_pubcert.pem (skillet's signed certificate). We'll also need a copy of the CA's certificate, cacert.pem.
elfiero will need the copy of the CA certificate (cacert.pem). skillet will need skillet_cert.pem, but it won't need the CA certificate unless you later decide to have skillet verify elfiero's server certificate.
You can keep certificates wherever you like, remembering that they should be set to mode 400, UID=root and GID=root or wheel. So for simplicity's sake on both systems, let's keep our certificates in /etc/stunnel. When Stunnel verifies certificates, though, it expects them to have a hash value as their name. Since nobody likes to name files this way, it's common practice to calculate the file's hash and then create a symbolic link from this hash value to the real name of the file.
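Done by hand, that hashing looks like the following sketch (assuming the CA certificate lives in /etc/stunnel and that your OpenSSL's x509 command supports the -hash flag):

```shell
cd /etc/stunnel
# Compute the certificate's subject-name hash, then link <hash>.0 to the file
hash=$(openssl x509 -hash -noout -in cacert.pem)
ln -s cacert.pem "${hash}.0"
```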
OpenSSL has a very handy command, c_rehash, that does this automatically. Tak