Rick van Rein
Published

Sat 18 May 2013


Global Directory 7: Decentral TLS Authentication and Authorization

TLS is the protocol that replaces SSL, best known for its protection of secure websites. Connections over TLS can greatly enhance security, but only when key management is properly implemented. When it is centrally managed, such as by X.509 CAs, all risk concentrates with that CA. Solutions like DANE help to lighten that burden, but decentral organisation of security is in fact a much more solid model. This article explains how to use OpenPGP-based TLS to secure connections between systems.

This article is part of a series of articles about the global directory.


Introduction

The everyday practice of using TLS is not what it could be. Certificates and private keys are usually managed in an ad-hoc manner, since applications all have their own way of dealing with them. As a result, certificate and key files are often spread across the local file systems of servers. This makes it difficult to manage their security and their renewals.

On the other end of the spectrum, certificate authorities have proven to be less reliable than they would like us to believe; one instance has actually been under attack without revealing this harmful fact to relying parties; others have installed such low validation barriers (such as email verification or a faxed copy of a local government license) that they can be tricked into supplying certificates that do not live up to the strength of the embedded cryptography.

This article describes a more integrated approach to dealing with TLS, managing it locally in a more coordinated manner. Within a host, TLS connection setup and teardown is concentrated into a "TLS pool", which works across protocols, and which uses the standard PKCS #11 interface to manage certificates and keys with better operational control. The separate daemon for this TLS pool is the only place that needs access to private keys, so it is the only place where sensitive credentials are handled. None of this needs to end up in a web server or other promiscuous program.

At the same time, it explains how modern developments in certificate infrastructure enable a model with distributed control across operational domains. Once again, the Global Directory is used as a place to find certificates for remote parties. In fact, what this does is implement the kind of check that a certificate authority should do when being presented with a certificate request. The difference is that we do not need to rely on an external party anymore, and that we are not presented with information that may have expired. The infrastructure around certificate authorities has evolved to include certificate revocation lists to stay somewhat up to date, but nothing can beat a live check at the time it is needed.

Pooling TLS

Protocols that support TLS come in two flavours: some have a separate port for the secure variety (and, usually, a separate protocol name: http becomes https and its default port changes from 80 to 443) and some start off in plaintext, negotiate features, discover the ability to switch to TLS, and issue a so-called STARTTLS command after which TLS negotiation starts. The former is not advised for new protocols anymore, and could be seen as the second form with an immediate STARTTLS as soon as the connection starts. The implementation of the facility is usually embedded into the service, and based on a general library such as OpenSSL or GnuTLS. Libraries run as part of the program, and their access to private keys means that the security features are incorporated into the service that uses TLS. That is no problem if the service is implemented in a secure piece of software, but some are so flexible, and provide so many facilities to non-technical end-users, that this might lead to problems. Web servers running PHP, for example, have shown over and over again that they can fail on security grounds; applications built on top of this software stack are often equally flawed, and maintenance practices that install upgrades are far from common.
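As an illustration, the second flavour looks roughly like this in SMTP, where STARTTLS was standardised in RFC 3207; the host names are placeholders:

    S: 220 mail.example.com ESMTP ready
    C: EHLO client.example.org
    S: 250-mail.example.com
    S: 250 STARTTLS
    C: STARTTLS
    S: 220 Go ahead
    ... both sides now run the TLS handshake, then resume SMTP ...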

All this can be remedied by externalising the TLS handling to a separate program. Then that program gains access to private keys, but it will not relay those to the web server, or whatever program might be facing remote users. The STARTTLS point is a well-defined point at which both client and server start TLS negotiation of the connection that they have had up to that point. At this point, the server can pass its communication end-point (the "socket") to the TLS pool, ask it to set up TLS, and get a new socket returned over which it continues normally, except that the TLS pool now handles encryption and decryption of that connection.
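The hand-over of a socket can use the standard Unix mechanism for passing file descriptors between processes. The following C sketch shows the idea; the TLS pool's socket path and its 'S' command byte are purely hypothetical:

    /* Sketch: pass a connection's file descriptor to a TLS pool daemon
     * over a Unix domain socket, using SCM_RIGHTS ancillary data.
     * The one-byte 'S' ("start TLS") command is a made-up protocol. */
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int send_fd_to_pool(int poolsock, int connfd) {
        struct msghdr msg;
        struct iovec iov;
        char cmsgbuf[CMSG_SPACE(sizeof(int))];
        char cmd = 'S';

        memset(&msg, 0, sizeof(msg));
        memset(cmsgbuf, 0, sizeof(cmsgbuf));
        iov.iov_base = &cmd;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = cmsgbuf;
        msg.msg_controllen = sizeof(cmsgbuf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &connfd, sizeof(int));

        return (sendmsg(poolsock, &msg, 0) < 0) ? -1 : 0;
    }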

The TLS pool is not even the place where private keys are accessed; it uses the industry standard PKCS #11 as a secure key store. This means that a private key can be generated over a standard API, but only the public part can be exported. The PKCS #11 interface will handle requests to sign or decrypt things with the private key, but only for PIN-authenticated programs such as the TLS pool. The PKCS #11 standard has been implemented in many places, ranging from free software modules and cheap USB tokens to expensive and redundant 19" rack-mounted security devices that can be addressed remotely by multiple servers at the same time. Effectively, the TLS pool is an application protocol layer on top of this industry standard with a vast range of usable implementations. Independent applications use a simple API call starttls_server() or starttls_client() to set up TLS, and this is basically all they need to do to implement TLS security.
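In a server, the resulting code could be as simple as the following sketch; the name starttls_server() comes from the design above, but its exact arguments are assumptions, not a fixed API:

    /* Hypothetical sketch of a server using the TLS pool API; the
     * signature of starttls_server() is a design assumption. */
    #include <unistd.h>

    extern int starttls_server(int fd, const char *service, const char *name);

    int serve_connection(int clearfd) {
        /* ... exchange plaintext greetings until STARTTLS is agreed ... */
        int tlsfd = starttls_server(clearfd, "imap", "mail.example.com");
        if (tlsfd < 0) {
            close(clearfd);      /* negotiation failed */
            return -1;
        }
        /* continue the protocol over tlsfd as if it were plaintext;
         * the TLS pool encrypts and decrypts behind the scenes */
        return tlsfd;
    }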

One added benefit of this approach is that it becomes possible to iterate over the certificates that are kept behind a PKCS #11 store. This makes it easier to see if any need to be renewed, for instance, or whether they are still active. Public information such as certificates, PGP keys and SSH keys could be automatically published in LDAP and DANE/DNSSEC, and removed when they are no longer available. Certificate management could be virtually painless and hardly manual.
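Enumeration could use the standard Cryptoki object-search calls, as in this sketch; token setup, login and error handling are omitted for brevity:

    /* Sketch: enumerate certificate objects through PKCS #11 (Cryptoki).
     * Assumes an initialised function list p11 and an open, logged-in
     * session handle hSession. */
    #include "pkcs11.h"

    void list_certificates(CK_FUNCTION_LIST_PTR p11, CK_SESSION_HANDLE hSession) {
        CK_OBJECT_CLASS cls = CKO_CERTIFICATE;
        CK_ATTRIBUTE tmpl[] = {
            { CKA_CLASS, &cls, sizeof(cls) }
        };
        CK_OBJECT_HANDLE obj;
        CK_ULONG count;

        p11->C_FindObjectsInit(hSession, tmpl, 1);
        while (p11->C_FindObjects(hSession, &obj, 1, &count) == CKR_OK
                        && count == 1) {
            CK_ATTRIBUTE val = { CKA_VALUE, NULL_PTR, 0 };
            /* first call obtains the size of the DER-encoded certificate;
             * a second call (after allocating val.pValue) fetches it */
            p11->C_GetAttributeValue(hSession, obj, &val, 1);
            /* ... inspect validity period, decide on renewal ... */
        }
        p11->C_FindObjectsFinal(hSession);
    }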

As part of TLS pooling, it is possible to let the TLS pool decide which of the available certificates to use for external contact -- it should be something that has been published for some time, so as to ensure that relying parties can actually validate the certificate. And this brings us to the topic of decentralisation of authentication.

Decentralising Authentication

The model with certificates hitherto has been one with a strictly hierarchical control over identities assured in certificates. Several recent developments now loosen the strictness of this structure, while at the same time solidifying the level of security that is achieved.

First, the failure of DigiNotar to report that they had undesirable visitors on their infrastructure has shown that it is desirable to be able to "pull the plug" on such authorities, in a much more flexible manner than a browser upgrade. To this end, DANE was proposed as an additional constraint on the certificate, key or authority behind a domain's certificate. When DANE is used under DNSSEC, it cannot be falsified and so this information is secure. It is even so secure that one could choose to accept self-signed certificates, which are not acknowledged by any certificate authority, for the simple reason that they occur under a domain that one intends to rely on.

DANE stores its information in so-called TLSA records. A similar approach has been the CERT record, but it did not get the infrastructural interpretation that DANE is establishing for its TLSA records. The SSHFP record plays a comparable role for a server's SSH keys. It should be clear that any or all of these records can be derived from more or less centralised key repositories such as the aforementioned PKCS #11 store underpinning the TLS pool.
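As an illustration, such records could look as follows in a zone file; the names and hashes are placeholders:

    ; DANE: pin the server certificate for TLS on port 443 (placeholder hash)
    _443._tcp.www.example.com. IN TLSA 3 0 1 (
            8cb0fc6c527506a053f4f14c8464bebbd6de
            de2738d11468dd953d7d6a3021f1 )
    ; SSH host key fingerprint, RSA with SHA-1 (placeholder hash)
    www.example.com. IN SSHFP 1 1 dd465c09cfa51fb45020cc83316fff21b9ec74ac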

For user certificates, these structures are not ideal. The information in DNS should not be considered private, and user information often includes identities that are somewhat private, such as email addresses and personal names. In these cases, LDAP is a much more useful mechanism, and the Global Directory makes it usable across operational domains. Indeed, there are several structures for LDAP that permit users to incorporate their key material, as is described in several other articles in this series. One advantage of LDAP is that it can do more filtering, as explained in the Publishing Information with OpenLDAP article of this series, thus protecting user privacy much better.
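A published user entry might look like the following LDIF sketch; the DN layout and the choice of inetOrgPerson with a userCertificate attribute are illustrative assumptions:

    dn: uid=john,dc=example,dc=com
    objectClass: inetOrgPerson
    uid: john
    cn: John Doe
    mail: john@example.com
    userCertificate;binary:< file:///tmp/john-cert.der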

Note how this introduces an upside-down treatment of certificates. Where originally the idea of certificates was that they would make a hard claim on reliability, more and more facilities have been added to weaken these constraints: DANE, OCSP and Certificate Revocation Lists. It ends up being necessary to do quite a lot of checking in order to verify the original claim of identity made in the certificate. But, if those identities are mechanical and domain-bound, they may as well be automated. A server name is a domain name and can be looked up in DNS; an email address or any other protocol-specific user@domain address can be split into a domain for which an LDAP server can be looked up in DNS, and a user to be found in that server. In both cases, information regarding the authenticity of certificates can be found, like the certificate, its contained key or its secure hash. This means that signatures on the identities are no longer needed; we simply step out and check for ourselves.
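The lookup mechanics can be sketched with standard tools; the _ldap._tcp SRV label is the common convention for locating a domain's LDAP server, and the names are placeholders:

    # find the domain's LDAP server through a DNS SRV record
    dig +short SRV _ldap._tcp.example.com
    # then locate the user's published certificate in that server
    ldapsearch -H ldap://ldap.example.com -x -b "dc=example,dc=com" \
        "(mail=john@example.com)" userCertificate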

One last development that helps to decentralise TLS is that it now supports not only X.509 certificates, but OpenPGP as well. If a properly standardised flag is sent along with the initial negotiations of TLS, then OpenPGP keys can be used by both endpoints as a mechanism for authentication. Such keys hold a User ID, which is usually formatted as an email address inside angle brackets. And such keys are renowned for not needing a strict hierarchy to be useful; they may be validated through a web of trust where each party can vouch for each other party, or they may be validated as described above, through DANE and LDAP.
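GnuTLS, for instance, implements this through RFC 6091; a minimal client-side sketch could look as follows, with the key file names as placeholders:

    /* Sketch: a GnuTLS client offering an OpenPGP key (RFC 6091).
     * File names are placeholders; error handling is omitted. */
    #include <gnutls/gnutls.h>
    #include <gnutls/openpgp.h>

    gnutls_certificate_credentials_t cred;
    gnutls_session_t session;

    gnutls_global_init();
    gnutls_certificate_allocate_credentials(&cred);
    gnutls_certificate_set_openpgp_key_file(cred,
            "mykey.pub", "mykey.sec", GNUTLS_OPENPGP_FMT_BASE64);

    gnutls_init(&session, GNUTLS_CLIENT);
    /* announce willingness to use OpenPGP certificates */
    gnutls_priority_set_direct(session, "NORMAL:+CTYPE-OPENPGP", NULL);
    gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, cred);
    /* ... associate a transport and call gnutls_handshake() ... */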

Note that, aside from the protocol details of TLS, this procedure is largely mirrored. Both client and server can authenticate themselves, so mutual verification can be done. This is generally helpful in establishing a strong relationship between end points. In this respect, the flexibility of PKCS #11 shows its true beauty; TLS pooling could be extended to desktop clients, and made to work with either a software-based PKCS #11 key store or a hardware device that plugs into USB and holds one's X.509 certificates, OpenPGP keys and SSH keys.

The authentication process performed by a TLS pool takes some time, because it needs to go online and query resources; this means that its results are good candidates for caching in something like memcached. This is especially useful for systems which continually see users come by, perhaps over multiple protocols and possibly visiting a multitude of servers. The distributed nature of systems like memcached makes it possible to centralise knowledge of successful authentications in a trusted (namely, local) place. Timeouts could be in the order of one hour, since recalculations can be performed automatically by the TLS pool.
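A sketch with libmemcached could look as follows; the key layout, the stored value and the one-hour timeout are assumptions:

    /* Sketch: cache a successful authentication in memcached with
     * libmemcached; key layout, value and TTL are assumptions. */
    #include <string.h>
    #include <libmemcached/memcached.h>

    void cache_auth_result(const char *remote_id) {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "localhost", 11211);

        const char *value = "authenticated";
        /* expire after one hour; the TLS pool can recompute afterwards */
        memcached_set(memc, remote_id, strlen(remote_id),
                      value, strlen(value), (time_t)3600, (uint32_t)0);
        memcached_free(memc);
    }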

This ends the story about authenticating remote parties, that is, establishing their identity. Depending on further needs, it is now possible to decide which party has access to which resources: the process of authorization.

Decentralising Authorization

This is work in progress. Questions and issues include:

* use of attribute certificates?
* for web apps, use an OAuth service based on the above
* cache authorization results (again, memcached)
* NEA's "standard" authorization model (white/gray/black listing, with a default policy)

Development Status

This project is work in progress.

We have had two students working on a proof of concept, and are currently expanding on the experiences gained from that work.
