In a series of 3 independent articles, we're introducing the current
design of the IdentityHub. We intend to start this work in March 2017.
The services of the IdentityHub are the user-facing facilities that make the IdentityHub worth using. They include pretty standard components, but go well beyond that; after all, the whole purpose of having an IdentityHub is to provide end users with control over their online presence -- and that means adding security and privacy through a number of novel facilities that they are currently deprived of.
Please note: if you have followed our postings with some attention, you may have noticed how we gradually refine the architecture and are pretty much delivering on our promise. The most pressing question that bothers us, to be quite honest, is our financial continuity. So, if you find parts of this design useful and other parts promising, perhaps you should contact us to help us settle the financial side.
Web, Email, AMQP, XMPP, ...
The classical distribution for a hosting provider has been the so-called LAMP stack for 20 or 25 years now: web hosting with PHP and MySQL, plus email. It was meant as a one-size-fits-all offering but, as so often, it turned out to fit most of us badly; it would eventually hollow out the market and lead people to accept "free" services, in spite of their assault on individual privacy.
The choice made in the IdentityHub is to provide very basic services, and to add plugin services during the ServiceHub phase; these plugin services are then capable of replacing or extending the basic services. The choice for basic services means that we can generate their configuration automatically from a Service Directory.
So, the web support provided in the IdentityHub is for static sites -- with access control, of course. And email is a very basic service -- except that it does provide support for groups, aliases and even blacklists and whitelists.
AMQP and XMPP will be new protocols to most. We are currently considering them as serious candidates for inclusion in the IdentityHub; each protocol serves a very particular purpose.
XMPP is instant messaging, a service that has been around for a long time but, for reasons that have nothing to do with technology, ended up encapsulated in proprietary services. Well, getting out is as easy as getting in -- an XMPP node may be self-controlled, yet it can still connect to any other domain, and its users, that follows this standard protocol. The nice thing about open protocols is that they combine the two facets that matter so much in an online life: self-control and connectivity. As soon as you have open protocols like XMPP and AMQP, you get to distribute responsibility and pull your identities under your own realm of influence and control -- meaning, over to a hosting provider that will be happy to sign you up under a contract that serves you, not them.
AMQP is a relatively new protocol, and replaces email in many of the uses where it carries attachments. Where email is ideal for letter-styled communication between users, AMQP is strictly meant for passing data files between users, without introduction or other human-processing aspects, but with the annotation and authenticated, secure transport that enable automatic processing. Some people may be welcome to send you movies of yet another parkour jump, while others should not even try; some may be welcome to add things to your agenda, while others had better call you and make an appointment that you will add yourself. This is the level of control that AMQP is perfect at supplying. In general, we will use AMQP to fill (and empty) queues in the Reservoir conceptual design. At some point in the future, we expect to even add workflow logic, so as to support standard handling for standard operations.
This general concept of a Reservoir is just the level at which we want to have the IdentityHub; it can easily spark off many interesting plugin services, because it can transport media files, EDIFACT traffic, calendar scheduling invitations, and so on. For all of these, it is very useful that AMQP runs over a secure layer, on which we intend to filter on sender identity as well as on the MIME type of the data transferred.
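To make this a bit more tangible, here is a minimal sketch of how a document could be dropped into a Reservoir queue over AMQP. It uses the Python pika client; the broker host, queue name and identities are assumptions made for the example, and the actual Reservoir wiring may well differ.

```python
import pika

# Connect to the (hypothetical) AMQP broker of the hosting provider.
params = pika.ConnectionParameters(host="amqp.example.com")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# An assumed Reservoir ingress queue; the real naming scheme is not fixed here.
channel.queue_declare(queue="reservoir.john", durable=True)

# Annotate the upload so the receiving side can filter on sender and MIME type.
properties = pika.BasicProperties(
    content_type="text/calendar",   # MIME type, usable for filtering
    user_id="mary",                 # authenticated sender identity
    delivery_mode=2,                # persist the message
)

with open("invitation.ics", "rb") as f:
    channel.basic_publish(
        exchange="",
        routing_key="reservoir.john",
        body=f.read(),
        properties=properties,
    )

connection.close()
```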
Global Directory
The Global Directory is a well-defined manner of using LDAP under a domain name, to store its semi-structured data. Specifically, it can be used to look up information for john@example.com by searching for (uid=john) in the LDAP server published underneath example.com. This is not an ARPA2 invention, but an Internet standard.
We use the Global Directory to look up X.509 certificates and OpenPGP public keys for someone like john@example.com. Not only do we publish them underneath the (uid=john) entry, but we also look for them there when the TLS Pool attempts to authenticate a user. This can be used instead of, or in addition to, a trust hierarchy in the common X.509 sense; that, however, is a system that never really took off for identifying end users.
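As a minimal sketch of such a lookup, the following uses the Python ldap3 library to fetch the certificate and OpenPGP key attributes for john@example.com. The server name, base DN and attribute names are assumptions for the example, not a fixed ARPA2 interface.

```python
from ldap3 import ALL, Connection, Server

# The Global Directory for example.com; the hostname is assumed here, and in
# practice it would be discovered through DNS rather than hard-coded.
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind for public data

# Look up john@example.com by searching for (uid=john) under the domain.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(uid=john)",
    attributes=["userCertificate;binary", "pgpKey"],  # assumed attribute names
)

# Each matching entry may carry DER-encoded X.509 certificates and an
# OpenPGP public key, when the user has published them.
for entry in conn.entries:
    print(entry.entry_dn)
    print(entry)
```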
Some alternative methods to locate user certificates and public keys exist; the only well-defined ones store this data in DNS. The disadvantage of doing that is that it reveals the information to anyone, whereas it is our intention to only reveal information for a user like john to people who directly ask for it using (uid=john), or by looking up entries under, say, uid=john,ou=Users,dc=example,dc=com.
In the backend article we described the Object Store, where barren files may be uploaded; we also hinted at AMQP as a queue delivery protocol above. In both cases, these documents will be stored in what we have come to call the Reservoir, which is in essence a nested collection of resources underneath a user's LDAP node. So, users can store data, or have data described and linked to an actual storage location in the Object Store. To search the Reservoir, they would use LDAP; and to download the (potentially large) document, they would turn to the Object Store. This gives the best of both systems.
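Here is a rough sketch of that two-step pattern, assuming purely for illustration that Reservoir entries live under an ou=Reservoir subtree and carry a labeledURI attribute that points into the Object Store; none of these names are fixed by the design.

```python
import requests
from ldap3 import Connection, Server

# Step 1: search the Reservoir (LDAP) for a resource description.
conn = Connection(Server("ldap.example.com"), auto_bind=True)
conn.search(
    search_base="ou=Reservoir,uid=john,ou=Users,dc=example,dc=com",  # assumed layout
    search_filter="(cn=holiday-video)",                              # assumed naming
    attributes=["labeledURI"],                                       # assumed pointer attribute
)

# Step 2: fetch the (potentially large) document from the Object Store over HTTP.
for entry in conn.entries:
    url = str(entry.labeledURI)
    document = requests.get(url).content
    print(f"fetched {len(document)} bytes from {url}")
```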
Not all entries are, like (uid=john), references to individual users; there will also be references to other forms of identity, such as groups and roles. Under each of these, there can also be public credentials that would then be shared by group members (or role occupants). This is the manner in which groups may share data to collaborate on.
Realm Controller
The realm controller is an active online component for authentication of local users (and groups, roles, aliases and so on). It is essentially a Key Distribution Centre for Kerberos, forming the cornerstone for user logins.
The specific bits that we add here are for realm crossover, a problem for which several solutions have been proposed, but always with downsides. After analysing the options, we concluded that the approach through Kerberos holds the best cards -- except that it has not yet been worked out. We have taken it upon ourselves to do so, as it is an essential component for Bring Your Own IDentity. While we are at it, we also take privacy into account, by allowing a client identity to be changed into an alias, role or other inherited form of identity.
Kerberos is empowered by GSS-API authentication and our own TLS-KDH mechanism, but bootstrapping it is difficult; especially on mobile devices, people tend to type short PINs in the presence of others. We plan to replace that habit with a simple swipe of an NFC Tag with the purpose of loading a Kerberos credential into the device. The NFC Tag is assumed to have been programmed at a stationary device in a secure location. It is actually possible to carry multiple NFC Tags, and swipe them to change identity. Security is great, but will only be used when it is made practical enough for daily use!
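On the client side, GSS-API keeps the Kerberos details out of application code. As a minimal sketch, the following uses the python-gssapi bindings to initiate a context towards a service; the service name is an assumption for the example, and the TLS-KDH mechanism itself lives in the TLS Pool, not in this snippet.

```python
import gssapi

# Name of a (hypothetical) service to authenticate to, such as an IMAP server
# at the hosting provider; the Kerberos ticket comes from the credential cache
# that was bootstrapped earlier (PIN entry, NFC Tag swipe, ...).
service = gssapi.Name("imap@mail.example.com",
                      gssapi.NameType.hostbased_service)

# Initiate a security context; the resulting token is what the application
# protocol (SASL GSSAPI, HTTP Negotiate, ...) carries to the server.
ctx = gssapi.SecurityContext(name=service, usage="initiate")
token = ctx.step()
print(f"initial GSS-API token of {len(token)} bytes")
```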
These extensions are fairly large from the perspective of the Kerberos community, but at least the realm crossover changes can be made without changes to client code (!) and the changes for privacy are not just optional, but should also prove trivial. The NFC Tag extension will work through a new App, but the Kerberos client itself should not change on account of it.
The quality of protection can vary when using Kerberos; the initial login may be protected with more than just a secret, for instance through number-generating tokens or other devices. This gives room to hosting providers to vary, and thus to distinguish themselves on the market.
Hosted PKCS #11
Next to Kerberos, we present solutions for X.509 and OpenPGP credentials. With a Global Directory to publish the public sides, what remains is a good solution for the corresponding private keys. We intend to use Kerberos to bootstrap secure access to Hosted PKCS #11.
An important reason why public key systems are not widely used is that they confront users with the need to roll their keys at regular intervals, such as annually. This is a task that must be done with great care, yet it is difficult for average users. As a result, the framework is considered too difficult for most, and not even erected for those who would benefit from having it in place. We solve this by offering to take over the responsibility of private key management from the end user, if the user believes that is a better allocation of responsibilities. In those cases, the middleware runs the Credential/Key Manager to regularly roll over private keys and public credentials, keeping identities alive instead of letting them expire.
Having decided to offer "Hosted PKCS #11" to users, we innovate one step further by adding "Layered PKCS #11". This means that a user who accesses Hosted PKCS #11 will not just be able to use private keys for their own identity, but will also find an additional read-only layer that is shared from the inherited identities of roles, groups and such. This means that all group members can decrypt traffic that was encrypted to the group, that any role occupant can sign on behalf of the role, and so on. That is, if these identities also rely on Hosted PKCS #11.
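As a rough illustration of what this looks like from an application, the sketch below uses the python-pkcs11 bindings to open a session and sign with a key that belongs to a group identity. The module path, token label, PIN handling and key label are all assumptions made for the example, not part of the Hosted PKCS #11 design.

```python
import pkcs11
from pkcs11 import KeyType, Mechanism, ObjectClass

# Load a (hypothetical) PKCS #11 module that proxies to Hosted PKCS #11.
lib = pkcs11.lib("/usr/lib/hosted-pkcs11.so")
token = lib.get_token(token_label="john@example.com")

# Open a session; in a Hosted PKCS #11 setup the login would be bootstrapped
# through Kerberos rather than a locally typed PIN.
with token.open(user_pin="1234") as session:
    # A private key inherited from a group identity shows up in the read-only
    # layer next to the user's own keys.
    key = session.get_key(
        object_class=ObjectClass.PRIVATE_KEY,
        key_type=KeyType.RSA,
        label="sales@example.com",   # assumed label of the group key
    )
    signature = key.sign(b"minutes of the sales meeting",
                         mechanism=Mechanism.SHA256_RSA_PKCS)
    print(f"signature of {len(signature)} bytes on behalf of the group")
```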
It is vitally important that users can control their own security in any way they like. For that reason, the Hosted PKCS #11 layer (as well as the Realm Controller) should facilitate self-control; that is, a user should be able to run these components on their own premises, if they want to prevent the hosting provider of their domain from being able to access their data. For some sensitive applications, this may even be a legal requirement for being able to use the IdentityHub infrastructure.
The quality level of PKCS #11 is going to be a place where hosting providers can vary, thus creating a more lively market: the general interface ranges from software implementations all the way to rack-mounted, tamper-proof, redundant and highly available Hardware Security Modules.
As a final remark, the relationship between the Global Directory and PKCS #11 objects is tight; the ARPA2 Global Directory will represent the private objects in PKCS #11 with references to the public credentials derived from them, but such PKCS #11 descriptors will only be made available to those users who can also access the described private objects over Hosted PKCS #11. Briefly put, this means that LDAP can be used to look for private keys and public credentials that belong together as pairs.
The complete series
- IdentityHub 1: Backends to Die for
- IdentityHub 2: Middleware from Heaven
- IdentityHub 3: Services to Thrive on