The IdentityHub is a rather complex composition of processes. How are
we going to design it with practical software? How are we going to
help you run your own components? Here's what we have in mind.
As described before, we have a schema of blocks that each fulfil a certain purpose. Making it easy to run such components will go a long way towards getting hosters to make the transition to this new service style. We also realise that these hosters are going to want their own ways of setting up these services, in ways that we cannot and intentionally will not predict.
Docking into Containers
It is now common practice to package functional units in a container, and the most renowned technology for doing so is Docker. Though the most straightforward implementation builds on Linux containerisation technology (originally LXC), a Docker container can in fact be run on a plethora of platforms.
Docker further comes in two versions to suit everyone's needs; both a free Community Edition and a supported Enterprise Edition exist. We love this model; it means that businesses can be served by knowledgeable support desks, while nobody is forced to pay for freely replicated software for which they expect no human support framework. Moreover, having a business plan to back it is perhaps the best model for active development and maintenance of free/open software.
Our current intention is to take each of the blocks in the block diagram below, and publish them as individual Docker containers. There will likely be versions with alpha, beta and RC level software.
(Block diagram of the IdentityHub components.)
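As a taste of what this could look like for a hoster, here is a minimal sketch that pulls and starts one such block with the Docker SDK for Python; the image name identityhub/kerberos-kdc, its tag and the port mapping are purely hypothetical placeholders, not published artefacts:

    # Minimal sketch: run one (hypothetical) IdentityHub block as a container.
    # Requires the Docker SDK for Python:  pip install docker
    import docker

    client = docker.from_env()

    # Pull a hypothetical image for one block, say a Kerberos KDC, at beta level.
    image = client.images.pull("identityhub/kerberos-kdc", tag="beta")

    # Run it detached; because every block is its own container, a hoster can
    # compose, replace or relocate blocks independently of the others.
    container = client.containers.run(
        image,
        name="idhub-kdc",
        detach=True,
        ports={"88/tcp": 88, "88/udp": 88},          # Kerberos service ports
        restart_policy={"Name": "unless-stopped"},
    )
    print(container.name, container.status)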
Each their Own
We have explicit intentions of allowing users to run one or a few components under their own control. Some might want to do this for their PKCS #11 key store, for example. Others may cringe at the thought that someone else might get to manage their Kerberos Realm Controller.
When each block becomes a Docker Container, the blocks already have the potential to be highly pluggable. What remains is a communication standard between them.
We were incredibly pleased to learn that Docker runs on Raspberry Pi, on C.H.I.P. and on quite a few other embedded platforms. This means that we have the option to build Docker Containers for these platforms, and that users can even run them at home.
This embedded Docker system is founded on Raspbian, which is not far from our development target platform, Debian, so the usual glitches of transitioning to an embedded setup may not be as hard as they sometimes are.
Most interestingly, running these components as Docker Containers
also means that the home user can still run their components for
home automation (say) on the same machine! Plus, they would be
able to hack around and make adaptations to the IdentityHub home
kit. Hopefully with the intention to share ;-)
Messaging between Containers
As long as Docker Containers run at the premises of a hosting provider, without links to other players, it stands to reason that they trust each other at face value. This is a fair assumption when the network is protected by firewall and bridge ACLs, with monitoring and so on. So the link between components run on-site should be efficient, and probably need not go through the hassle of authentication and encryption.
Remote components from users are a different matter, of course. We are talking about identities and the keys that represent them, which implies that we really must employ authenticated connections with message authentication codes. In addition, the information tends to be private, so we must employ encryption. So, between two sites we shall insist on a protection layer; concretely, a GSS-API protection layer with the Kerberos5 mechanism.
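To make the idea concrete, here is a minimal sketch of how a component could establish such a GSS-API protection layer with the python-gssapi bindings; the service name idhub@remote.example.org and the send_token()/recv_token() transport helpers are assumptions made purely for illustration:

    # Minimal sketch: establish a GSS-API (typically Kerberos5) security context
    # and wrap messages for integrity and confidentiality.
    # Requires python-gssapi and a Kerberos ticket for the local component.
    import gssapi

    def protect_channel(send_token, recv_token):
        # Name the remote IdentityHub component as a host-based service.
        server = gssapi.Name("idhub@remote.example.org",
                             name_type=gssapi.NameType.hostbased_service)

        # Initiate a security context with the default (Kerberos5) credential.
        ctx = gssapi.SecurityContext(name=server, usage="initiate")

        # Exchange context tokens with the peer until establishment completes.
        in_token = None
        while not ctx.complete:
            out_token = ctx.step(in_token)
            if out_token:
                send_token(out_token)
            if not ctx.complete:
                in_token = recv_token()
        return ctx

    # With the context in place, every message is wrapped: authenticated
    # (message integrity) and encrypted (confidentiality).
    #   wrapped = ctx.wrap(b"payload", encrypt=True).message
    #   payload = ctx.unwrap(wrapped).message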
Then there is the messaging system itself. We will not design one ourselves, but instead use AMQP 1.0 (or see the video), which is an OASIS standard, meaning it is an open specification and therefore interoperable. As may be expected, it has been implemented in various open source packages. And as you may have guessed, AMQP can be wrapped in a GSS-API protection layer in a well-specified manner.
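As a small illustration of what such a queue could look like in practice, here is a sketch that sends one message over AMQP 1.0 with the Apache Qpid Proton Python bindings; the broker URL, the queue address and the message body are assumptions for the sake of the example:

    # Minimal sketch: send one message over AMQP 1.0 with Qpid Proton.
    # Requires python-qpid-proton:  pip install python-qpid-proton
    from proton import Message
    from proton.handlers import MessagingHandler
    from proton.reactor import Container

    class SendOne(MessagingHandler):
        def __init__(self, url, address, body):
            super(SendOne, self).__init__()
            self.url, self.address, self.body = url, address, body

        def on_start(self, event):
            # Connect to the broker and open a sender on the queue address.
            conn = event.container.connect(self.url)
            event.container.create_sender(conn, self.address)

        def on_sendable(self, event):
            # Credit is available: send the message, then shut down cleanly.
            event.sender.send(Message(body=self.body))
            event.sender.close()
            event.connection.close()

    # Hypothetical broker and queue between two IdentityHub components.
    Container(SendOne("amqp://broker.example.org:5672",
                      "idhub/key-requests",
                      {"request": "generate-key", "user": "john@example.org"})).run()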
The first task at hand in the design of the IdentityHub will be to standardise the messaging formats and the queues between the various Docker Containers implementing IdentityHub components; then we can start building.
Update 17-7-2017: The most beneficial part of AMQP is that it allows queues to stay up while the components around them go down. This means that the components are not coupled too tightly, and that work can go on in the face of temporary downtime of some components.
While this may apply to batch processes (like creating key material and entering new data in the directory), the same is not true for live and interactive exchanges. Authentication and authorisation in particular demand more direct responses. In these cases, we fall back on SCTP, which is a reliable, message-based protocol. It has many of the facilities of AMQP, but is more tightly coupled, and that is just what interactive traffic requires.
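A minimal sketch of that interactive path, using the kernel SCTP support that Linux exposes through Python's standard socket module; the host name, port and one-line request format are invented for the example only:

    # Minimal sketch: one interactive request/response over SCTP (Linux only,
    # with the kernel SCTP module loaded).  SCTP delivers reliably and keeps
    # message boundaries, so no extra framing is needed.
    import socket

    # One-to-one ("TCP-style") SCTP socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                         socket.IPPROTO_SCTP)
    sock.connect(("idhub.example.org", 10088))

    # A direct authentication query to the peer component, answered in-line
    # rather than travelling through a queue.
    sock.send(b"AUTHN? john@example.org")
    reply = sock.recv(4096)
    print(reply)

    sock.close()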