In a series of 3 independent articles, we're introducing the current
design of the IdentityHub. We intend to start this work in March 2017.
Everything in the backend is designed for scalability. There is hardly any per-item or per-request overhead; instead, the backend handles everything in bulk. To build on this advantage, all components are designed to support redundant and load-balanced operation, or at least to let the hosting provider who installs the IdentityHub configure them in that manner.
Please note: if you have followed our postings closely, you may have noticed how we gradually refine the architecture and are delivering on our promises. To be quite honest, the question that bothers us most is our financial continuity. So, if you find parts of this design useful and others promising, perhaps you should contact us to help us settle the financial side.
Object Store: Reservoir, Web, Queues, Backups...
The IdentityHub involves a storage component, meant for putting away large data files as well as smaller ones. This is implemented through an "Object Store", a concept from the Cloud Storage world.
This generic service can be useful for many applications, including:
- Plugin services may store their backup files as an object
- Static web sites may store documents and style sheets directly (and future plugin services may add dynamicity to the websites)
- Queues may use objects for intermediate storage of data between users and between sites
- Reservoir is a (public, user-only or group-shared) collection of annotated data files (such as media, calendars, writeups, ...)
We standardise on a RESTful interface for the Object Store, with additional facilitation of access control. Our current choice of implementation is OpenStack Swift: partly because of its cut-and-dried API, which we could reimplement if we wanted to; but mostly because it is flexible in its degree of replication, supportive of off-site replication, and because it scales down to a simple file-based service if so desired. Finally, it has pluggable authentication, which integrates well with our IdentityHub plans.
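As a concrete sketch of what that RESTful interface looks like: Swift addresses every object as `/v1/<account>/<container>/<object>` and authenticates each request with a token header. The endpoint, account, container and token below are hypothetical, for illustration only.

```python
# Sketch of the Swift-style RESTful naming scheme; all names are
# hypothetical examples, not real IdentityHub endpoints.

def swift_object_url(endpoint: str, account: str, container: str, obj: str) -> str:
    """Build the canonical /v1/<account>/<container>/<object> path."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

def swift_headers(auth_token: str) -> dict:
    """Swift authenticates each request with an X-Auth-Token header."""
    return {"X-Auth-Token": auth_token}

# An HTTP PUT to this URL would store the object; a GET retrieves it.
url = swift_object_url("https://objects.example.com", "AUTH_demo",
                       "reservoir", "photo.jpg")
headers = swift_headers("tok-123")
print(url)  # https://objects.example.com/v1/AUTH_demo/reservoir/photo.jpg
```

The same naming scheme serves all the uses listed above: a backup, a static web document and a queued data item only differ in which container they land in.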
We are not saying that other backends, such as Git, will never end up in the IdentityHub, but they are simply not planned for now. Also, we aim to support only the bare essential services of a generic nature in the IdentityHub.
We believe that an Object Store facilitates a diverse market of providers, varying replication levels and similar factors. As an example, the off-site replication facilities could be combined with ordering or selection of DNS records to offer replicated web services with a preference for servers near a requesting client. This level of service is quite new to the market of general-purpose service hosting!
We standardise most of our semi-structured data with LDAP, and have already begun supporting it with the SteamWorks components. These enable a subscription to the data with near-instant updates, meaning that any configuration changes can arrive in their intended sweet spots in little or no time.
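To give a feel for how such semi-structured data might look in LDAP, here is a hypothetical service-configuration entry; the DIT layout and names are purely illustrative, not our final schema.

```ldif
# Hypothetical entry: configuration data that subscribed components
# would pick up through SteamWorks.  Layout is illustrative only.
dn: cn=webhosting,ou=services,o=example.com
objectClass: applicationProcess
cn: webhosting
description: Configuration picked up by subscribed plugin services
```

A subscriber interested in this service would hold one standing subscription on such entries and receive changes as they happen.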
For the time being, the IdentityHub focusses on its own services, such as those described in part 3 of this series. But when we enter the ServiceHub phase, quickly after we finish IdentityHub, we are going to fan out and use the same setup to share configuration information with plugin services from independent service providers.
The Service Directory differs from user-facing data precisely because of its bulk nature. Rather than sorting information first into domains, and then perhaps into services, this component first sorts descriptions by backend provider or service category, so that a single subscription suffices for each of these, and configurations and updates can be received over a single bulk query. This is much simpler than separating connections per domain or per user, and also much simpler than filtering a generic description for many services. Note, too, that access control is much simpler to arrange in this bulk style!
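The difference in query shape can be sketched as follows; the attribute names are made up for illustration. A per-domain approach issues one search per hosted domain, whereas the bulk approach lets one backend provider fetch everything for its service category in a single filter.

```python
# Illustrative comparison of per-domain queries versus one bulk
# subscription; attribute names are hypothetical.

def per_domain_filters(service: str, domains: list[str]) -> list[str]:
    """One LDAP filter (and thus one query) per hosted domain."""
    return [f"(&(serviceName={service})(domain={d}))" for d in domains]

def bulk_filter(service: str) -> str:
    """A single filter that a backend provider subscribes to once."""
    return f"(serviceName={service})"

domains = ["alice.example", "bob.example", "carol.example"]
print(len(per_domain_filters("webhosting", domains)))  # 3 queries
print(bulk_filter("webhosting"))                       # 1 subscription
```

With thousands of hosted domains, the bulk style keeps the number of connections and subscriptions constant per provider rather than linear in the number of domains.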
This part of the diagram has been dashed for now, because it is really just part of the ServiceHub phase. We will formulate authorisation requests using Diameter lingo so that plugin services will be able to learn whether certain users may access certain resources and/or communicate with particular peers.
We are hoping to facilitate plugin services on a need-to-know basis only; at the same time, it is in everyone's interest to support some form of caching. This is in fact one of the more interesting challenges in our infrastructure!
In case you are wondering why we selected Diameter over RADIUS: Diameter is the protocol more aligned with the future; moreover, translation from RADIUS to Diameter is straightforward and sufficiently implemented to be of no practical concern.
The funnel tunnel is something we envisioned quite a while ago, as a sort of umbilical cord between a plugin service and the IdentityHub. It is specifically designed to facilitate bulk operation of protocols, and it runs over a reliable and easily secured transport, namely SCTP over IPv6.
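A minimal sketch of opening such a transport endpoint, assuming a platform with kernel SCTP support; the socket is created and closed without connecting anywhere, and platforms without SCTP fall through gracefully.

```python
import socket

# Sketch: an SCTP-over-IPv6 socket as a funnel-tunnel-style transport.
# SEQPACKET mode preserves message boundaries, which suits the bulk
# wrapping of many small protocol exchanges.  SCTP availability
# depends on the operating system.
try:
    sock = socket.socket(socket.AF_INET6,
                         socket.SOCK_SEQPACKET,
                         socket.IPPROTO_SCTP)
    sock.close()
    print("SCTP socket created")
except (OSError, AttributeError):
    # Kernel or platform without SCTP support
    print("SCTP unavailable on this platform")
```

Because SCTP multiplexes streams over one association, many plugin-service exchanges can share a single secured connection instead of each opening its own.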
The protocols being run may vary between plugin services, but will most likely include access to the other backend components: the Object Store, the Service Directory and Access Control. In addition, it may carry much more.
The protocols are wrapped in a secure layer en masse, to avoid encryption overhead on every end-user request. This contrasts with common approaches to security in web systems; indeed, systems that follow this per-request authentication and authorisation approach are usually on the slow side as a result of all the public-key operations needed.
The complete series
- IdentityHub 1: Backends to Die for
- IdentityHub 2: Middleware from Heaven
- IdentityHub 3: Services to Thrive on