Rick van Rein
Published

Thu 22 November 2018


Backbone Innovations in the IdentityHub

While developing our IdentityHub, the core facility where users control their online identity and get security and privacy in one go, we need to connect a number of microservices. We made a few surprising choices and are smiling at the benefits.

It is common these days to develop software in terms of microservices. These relatively small components have a well-defined task and communicate with other components over a queueing backbone.

The queues help to bridge downtime of individual components, and help with operational control and fast recovery of complex systems.

The separation of a complex system into well-defined tasks also helps to focus while developing one task, and to avoid entanglement of code.

A run through our Microservices

We are developing microservices for a number of tasks. Some of the development takes place in the form of Docker containers, making it easy for others to test out the ideas while they are still in progress.

  • IdentityHub is the central site for coordination of identities and their relationships, including the rights set in access control lists. Identities include domains and their users, for which we defined a variety of forms.
  • KeyMaster will be the place where we manage private keys and their use in services, as well as in user view.
  • Kerberos will be the place where Kerberos accounts are created, where users obtain service tickets, and which participates in realm crossover.
  • Global Directory will be a publicly searchable LDAP repository that allows remote parties to search for user keys. Its standardised location under a domain gives it the desired authority and interoperability. By using LDAP, we can filter who may see what, in the interest of privacy. We aim to only show identities when they are explicitly requested.
  • DNS is the module that outputs host IP addresses, as well as keying material for such things as DANE or ACME.
  • Reservoir is a storage facility for documents. Their metadata ends up in LDAP and the actual data is stored in an object store. This integrates with the IdentityHub because it would be a core service when used to collect backups from various plugin services.

Let's take an example by looking at the IdentityHub. Internally, this can be operated by a shell, which gives a good idea of what it does. Here is a list of self-explanatory commands that one might enter into the IdentityHub:

Shell to the ARPA2 IdentityHub.  You can add, del, mov
identities for users, groups, roles and so on.
arpa2id> domain add orvelte.nep Orvelte, Incorporated
arpa2id> user add orvelte.nep bakker Hij die bakt
arpa2id> user add orvelte.nep smid Hij die hakt
arpa2id> user del orvelte.nep bakker
arpa2id> user del orvelte.nep smid
arpa2id> domain del orvelte.nep

Shells with JSON backdoors

Shell-based control is perfect for operators and other power users. It will always be useful to have this, even if just as a last resort. If we took a classic approach to system management, we would add an SSH connection to run these shells remotely, and end up fixing quoting problems for a long while before our code became secure enough for automation.

Instead, we prefer to unleash these facilities over a queueing mechanism. To do this, we add a second interface alongside the existing shell interface, one more in line with queueing and automation: a JSON interface. We can do this because the shell already interprets and validates data and assigns a name to each part. By way of example, the syntax of the user statement is

arpa2id> ?user
user add <domain> <uid> <descr...> |
user del <domain> <uid>

The assignment of <domain>, <uid> and <descr> made here is useful for the implementation code, which can retrieve those fields at will. In addition, the literal tokens user and add help to select the variation of the code to use. All this is incorporated into the shell parser. In fact, the names are not just assigned, but may even be subject to shell-specific checks.
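To sketch what that looks like on the implementation side, consider the following Python fragment. The class and method names are hypothetical, not our actual code; they merely illustrate how the literal tokens select a handler and how the named fields arrive as arguments:

class IdentityHubShell:
    # Hypothetical handlers; in reality the shell parser has already
    # validated each field before any of these are invoked.

    def user_add(self, domain, uid, descr):
        print('Adding user %s@%s (%s)' % (uid, domain, descr))

    def user_del(self, domain, uid):
        print('Deleting user %s@%s' % (uid, domain))

    def dispatch(self, do_, **fields):
        # The literal tokens, e.g. ["user", "add"], select the handler;
        # the named fields are passed on without further ado.
        return getattr(self, '_'.join(do_))(**fields)

# IdentityHubShell().dispatch(["user", "add"],
#     domain="orvelte.nep", uid="bakker", descr="Hij die bakt")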

The feature set of these shells is very useful, but the local formatting rules of shells are not ideal for gluing microservices together. So what we added, strictly through generic code, is a JSON interface with the names defined in the syntax:

 {
   "do_":    [ "user", "add" ],
   "domain":   "orvelte.nep",
   "uid":      "bakker",
   "descr":    "Hij die bakt"
 }

As you can probably tell, this matches the second command line example given above. Each form has its own use: shells are useful for people when they come with command line completion and online help, while JSON is more suitable for automation, and also more common. And so we do both in our microservices.

Normally, JSON structures tend to be loosely structured, not to speak of the literal values passed in them. But the alignment with a validating shell is perfect: the shells can be quite restrictive in the JSON input that is deemed acceptable. And they will make sure that the request fields are completely "used up" while parsing the structure. This sort of thing is not noticeable during proper use, but it quickly gets in the way of abusive patterns.
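As a rough illustration of that strictness, the sketch below uses hypothetical names rather than our actual parser; it accepts a request only when the do_ selector is known and the remaining fields match the declared syntax exactly, so nothing is missing and nothing is left over:

import json

# Declared syntax per command variant, mirroring the "?user" output above.
# (A hypothetical table; the real shells derive this from their grammar.)
SYNTAX = {
    ('user', 'add'): {'domain', 'uid', 'descr'},
    ('user', 'del'): {'domain', 'uid'},
}

def parse_request(raw):
    req = json.loads(raw)
    do_ = tuple(req.pop('do_'))
    expected = SYNTAX[do_]           # unknown commands are rejected outright
    if set(req) != expected:         # extra or missing fields are rejected too
        raise ValueError('request fields not completely used up: %r' % req)
    return do_, req

do_, fields = parse_request('{"do_": ["user", "add"], "domain": "orvelte.nep", '
                            '"uid": "bakker", "descr": "Hij die bakt"}')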

End-to-end Security and Access Control

Most microservice deployments are internal only, with a front-end as the only way to get in. That front-end then handles encryption, authentication and access control (authorisation). We intend to open up our infrastructure to plugin services in the upcoming ServiceHub phase, so this model will not work for us.

What we end up doing could make sense in many more situations. We use end-to-end security for our backbone, and apply an efficient ACL mechanism at the serving side.

This means that access control is built right into the shells, and from there it automatically carries over to JSON. The interesting result is that we can accept shell commands from anyone, as long as they establish their identity; we then look that identity up through our efficient ACL mechanism.

Is this model too refined or too dynamic? Probably not. Instead of having to know what access to constrain in a front-end, we now put it right where the actions are about to launch, and this is going to be simpler in any case. There is bound to be a high degree of dynamicity, because every domain is going to have its own administrator. It can only be helpful to have the lookup of these rights related directly to the place where they are exercised.
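In rough pseudocode, and with hypothetical names and rights, the check just before an action could look as follows; the real ACL mechanism is more refined, but the point is that the lookup uses the authenticated identity and sits directly next to the code that makes the change:

# Hypothetical per-domain rights table; every domain can name its own admins.
ACL = {
    'orvelte.nep': {
        'admin@orvelte.nep':  {'user add', 'user del'},
        'bakker@orvelte.nep': set(),
    },
}

def require(identity, domain, action):
    # Look up the rights right where the action is about to launch.
    rights = ACL.get(domain, {}).get(identity, set())
    if action not in rights:
        raise PermissionError('%s may not "%s" on %s' % (identity, action, domain))

def user_add(identity, domain, uid, descr):
    require(identity, domain, 'user add')   # same rule for local and remote callers
    # ... proceed with the actual change ...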

End-to-end security means that a JSON fragment can be dropped on an AMQP endpoint, to have it routed to the responsible microservice, which can then decide on access control. Our microservice interconnect is open to InternetWide Access! Of course, there will be the usual concerns about attempts to overload these services, but it is quite normal to require SASL authentication before granting AMQP access; even if this is just coarse-grained authentication of an entire bulk provider, it still helps to sort out massive-sending abuse without confining access to a set of static IP addresses (which would disable roaming access by users).
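To make that concrete, here is a minimal sketch of dropping the earlier JSON fragment onto an AMQP endpoint. It uses the pika client for AMQP 0-9-1 and SASL PLAIN credentials purely for illustration; the actual broker address, queue name and SASL mechanism on our backbone may well differ:

import json
import pika   # a common AMQP 0-9-1 client, used here only as an example

request = json.dumps({
    "do_":    ["user", "add"],
    "domain": "orvelte.nep",
    "uid":    "bakker",
    "descr":  "Hij die bakt",
})

# SASL authentication happens before any message is accepted;
# the hostname and queue name below are made up.
conn = pika.BlockingConnection(pika.ConnectionParameters(
    host='amqp.example.nep',
    credentials=pika.PlainCredentials('provider', 'secret')))
chan = conn.channel()
chan.queue_declare(queue='identityhub')
chan.basic_publish(exchange='', routing_key='identityhub', body=request)
conn.close()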

The shells are not omnipotent, not even for root users. They will only grant actions according to access control lists; the same rules that apply to remote users also apply to local shell users. As a result, shell access can be open to anyone, and what each user can do is still controlled. No special user accounts are needed, so administration of the systems can be fairly relaxed and root access is only needed for hardware and software management, but not for configuration management.

The mechanism for end-to-end security cannot be TLS, of course; TLS only provides hop-to-hop protection. Instead, we use GSS-API for encryption and mutual authentication. Though mostly known as the way to embed Kerberos5, other GSS-API mechanisms exist. Kerberos in particular is based on symmetric-key cryptography, which makes the mechanism highly efficient and yet protects against attacks with Quantum Computers: we have got your backs covered!

Is this difficult? Not at all. To the microservice programmer, GSS-API is a function through which a network packet passes to add protection, and another on the other end to verify and remove the protection. Or more likely, it is part of the backbone communication library. To the operator, GSS-API is even simpler, because it is baked right into SSH as an authentication mechanism. Just connect to the right server, let single sign-on take care of silent login and start a shell. You can immediately start to enter the desired commands. Access control is not visible during permitted actions, making it easy to have it everywhere. This actually means that anyone, not just hosting system operators but also domain owners and even users, can safely be granted operator shell access!
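For those curious what those functions look like in practice, here is a minimal sketch with the Python gssapi bindings. The service name is made up and the token exchange is only hinted at; the essence is a wrap() on the sending side and an unwrap() on the receiving side, once the security context has been established:

import gssapi

def protect(ctx, payload):
    # Sending side: add integrity and confidentiality to a network packet.
    return ctx.wrap(payload, True).message      # True: encrypt, not just sign

def unprotect(ctx, token):
    # Receiving side: verify and remove the protection.
    return ctx.unwrap(token).message

# Establishing ctx takes the usual GSS-API token exchange; for an initiator
# (the service name 'amqp@hub.example.nep' is made up):
#
#   service = gssapi.Name('amqp@hub.example.nep',
#                         gssapi.NameType.hostbased_service)
#   ctx = gssapi.SecurityContext(name=service, usage='initiate')
#   out_token = ctx.step(in_token)   # repeat until ctx.complete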
