Looking back at my earlier posts, I'm surprised how utterly Java-centric everything I did was. Never mind things like bytecode instrumentation; of course that's JVM-specific, and it's cool stuff!
But for other things, I think I would try to achieve more interoperability these days. Specifically, I'm looking at JGroups. It has some nice features in terms of consensus, making sure that all known nodes have the same view of the system, but viewed purely as a networking library it is surprisingly limiting: you have to implement all your nodes in a JVM language.
I think there are generally three layers of interoperability. The JGroups example is one extreme: basically no interoperability. It has one Java implementation, and its network protocols are specific to that implementation. I don't know the specifics of JGroups, but the protocol used may well be tailored to transmit very specific manipulations of JGroups nodes' states. In other words, there are lots of assumptions that make JGroups easy to implement in Java, but very, very hard to port to another language.
One kind of interoperability comes through dynamic linking. For example, through native methods, Java can call code written in C or another language that is compiled into machine code. As long as the JVM uses the ABI and calling conventions of the native library, access is no problem. The native code is executed in the calling process, so this kind of interoperability is strictly for libraries, not for client/server communication.
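To make that concrete, here is a minimal sketch of the Java side of such a binding; the library and method names are invented for illustration. The class declares a native method and loads the shared library that implements it.

```java
// Hypothetical binding to a native networking library (names made up).
// The C side must export a function named Java_NativeTransport_send,
// following the JNI naming convention.
public class NativeTransport {
    static {
        // Loads libnativetransport.so (Linux) or nativetransport.dll (Windows)
        // from java.library.path; the native code then runs inside the JVM's process.
        System.loadLibrary("nativetransport");
    }

    // Declared in Java, implemented in C, called like any other method.
    public native void send(String peerAddress, byte[] payload);
}
```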
So if it were CGroups instead, written in C, then there could simply be a Java binding, a .NET binding, and so on, and all platforms using these bindings could cooperate. Unfortunately the reverse isn't true: the JVM is, quite obviously, a virtual machine, and you can't simply call code targeted at a VM. That VM first has to run, and then you have to interface with it before finally getting access to the library. This is true for any language that requires a runtime environment (e.g. interpreted languages), and it gets worse the more complicated the runtime gets.
Which leads me to the third kind of interoperability: wire protocols, which solve the issue of having a protocol that is full of assumptions from the original implementation. Instead, the first step in designing the application or library is to design a binary or text-based encoding and to clearly state what kinds of communication are allowed, in what order, at what time, etc. The protocol has to be simple and explicit enough to write down, so it should also be possible to implement it in different languages. That doesn't allow you to use one library in different contexts, but it makes it feasible to port the library to different platforms.
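As an illustration, here is what a deliberately tiny line-based protocol and a client for it might look like. The verbs, port, and framing are all made up for this sketch; the point is only that any language with TCP sockets and UTF-8 strings could speak it.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Invented wire protocol, one UTF-8 line per message:
//   client -> server:  JOIN <nodeName>
//   server -> client:  OK <memberCount>   or   ERR <reason>
public class JoinClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 4711);
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            out.println("JOIN node-1");   // send one protocol message
            String reply = in.readLine(); // e.g. "OK 3" or "ERR full"
            System.out.println("server said: " + reply);
        }
    }
}
```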
If it weren't for that last kind of interoperability, the Internet would be a very different place. Imagine if websites used a browser-specific protocol instead of HTTP! Microservice architectures also rely on well-defined protocols to connect otherwise incompatible systems: of course, a service written in Java can expose some JGroups-derived functionality via, say, HTTP, and then any other application can access that service.
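As a sketch of that idea, a Java service could expose a single value over plain HTTP using the JDK's built-in HttpServer. The endpoint and the hard-coded response below stand in for whatever the Java-only library would actually compute; any client that can issue an HTTP GET can consume it.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical HTTP facade: whatever the Java-only library computes internally,
// other platforms only ever see a plain HTTP response.
public class ClusterSizeEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/cluster/size", exchange -> {
            byte[] body = "3".getBytes(StandardCharsets.UTF_8); // stand-in for a real lookup
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // now: curl http://localhost:8080/cluster/size
    }
}
```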