Semantic Import Versioning in the wild
The best and worst thing about semantic import versioning is that it makes BC-breaking changes hard.
In the past few days, Russ Cox has made a splash in a series of white papers describing Go and Versioning. In them, he coins a new term, Semantic Import Versioning, distilling it to the following principle:
If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.
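To see what this means concretely in Go under Russ's proposal: the major version is spelled into the import path itself, so two major versions are simply two different packages that can coexist in one build. (The example.com/foo paths and the Do function below are placeholders.)

```go
package main

import (
	foo "example.com/foo"     // v1 lives at the original import path
	foo2 "example.com/foo/v2" // v2 is a new import path, hence a new package
)

func main() {
	// Both major versions link into the same program. Because the paths
	// differ, upgrading is always an explicit edit to the import, and
	// code importing v1 can never be broken by anything v2 does.
	foo.Do()
	foo2.Do()
}
```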
I am very happy Russ has come up with a good name for semantic import versioning, because this concept has been out there for quite a long time without a concise name or formulation of its design. In fact, I would even say that semantic import versioning is inevitable once you accept the premise that you will never break user code. It is so inevitable that it is already practiced in the wild in a variety of places. Here are a few examples:
- REST APIs are often versioned with explicit version numbers in the request (e.g., in the URI) to let clients specify which version of the API they want; a minimal sketch of this pattern appears after this list. If a client wishes to upgrade to a new version of the API, they must rewrite their API requests to the new URIs. REST APIs are forced into semantic import versioning because the traditional mechanism for avoiding breakage, version bounds, is unavailable in this setting.
- Stripe's REST API pins each of their customers to the version of their API at the time they subscribed; even if Stripe makes a BC-breaking change in the future, the API for a given customer never changes. In this case, the semantic import is still there, but it is implicit (associated with a customer account) rather than explicit (in the client code); consequently, Stripe is willing to break BC a lot more frequently than would otherwise be acceptable for a REST API. Stripe's blog post points out a very important aspect of maintaining libraries under semantic import versioning, which is that you need to put in the engineering effort to sustainably manage all of the semantic imports available to users.
- Semantic import versioning is widely practiced in programming languages, in the form of language standards/epochs. In C++, the -std=c++xx flag specifies a particular semantic version to be "imported". It would be unheard of for a compiler to unilaterally break backwards compatibility of -std=c++11 in a new revision of the compiler; similarly, a user must explicitly migrate to a new language standard to take advantage of any new features. Rust epochs have a similar tenor. The choice between Python 2 and Python 3 is another form of semantic import versioning.
- Semantic imports don't have to just specify a number. Feature flags, such as {-# LANGUAGE #-} pragmas in GHC Haskell, let users opt into BC-breaking changes at their use-sites.
- In the deep learning world, ONNX models declare a semantic import of a particular version of an operator set. Operator semantics can evolve in BC-compatible ways without bumping the version, but to pick up a BC-breaking change, you must update the import statement.
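To make the REST example concrete, here is a minimal sketch of a versioned HTTP API in Go (the endpoints and payload shapes are invented for illustration). The version prefix in the URI plays exactly the role of the import path: the v2 handler is free to break compatibility, because clients only reach it by explicitly rewriting their requests.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// v1 of the hypothetical contract: a bare list of user names.
func usersV1(w http.ResponseWriter, r *http.Request) {
	json.NewEncoder(w).Encode([]string{"alice", "bob"})
}

// v2 breaks compatibility by returning structured records instead.
func usersV2(w http.ResponseWriter, r *http.Request) {
	type user struct {
		ID   int    `json:"id"`
		Name string `json:"name"`
	}
	json.NewEncoder(w).Encode([]user{{1, "alice"}, {2, "bob"}})
}

func main() {
	// Both "semantic imports" stay mounted side by side; /v1/users must
	// keep its old behavior for as long as clients depend on it.
	http.HandleFunc("/v1/users", usersV1)
	http.HandleFunc("/v2/users", usersV2)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```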
One insight I draw from these examples is that what we call an "import version" is really a specification for some series of implementations. To someone who has spent a lot of time thinking about module systems, this is really a step in the right direction: program against interfaces, not implementations.
Another thing we can observe from these examples is the real-world consequences of semantic import versioning. One particular effect stands out: semantic import versioning is challenging for maintainers, because it pressures them to maintain multiple major release branches simultaneously (after all, who wants to use pkg/v2 only to have it immediately go unmaintained when pkg/v3 comes out?). In the traditional release-branch model, where one creates a release branch for each major version, only the most well-staffed software development teams can afford to maintain multiple active release branches (backporting patches is a lot of work!). The friction involved in managing multiple implementations means that less well-staffed projects will face strong pressure to never break backwards compatibility.
This may not sound like such a bad thing to the "don't break my stuff" grumps in the audience, but a lot of bugs and security problems have stemmed from maintainers being literally unable to outlaw harmful and dangerous APIs with BC-breaking changes. The danger of moving the calculus further towards preserving backwards compatibility is further entrenchment of bad "first try" APIs. So while I do not deny that it is a stroke of genius in Russ's framing to make the semantic version part of the package path, it also sets up a bad expectation about the feasibility of BC-breaking changes, when what we should be doing is improving the state of tooling so that making a BC-breaking change is "no big deal." To me, the most promising way to reduce the friction of a BC-breaking change is to organize your software development so that a single codebase, under a single build, implements multiple specifications (v1, v2 and v3). As we saw from the examples, compilers can manage this (GCC supports multiple C++ standards), but traditional programming languages make it hard for libraries to do the same thing.
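Here is one way such an arrangement might look in Go, sketched with invented package names: both major versions are published as thin forwarders over a single shared implementation, so a fix to the core lands in v1 and v2 simultaneously.

```go
// internal/core/core.go: the one real implementation, shared by
// every major version.
package core

import "strings"

func Greet(name string, shout bool) string {
	msg := "hello, " + name
	if shout {
		msg = strings.ToUpper(msg)
	}
	return msg
}
```

```go
// greet.go (the package at the module root): the frozen v1
// specification. v1 never shouted, so the new parameter is pinned
// to the old behavior.
package greet

import "example.com/greet/internal/core"

func Greet(name string) string { return core.Greet(name, false) }
```

```go
// v2/greet.go: the v2 specification exposes the BC-breaking
// parameter, again just forwarding into the shared core.
package greet

import "example.com/greet/internal/core"

func Greet(name string, shout bool) string { return core.Greet(name, shout) }
```

Of course, writing and maintaining these forwarders by hand is exactly the friction at issue, which brings me to the ideas below.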
I don't know exactly how to solve this problem, but I do have a few ideas:
- Treat specifications as data. This means you can write code that operates over a specification and, for example, automatically generates the boilerplate necessary to forward from one implementation to another (see the sketch after this list).
- Don't ask programmers to manually write diffs. I would never ask you to make a source code change by writing a diff by hand, just because that is the best representation for a VCS to store; instead, you make the edit and expect the system to figure out the diff. BC-breaking changes to APIs should follow the same principle: it is much simpler and easier to understand if you just make the change, rather than write a description of the change.
- Package-level modularity. In a traditional package management system, I can't release a single bundle of source code which presents multiple "package interfaces". Under vgo, even if I have a shared codebase implementing v1 and v2, I still have to cut two releases to publish new versions of the code. This is backwards; there is no reason a single unit of code cannot provide multiple interfaces, and package tools should make this possible.
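As a sketch of the first idea, suppose a package's specification were available as plain data; tooling could then compute which parts of an old major version can be forwarded to the new implementation mechanically, and which need human attention. Everything here (the Spec type, the function names) is invented for illustration.

```go
package main

import "fmt"

// A toy encoding of a package's specification as plain data: just the
// exported functions and their signatures.
type Spec map[string]string // function name -> signature

var v1 = Spec{"Greet": "(name string) string"}
var v2 = Spec{
	"Greet": "(name string, shout bool) string",
	"Wave":  "() string",
}

func main() {
	// Because the specs are data, a tool can classify every v1 function.
	for name, sig := range v1 {
		switch newSig, ok := v2[name]; {
		case !ok:
			fmt.Printf("%s: removed in v2; keep an implementation on the v1 side\n", name)
		case newSig == sig:
			fmt.Printf("%s: unchanged; generate a verbatim forwarder to v2\n", name)
		default:
			fmt.Printf("%s: signature changed %s -> %s; needs a hand-written adapter\n", name, sig, newSig)
		}
	}
}
```

A real system would go further and emit the verbatim forwarders itself, leaving the maintainer to write only the genuine adapters.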
These ideas are perhaps a bit too radical to expect Go to adopt, but maybe the next generation of programming languages will explore this design space further.