Infrastructure Could Be Commodity Code
Thanks to Tanya Reilly and Ross Donaldson for their invaluable feedback on the draft of this post.
On a recent week of vacation, I prepared for the three-hour plane ride from New York to Miami the way I prepare for all relatively short flights: by downloading approximately three and a half weeks' worth of reading material. Along with a fantasy novel and some Tom King comic runs, I grabbed the docs for Ballerina. I've been meaning to look at Ballerina since I heard about it at KubeCon 2018, and what better time to check out a new language than when you don't have wifi or a laptop? Reading the Ballerina docs got me thinking about programming languages and what innovation in languages looks like.
Broadly speaking, I’ve noticed that popular new programming languages tend to lean towards one of two bents: Adventures in Type Theory (ATT) and Revamped Curly Braces (RCB). Yes, I’m going to gloss over the fact that a new flavor of LISP gets popular every so often; as always, LISP is its own thing.
ATT languages are easy to spot: they'll most likely be purely functional, the syntax will look like a flavor of Haskell with some operator that looks like ==-=~~, and their homepage will feature a description like "Flurb is a language with twelve-dimensional topological functo-morphic types." These languages are a lot of fun. Type theory is a really interesting area of research, and given how many absurd things I do on a computer on a daily basis, I see improving the compiler's ability to point out my mistakes as a worthwhile pursuit. But even the most user-friendly of these languages seem to struggle to find adoption in the undifferentiated and unexamined pool of companies that I will refer to as "the industry." I think this largely owes to where these languages fall on Simon Peyton Jones' spectrum of correct and useful. All of these languages offer increasing levels of verifiable correctness, but for some reason none of them can seem to standardize on a web framework. I know a lot of people who enjoy Haskell—me, for instance—but not too many people are using it to write programs for their job.
On the more "useful" but less "correct" side of the plot are RCB languages. These are languages that attempt to make up for the sins of past languages while keeping enough of their primitives to be easily adoptable by other curly brace enthusiasts. These are languages like Go and Rust, whose syntax is familiar enough for anyone who's ever written C or Java, but which attempt to rein in unsafe program behavior a little bit. These languages are great, and of course we should examine and reimplement language concepts as they're used in new contexts. There's no way the designers and implementers of C expected things like buffer overflows to be used as a vehicle for malicious user behavior, and it's ice skating uphill to try to adjust to that new reality without some language-level innovations. But these languages also suffer from what I call the "killer app" hypothesis of language adoption.
Briefly, my "killer app" hypothesis of language adoption states that the lifecycle of an RCB language after creation follows a curve that is determined by the "killer apps" that it helps create. Note that I'm using the term "app" here to mean "generic software," because the word "app" has lost all meaning anyway and "killer software" sounds silly to me. Languages that are widely adopted in the industry are adopted because someone has an idea to write an app that explodes in popularity, and the new language is the only first-class way of interacting with that app. For example, Kubernetes was written in Go, and all first-class library support for extending Kubernetes is in Go. At some point, everyone decided that if their service wasn't running on Kubernetes, it wasn't worth running. Cue an explosion of use cases that need to extend Kubernetes, and many more lines of Go entering the world as a result. People became comfortable with Go and used it to write other tools. Some of those tools became popular, and the feedback loop began again. Plausible? Yes! Scientific? Absolutely not! Do I plan to actually do some research and see if this plays out in a meaningful way? Maybe!
One implication of the "killer app" hypothesis is that programming languages are only as adoptable in "the industry" as the productivity gains that they enable. People don't adopt Kubernetes because it's cool; they adopt it because of the productivity gains that running on a container platform enables—and because the CNCF has great marketing, but that's another post.
The selling point of languages like Ballerina and Dark is a bit different from that of ATT and RCB languages: they lean into an idea of language-level abstraction of commodity code. Commodity code is a term used by Sandi Metz to describe bits of code that accomplish mostly-undifferentiated application behavior, such as handling the details of serving traffic over the internet or implementing session-level persistence. Put another way, commodity code is all of the bells and whistles that a service needs if it doesn't want to just print to standard out. Frameworks like Ruby on Rails remain popular in large part because of their ability to abstract over commodity code; Rails developers don't worry about how to serve internet traffic or make database connections, they write business logic and let the framework handle the gritty details. And the details are gritty. One of the most interesting aspects of commodity code is that, while it's undifferentiated in its per-application use, it's often extremely hard to implement correctly.
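To make the distinction concrete, here's a minimal sketch in Go (my own illustration, not pulled from any of these languages' docs). The one-line greeting handler is the entire business logic; everything required to actually serve traffic, from socket handling to HTTP parsing to connection management, is commodity code supplied by the standard library.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greet is the only business logic in this program. Everything else it
// needs in order to serve traffic (sockets, HTTP parsing, connection
// handling) is commodity code supplied by net/http.
func greet(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "hello from the business logic")
}

func main() {
	http.HandleFunc("/", greet)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Even in a program this small, the ratio holds: one line of differentiated behavior and an entire HTTP stack of gritty details handled elsewhere. Frameworks like Rails just draw that line at a much higher level.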
What's considered commodity code has changed over time. At some point, we all decided that HTTP was how we were going to serve our sites to other humans, and thus just about every language written since the dot-com era has a standard way of abstracting over HTTP. Infrastructure APIs—Google Compute Engine, for instance—present the opportunity to commoditize code in even more complex domains. For instance, load balancing is a hard problem with a commodity solution provided by just about every cloud vendor. So you might ask, "why is it that we still regularly configure that separately from the rest of the application?", "why do I write a web server in Ruby or Go or JavaScript and then have to write Terraform to provision load balancers separately when the load balancer is only for the web server?", or "why should I have to write a Dockerfile and a bunch of Kubernetes YAML to containerize and deploy an application that falls under the 80% of services that can be standardized as 'web server running in a container backed by a Kubernetes deployment?'" At this point, some percentage of people reading this are having a mild heart attack while gasping the words "coupling" and "the Unix philosophy." I 100% agree that commoditized infrastructure code wouldn't work in all cases. People are always going to need specialized tools. It's OK, folks: sometimes an integrated platform is a good thing.
Languages like Ballerina and Dark are running with this notion of extended commodification, which is one reason I find them so fascinating.
Ballerina describes itself as both a "language" and a "platform" for writing "cloud-era" applications. There are a few things I like in this definition: it acknowledges an integrated platform around the language, and it acknowledges that it's a language well-suited for a particular era—and by extension, a model that will eventually go out of style—of programming. That last nod to the language's eventual obsolescence is probably unintentional, but I really like how it frames and constrains the problem and paradigm that the language is solving for. Ballerina has fundamental, language-level abstractions for things like clients and services, which makes it a really interesting model. More importantly, it has language-level support for deploying its own applications to Kubernetes in a way that reminds me a lot of a language-integrated Metaparticle.
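I won't try to reproduce Ballerina's syntax from memory, but to give a flavor of the idea in Go terms, here's a hypothetical sketch. Every identifier below is made up; the point is only that the deployment details live next to the handler they exist to serve, rather than in a separate pile of YAML.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// Deployment is a stand-in for the kind of thing a language-integrated
// platform might let you declare alongside your code. In this sketch it
// only starts a local HTTP server; in a real integrated platform, this is
// where the container image, Kubernetes Deployment, and load balancer
// would come from.
type Deployment struct {
	Name     string
	Replicas int
	Port     int
}

// Serve runs the handler. A real platform would build, containerize, and
// roll this out; here we just listen locally to keep the sketch runnable.
func (d Deployment) Serve(handler http.Handler) error {
	log.Printf("deploying %s (%d replicas, port %d)", d.Name, d.Replicas, d.Port)
	return http.ListenAndServe(fmt.Sprintf(":%d", d.Port), handler)
}

func main() {
	// The infrastructure declaration sits next to the business logic it serves.
	app := Deployment{Name: "hello", Replicas: 3, Port: 8080}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello, cloud era")
	})

	log.Fatal(app.Serve(mux))
}
```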
Dark is what I would build as an internal platform given dictatorial reign over a platform engineering team, infinite money, and absolutely zero other priorities—though perhaps separating out the integrated editor into a kind of "code explorer," because you will pry NeoVim out of my cold, dead hands. I have some concerns about Dark as a business, mostly because its FAQ answers around business continuity and vendor lock-in are, to put it generously, handwavy, but from a purely technical perspective it's a really impressive look at what's possible with an integrated platform. Dark lets you model most infrastructure components in a really slick way that auto-updates in real time on the server side, allowing you to skip any sort of "build and deploy" step. Note that I refuse to use the word "deployless," even if I happen to think it's more substantive than "serverless."
To me, these languages represent a far more intriguing view of what programming could be than languages that focus on moving the correctness meter by 10%. Why? Because for most software, given fast enough rollbacks, program defects don't really matter as much. Neither type theory nor constraining unsafe programming practices will save us from the fact that most of the worst bugs are emergent behavior from the interactions of complex components in a system. Given that, perhaps it's time that we as an industry start exploring more integrated solutions for abstracting over commodity infrastructure code.