On Flexibility and Software Temples

(Published: 12 January 2023)

The Pillars and The Temples

It's easy to assume that editing text is simple. It's such a basic task, and most people who touch a keyboard manage it just fine.

And that assumption is only a short leap away from the claim that text editors should be simple to use as well. It paints a clear picture where an editor is a finished product.

All you have to do is shove enough features in there.

Human-computer interaction = Features.

And this is the way that most text editors are written: let's build our vision of a product, and when the users complain, we will shove in MORE STUFF.

The philosophy, for most editors, is: just do whatever feels right. In other words, there's none.

Are features enough, though? Far from it!

The one really right thing to bet on seems to be the user experience. And there's a lot of stuff that goes into building it.

You can reason about it in more concrete terms: pillars. Pillars are used to construct temples and other sorts of buildings. (Considering the nature of the discussion, the religious connotation is not too far off, though.)

So, OK, let's build a temple. From scratch.

Take all of the features you want in your editor – well, those are your pillars.

And make no mistake, there are some pillars other than features: the ergonomics, the learning curve, the execution speed, the memory footprint.

Beneath all that are all the technologies used to build the foundation and the pillars. The language. You can think of it as the building material for everything else.

Do your materials fit and work well with each other? If so, then they may be called homogeneous.

And the real foundations beneath it all are: extensibility, flexibility[1], power. These are the metapillars. The language is a metapillar, too. Metapillars define the properties of the pillars and exercise a direct degree of influence on them.

At the foundation of the pillars and the metapillars, which are undoubtedly made of certain materials, there lie the bones beneath it all. This is the meta²pillar: the community and the developers, the architects and the voluntary workforce.

Of course, there's also an antagonist to this little story.

The antipillar of shit.

It's called the world, and it contains everything and anything that wants to destroy any given temple of text editing. Those forces are the sea storms, the anti-religious governments, the earthquakes, the businessmen. Other temples and their adherents.

It may be tempting to say that the user experience is where it ends, but, really, it's where it all begins. Because make no mistake: the user experience is not just the roof of the temple, standing over all kinds of pillars. It's the whole damn temple.

And we have traced a rather interesting blueprint for that temple: the pillars of features and a few visible properties, the metapillars of power and language, and the meta²pillar of the community.

Well, most editor-building efforts concentrate on features. They are feature-centric. They make it all about the pillars. And the reason for this prevalent attitude is simple: it's easier to provide a feature than it is to provide power.

And the metapillar of power is effectively never considered. The power to change the temple. And that may as well be how we build real temples, but its lack is a blatant oversight for the temples of software.

Software should be more like a living organism if it is ever to mold to the user. Not a product – a construction box. Not a temple.

Feature-centrism is not the path to powerful software, no matter how cool or advanced the pillars are.

Some Well-Known Temples

There are a lot of classic temples out there: nano, notepad, gedit. To name a few.

Where does Vim stand? Vim is pretty good on speed and footprint, a bit tough on the learning curve, and has great ergonomics once you get it.

But it's almost a classic feature-centric temple. Almost. It does give you a bit of power through its configuration mechanism – a pretty contemptible and primitive one, but it's undeniably there.

Emacs? Well, Emacs got Elisp. Emacs actually addressed the fact that the user might want to reconstruct the temple to his own liking.

Can Vim outlast Emacs? Well, guess not, because, as a matter of fact, it has already been replaced by another temple: NeoVim. That one has more features with just slightly more power. Still, it's pretty far from Emacs. Plus, homogeneity effectively got destroyed when they allowed multiple languages. But considering how bad VimScript was, the net effect must have been positive (in the short term, anyway; in the long term, non-homogeneity will still bite them in the ass big time someday).

Emacs did not disregard the power of power, and that's why it is still going at it after 40-something years.

Of course, Emacs is far from perfect. It lets you do a lot, but the more you move away from its turtle sandbox game, the more fucked up it's going to get for you.

And its offshoots were but temporary distractions, temporary forks whose advances swiftly found their way back to the original effort.

Those advances were but features. Couldn't have happened otherwise. And the meta²pillar (the community) was a stronger force.

Emacs is defined by its metapillar of power, that's how it's understood by the people who use it. And that's why the New Emacs can't be just another Emacs.

And it certainly can't be replaced by anything less powerful. Products like vscode are just classic temples, because that's all corpos have learned to strive for. Claiming that vscode can overtake Emacs is absurd.

And not least because of other metapillars such as the language and the horrendous tech stack. And most importantly, that's how the community understands it: as a conglomerate of features.

It's an old way of thinking about software. Software built that way is literally not alive. It's just waiting to be buried. And that's exactly where these types of efforts end up: in the trash bin of history.

I don't think it's bad that legacy software gets replaced once in a while. But the way these are built, the new replacement is never much of a conceptual improvement. They just rearrange the pillars and swap the metapillars for close alternatives. And that's what will happen to vscode: it will get buried and replaced by something vaguely reminiscent of itself.

But mere features can't replace Emacs. Because there's power to it.

So, it's important that we examine the concept of power if we are to understand software engineering better.

Power

Programmers love to talk about power in a vacuum. The power of languages. The power of systems.

Some mistake Turing-completeness for power, for fuck's sake.

Well, I like to talk about power, too. And for me, power lies in flexibility.

And by flexibility I mean the ease of specializing the system in question.

That means the ability to swap a certain mechanism, a certain way of doing things in your system, for something else. Adaptation.

And since you could specialize anything if you tried hard enough, the principal measure of flexibility is the ease of exercising it.

And, perhaps, the ease of learning it too.

In other words:

If you can't wield the power, you don't have the power.

To give a crude example: if your program is written in C, I have to recompile it to apply changes I have just added to the source code. That means I have to restart the program. It's not much of a power if you have to interrupt it.

And if your program is too cryptic for me to understand, or if it's designed badly, or if its architecture is not flexible enough to allow the modifications I want, I am all the less powerful for it, even though, in principle, the changes that I want are possible.
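
To make the C contrast concrete, here is a minimal sketch in Common Lisp (the function and its name are made up for illustration): a running Lisp image accepts redefinitions on the fly, so a change applies without stopping the program.

    ;; Define a function in the running image.
    (defun greet (name)
      (format t "Hello, ~a!~%" name))

    ;; Later, at the REPL of the *running* program, redefine it:
    (defun greet (name)
      (format t "Greetings, ~a. Nothing was restarted.~%" name))

    ;; Every caller of GREET picks up the new definition immediately:
    ;; no recompile-and-restart cycle, no lost program state.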

Types of Efficiency

So, flexibility lets you change a given system to your liking, to extend and specialize it to your purpose. And a truly powerful system will let you do it easily.

But one of the most striking effects of flexibility has to do with efficiency.

I have encountered, more than once, a belief among programmers that flexibility hurts efficiency (as in speed). That's a shallow understanding of what flexibility is.

First of all, there are at least two kinds of efficiency (and we will add another one by the end of this article):

  • Efficiency as in user capability.
  • Efficiency as in speed.

And how can we get efficiency? I think there are two ways:

  • Efficiency via specialization.
  • Efficiency via flexibility to specialize.

It goes without argument: flexibility amplifies user efficiency. The user becomes more efficient in using the system as he specializes the system to his own needs.

Various systems try to accomplish user efficiency by building specialized, inflexible systems whose authors pretend they know what the user needs. This is what most software does.

These systems exercise control. And every time they change the way the system works, the user has to deal with it.

Other kinds of systems grant the power to the user. Needless to say, these are perceived as the more efficient environments.

Now, what about the execution speed?

Well, whenever you have a system with inputs predictable by the user, but not by the system's architect, the architect may leave the user the flexibility to adapt the system to the input.

In that case, the user may choose ways of dealing with the input more efficiently, yielding better speed.

The architects, in order to design a general-purpose system, are forced to use general-purpose algorithms, data structures and constructs.

The power to swap a general-purpose algorithm or data structure for a better-suited alternative will easily yield better efficiency. And that's what the flexibility of specialization is all about.

Flexibility is usually associated with some minor constant-time penalty. But it doesn't even have to be that way: not if you provide the flexibility to change the levels of flexibility of a certain subsystem. Think of runtime recompilation.
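
Here is a hedged sketch of that idea in Common Lisp (the names are mine, for illustration): a flexible mechanism that erases its own penalty by compiling a specialized variant at runtime, once the input is known.

    ;; The general-purpose route scans a list on every call:
    ;;   (member x items)
    ;; Once ITEMS is known, we can compile a dedicated test instead;
    ;; a constant CASE may compile down to a jump table.
    (defun make-member-test (items)
      "Return a natively compiled membership test for ITEMS."
      (compile nil `(lambda (x)
                      (case x
                        (,items t)
                        (otherwise nil)))))

    (defparameter *vowel-p* (make-member-test '(#\a #\e #\i #\o #\u)))
    (funcall *vowel-p* #\e) ; => T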

Flexibility to specialize is, without a shred of doubt, superior to mere specialization if your goal is efficiency of speed and user capability.

Now, before we take this discussion any further, we first need to consider the relationship between flexibility and bloat.

Inflexibility ⟶ Bloat

Petrified strata of features emerge in the infinite vacuous spaces of power that never was.

Lack of flexibility invariably results in bloat.

Specialization without flexibility yields inefficiencies of all kinds due to eventual misuse.

Flexibility without specialization is not flexibility. It's just a generalization, which is inefficient by definition. A generic approach without any mechanism to specialize will result in all kinds of inefficiencies.

We have seen all this happen to browsers. We have seen it happen to Emacs. To the operating systems and to their ecosystems. To the command line interfaces.

All these systems lack the appropriate building blocks for what people attempt to use them for. In other words, most such systems are misused.

Let's take a closer look at some examples of systematic bloat due to inflexibility and misuse.

Bloat & Browsers

HTML was an innocent-looking idea (if you forget about the syntax). But then came code execution, and things have been going south ever since.

But I think hardly anyone would even argue with the fact that browsers are bloated to hell. They are trying to be operating systems, but with a very limited intent.

They got this way because modern web browsing has converged to a single specialized form of processing information.

And that form is inadequate.

Just as text editing was equated to strings, so was web equated to HTML/CSS/JavaScript.

And just like what happened with string-based editors, the web paradigm was and is abused likewise for writing applications that were never meant to be written in it.

There are even full-blown text editors that work in the browser. And it's not very pretty:

[Atom] A new approach to text rendering

Here are some excerpts from a relevant article about Atom. Atom is built on Electron (which is basically a browser).

In Atom 1.19, we’re landing a complete rewrite of the text editor’s DOM [Document Object Model, a tree of HTML elements] interaction layer that improves rendering performance and simplifies the code.

[…]

But if we’re going so far as to bypass Chrome’s CSS engine, maybe we should try to bypass the DOM entirely. If you look at the above flame graph, a huge percentage of our frame time is going to DOM manipulation (highlighted in yellow) and DOM-related overhead (highlighted in purple). If we could eliminate this work, rendering a frame would only involve executing some domain-specific update logic and painting, which could be a fraction of the current time.

Fun, huh?

HTML and CSS serve well for the kind of website you are reading right now. But do we really have to be building complex applications in them?

Why, yes, apparently, we do, because just about every other website on the internet is an "app" now.

But HTML+CSS is too opinionated for actual applications. And yet, the markup is general enough to support most use-cases. In theory, that is.

And, well, we could put efficiency aside – but efficiency is exactly the problem here:

General-purpose structures eventually get misused.

Inflexible systems of general utility, such as browsers, produce inefficient applications. While the structure is general-purpose, it can fully serve only a very specialized set of use-cases.

And even though the browsers provide the options of SVG and WebGL, you still exercise little control in comparison to native execution. Moreover, if you opt out of HTML, the user ends up with bloat anyway, namely the whole browser itself, because what your application needs is a graphics library, and a browser is much more than that.

So, you get two kinds of bloat here:

  • that of the browser itself (it's trying to be an operating system on the user machine),
  • that of the applications that are forced to work with structures that are not suitable for their purposes.

The only way to avoid the issue seems to be to put the user in full control of all the source code. I talk more about this in Data-Supplied Web as a Model of Free Web.

Bloat & Operating Systems

One of the reasons we have complex browsers is because of how our operating systems are built.

As Alan Kay has mentioned in some of his talks, our operating systems could have been live programming environments that only run services. The services can communicate via message-passing.

Very nice.

But that's not what we have. So where's the bloat?

I think it mostly comes from the lack of homogeneity. You just can't achieve a good level of integration among incongruent programs. And you can't do so because they are written in different languages. They are not part of the same system.

There's absolutely no notion of embedding. You can't reuse one program within another. For instance, you couldn't just use a custom text editor in your browser's text field.

That kind of integration would require a language. OSes should be live programming images, and not whatever it is they are now.

And even if you didn't care about that sort of drop-in functionality, here's another problem: there's no viable intercommunication either. Intercommunication has to be structural to be any good, yet most modern operating systems hold the silly notion that everything has to be a byte-reading game, and that computer programs have to be written to do the parsing manually.
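
A sketch of the difference, assuming a homogeneous Lisp image (the service and all its names are hypothetical): within one system, "programs" can hand each other structured objects directly, while across OS processes the same exchange degenerates into printing bytes on one side and hand-writing a parser on the other.

    ;; Structured intercommunication within a single image:
    (defstruct request verb path)

    (defun file-service (req)
      ;; The receiver gets a real object, not a stream of bytes:
      ;; no wire format, no parser, no re-validation.
      (format t "Serving ~a ~a~%" (request-verb req) (request-path req)))

    (file-service (make-request :verb :get :path "/index.html"))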

But all these are only the symptoms of the architectural mess and lack of clarity beneath. Or rather, lack of sufficiently-suitable building blocks.

Although, instead of going lower, we can go higher and observe just how many Linux-based distributions are out there. Think of it: why are there so many similar systems? Was one system not good enough to accommodate everyone's taste?

The foundation, the main building block shared by the whole ecosystem of distros, is, apparently, not flexible enough to accommodate all the uses that differentiate these distros.

Lack of communication aside, the way in which distros are built is in itself inflexible and is the reason for the enormous bloat which is the Linux distro ecosystem.

And that's a metapillar of shit for you.

One system with the right distribution mechanism and with enough flexibility could, in principle, replace all this mess. That one system would accommodate any kind of user via its configuration and distribution mechanism, if only they were flexible enough.

And consider the UNIX philosophy and its adherents saying that there should be a bunch of small tools, each one doing its job well.

The problem with this reasoning is that there is no overarching system to fit these tools together. And the only good candidate for such a system would be a real programming system, because nothing else would be flexible enough to satisfy all the use-cases.

With a tool like bash slapped onto the side, you can only call programs like functions. You can't use a program as a building block of a larger system. Programs only act as APIs to some functionality. And, as I will soon argue, an API by itself is not responsible for flexibility. What's worse, you have no common control mechanism for all the running programs.

Or, you know, consider the abomination that systemd is.

These are all quite clear symptoms that the underlying assumptions of how an OS should function are deficient. The core provides for a non-homogeneous, inflexible environment. And, so, we have ended up with:

  • the whole distro-bloat,
  • the whole system administration bloat,
  • the bloat across whole swaths of applications which can't intercommunicate in any meaningful way, and where no reuse or tight control may ever happen.

And by bloat I mean: the complexity and the excessive code that has to deal with that complexity, code that has to deal with the ramifications of quite tragic levels of inflexibility.

Bloat & Hardware

Let's think about how inflexible our hardware is and what that costs us.

In an interview from 2005[2], Alan Kay mentions how custom microcode made an efficiency difference of a factor of a thousand on a certain benchmark they ran.

In fact, the original machine had two CPUs, and it was described quite adequately in a 1961 paper by Bob Barton, who was the main designer. One of the great documents was called "The Descriptor" and laid it out in detail. The problem was that almost everything in this machine was quite different and what it was trying to achieve was quite different.

The reason that line lived on—even though the establishment didn’t like it—was precisely because it was almost impossible to crash it, and so the banking industry kept on buying this line of machines, starting with the B5000. Barton was one of my professors in college, and I had adapted some of the ideas on the first desktop machine that I did. Then we did a much better job of adapting the ideas at Xerox PARC (Palo Alto Research Center).

Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture was a good idea.

Just as an aside, to give you an interesting benchmark—on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore’s law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there’s approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

The myth that it doesn’t matter what your processor architecture is—that Moore’s law will take care of you—is totally false.

A thousand-fold improvement in execution speed. Three orders of magnitude!

So, what's going on here? And what would the numbers be on other benchmarks?

Now, a word of warning: I don't understand much about computer hardware. But it doesn't take much expertise to see that something has gone bad for us, especially in light of the assessment above.

What do the computer chips give you? Well: an instruction set. An instruction set is a programming interface (an API).

Well, when someone gives you an API but no building block, forget flexibility.

We have even called it hardware.

API without a building block is just a black box. And the programmer doesn't have any access to the internal building blocks of hardware.

Hardware is developed by corporations, which means they have probably been optimizing for profit and immediate gain all these years. They are in the business of building black boxes (literally) for the general-purpose use.

General-purpose hardware lends itself well to incremental improvements. On top of that, the hardware manufacturers are happy to be building and supplying more and more black boxes that are optimized for particular applications. Off the top of my head: video processing and vectorization. And now: neural networks. There are probably many more examples, though.

They are doing that instead of simply giving the programmer an array of programmable boxes, every one of which could be programmed for a particular purpose. And, as has been shown above with microcode on B5000, this is not some dream or make-believe. Just a fact.

And with the increasingly multicore architectures, this seems like an obvious match: each core could be programmed to its own purpose.

I call the optimization of the general-purpose CPUs misguided. It's misguided due to the failure to provide reasonable specialization mechanisms to the user (the programmer in this case). Instead, it's a feeble attempt to predict the user programs which could be absolutely anything.

Now, imagine how many millennia programmers have spent optimizing their software to deal with something that could have been better done at the hardware-level (or rather at the hardware-programming level). How much excessive code the lack of such a mechanism has produced.

And there's no end to this waste in sight.

(It seems the industry had to take a slight turn with GPUs, though: these have been getting more and more programmable over the years. Why? Well, it was simply too hard to have performant general-purpose 3D graphics without this level of flexibility.)

Bloat & Languages

Most languages act like APIs. They give you a set of keywords and tell you how you can piece them together and in what order.

The constructions built with those languages cannot ever deal with repeating patterns that come up over and over again. There's even a word for that kind of linguistic bloat: boilerplate.

But not all languages are like that. For instance, Lisp gives you the access to the building blocks via macros.

What do macros cost? Nothing. They don't slow your code down. In fact, the enclosing macros may act as optimizing compilers themselves, recognizing and optimizing the code they enclose.

Macros only affect compilation speed. And since development is, by and large, interactive (and, therefore, incremental), this metric doesn't tend to play much of a role. Some Lisps even give you syntax-level access via read macros. Those are helpful too.
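
A small illustration of a macro compressing a repeating pattern (WITH-TIMING is a made-up name, a sketch that ignores macro hygiene for brevity): the scaffolding for timing a piece of code is written once, instead of being repeated at every call site.

    ;; Without the macro, every measurement repeats the same scaffolding:
    ;; bind the start time, run the code, subtract, report.
    (defmacro with-timing ((label) &body body)
      `(let ((start (get-internal-real-time)))
         (prog1 (progn ,@body)
           (format t "~a took ~a ticks~%"
                   ,label (- (get-internal-real-time) start)))))

    ;; The boilerplate now lives in exactly one place:
    (with-timing ("sorting")
      (sort (list 3 1 2) #'<))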

A language without a metaprogramming facility, an API-driven language that can't deal with boilerplate, dies as soon as some other language with a prettier API comes along.

Of course, syntax doesn't always play the deciding role: there are other pillars, such as the ecosystem, the compilation/feedback cycle, legacy, etc. Many kinds of pillars and metapillars.

Many pillars and many temples. And they will keep coming because each new one finds an easy way to offer an incremental improvement in the form of new, perhaps merely specialized syntax, marketed at either solving a given task, or at being much better than its predecessor.

That's why there's only a handful of Lisps and a handful of Smalltalks, but a million Algols.

The first two have a way of dealing with code bloat. Most improvements on them end up packaged into libraries. But the Algols can't be extended syntactically and, so, people are constantly unsatisfied with them, and the only way of improvement is to build a new one, and then everybody can clearly see that the new thing is better than the old.

So, the inflexibility is at the syntax level, but it results in two kinds of bloat:

  • boilerplate, and
  • a bunch of replicas of what's basically the same language.

All else aside, the language that gives you a building block can only be truly replaced by one with a building block at least as powerful. Does that sound familiar? Because inflexible programs die like flies too. They constantly get replaced by better versions of themselves written from scratch.

Bloat & Software Evolution

And that brings us to the classic example of bloat: when a system is too inflexible to accommodate new functionality, this often results in multiple non-backwards-compatible versions of the same project.

And if the system is used by many, each version will have to be maintained separately for quite a while.

That's bloat in the ecosystem.

Bloat & Distribution

Many programs and libraries hoard features. That's because they don't give an external system (or the user) a way to extend and build upon their core power.

However, even the ones that expose the core hoard features anyway. They don't externalize the features to packages. And so, they effectively supply a distribution instead of a core.

From the user's point of view, there's no flexibility of distribution, because the user can't say he doesn't want to receive certain features.

Any built-in feature may signal that something couldn't be done with the core power of the software. But it could also simply signal disrespect for the user: supplying him with things he doesn't need.

It may be said that built-in features are convenient. But it's still bloat to those who don't use them.

Why does this happen? Well, the distribution means are inflexible (and remember that flexibility doesn't mean mere possibility, but also the ease of exercising it), and so, the program authors choose to bundle perfectly-externalizable features with the core.

Famously, Emacs hoards everything that the maintainers think could be remotely useful to the user. In fact, just about any program does it.

Bloat as bloat is.

Bloat & Text Editing

Emacs hasn't been replaced (not yet) and, indeed, it's one of the few programs that boast flexibility, and is probably the only one that has been on point about it. To a certain degree, anyway.

There's a famous quip that Emacs is a good OS, but lacks a good text editor. But it's the kind of quip that has too much truth in it to make anyone laugh. And, indeed, Emacs is a virtual Lisp machine, albeit with a lot of walls and pillars.

Emacs is not a temple, per se. But the user is forced into the paradigm of strings.

As such, Emacs lacks a good building block. The only one they give you – a buffer – is only suitable for constructing the same kind of temple over and over again. Every user is effectively restricted to building his own variation of the same thing.

In fact, most of Emacs' power originates from Elisp, not from the notion of a buffer. The buffers, I argue, are quite horrendous building blocks for the purposes of editing text. (See Emacs is Not Enough for that discussion.)

And, I argue as well, there's no way to build an efficient and slick text-editing experience other than by doing it structurally.

Structure gives you specialization. It gives you integration. The ability to seamlessly intermix rather-incongruent types of data within a singular space, a way to build specialized text editors on the fly.

(See The Power of Structure for a comprehensive discussion of bloat and inefficiencies caused by the string-based approach.)

Bloat, Summarized

Bloat is superfluous repetition. It's caused by the lack of appropriate compression mechanisms.

Flexibility of a system is its capacity to change and adapt in order to suit efficiency demands (which include preventing unnecessary repetition).

With these two definitions in mind, inflexibility can be said to be the only source of bloat once the system gets used for the purposes it wasn't suited for (i.e. misused). (Although, there are, of course, many ways to get an inflexible system.)

And so, in case of misuse, any kind of bloat can be traced back to a source of inflexibility.

Power and Flexibility

Sometimes, power refers to the feature-set of a software system. However, features have little to do with the capacity of a system to easily adapt to previously unseen requirements. But this latter capacity is the definition of power that is actually useful to adopt in the context of software development. This includes the power of the user, of course. So:

Flexibility is the only source and measure of power of a software system.

Bloat is commonly referred to as unnecessary complexity, implying a lack of simplicity: the system isn't powerful enough to reduce the complexity. So, if you agree with the equivalence of power and flexibility, then we can conclude that the lack of simplicity is due to the lack of flexibility as well.

But while it's pretty common to discuss powerful ideas as a way to reduce bloat, it's not very common to assume that flexibility has anything to do with it. In my opinion, it has everything to do with it.

Think of abstractions in software. The cost of abstractions may seem high, and that is indeed the experience of many software authors. The way I see it: they just didn't have the meta-flexibility to deal with these abstractions, to reduce the bloat within the abstractions themselves. They couldn't build the right abstractions because the system in which they were writing those abstractions, the metapillars, wasn't powerful enough. They were working within the kind of constraints that didn't allow for the right approach.

But, in my opinion, one of the biggest reasons why abstractions get a bad rap (and rightfully so) is a huge flexibility vacuum.

Most often, this means the language. Or the "framework" within it. Or the paradigm. And since the inflexibility in case of misuse leads to bloat, the abstractions written within these pillars will then be costly and inefficient. And so we call this kind of bloat over-engineering.

So, the solution should most often not be to search for a better abstraction within a system, but either to switch to a different system or make the existing system more flexible.

API vs. Building Blocks

Speaking of software design, one other matter that often gets confused is the talk about application programming interfaces (APIs), as if an API is what constitutes the quality of a system to its user. It's as if, when the API of a library is good, it doesn't matter what goes on beneath the surface, and the library itself is good.

But APIs, by themselves, do not constitute flexibility. If a piece of software is to be powerful, it must be flexible, and, by itself, an API is not the answer. What is, then?

I think flexibility must come from building blocks. And inflexibility is caused by their lack or their inadequacy.

Real flexibility (and, thus, power) never stems from an API. The notion of an API is orthogonal to that of flexibility or the building blocks.

What's a building block? It's a notion within a system that is itself defined by the properties of its interaction with the rest of the system and with the copies of itself; by the nature of its operation. A building block could be an object or a process or even a concept. Ultimately, I think, it's a thought in your head, and then your formalized understanding of how your system should operate to provide the kind of efficiencies that you are aiming for.

But the two traits of building blocks seem to be:

  • specialized reuse, and
  • interoperation/contact with other blocks of the system.

Flexibility seems to spring from an appropriate set of building blocks. They should be directly accessible to the user, and if there's API, then it should simply reflect the nature of these blocks.

A good example of a building block is code in Lisp, a programmable programming language. Code is just data. And the user may exploit this fact via macros to great effect. The macro system is an API to the building block of code.
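
A minimal demonstration of code-as-data in Common Lisp: a piece of code is an ordinary list, which can be inspected and rebuilt like any other data before it ever runs.

    ;; Code is a list:
    (defparameter *form* '(+ 1 2 3))
    (first *form*)  ; => +        (just a symbol)
    (rest *form*)   ; => (1 2 3)  (just numbers)

    ;; Rebuild it like any other data structure:
    (defparameter *doubled* (list '* 2 *form*))
    *doubled*        ; => (* 2 (+ 1 2 3))
    (eval *doubled*) ; => 12

    ;; A macro is exactly this kind of list-to-list transformation,
    ;; hooked into the compiler.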

Homogeneity

We can define homogeneity as the degree of interoperability and compatibility between a building block and the rest of the system, including other building blocks and itself.

Lack of homogeneity is one of the greatest causes of inflexibility. A bad building block infects and jams the system in which it is operating.

Ever heard a complaint that there isn't enough reuse in software? Well, the reason there isn't enough reuse is that we use incompatible systems of expression, meaning that the building blocks can't interoperate. Most often that means the use of incompatible languages.

Therefore, we are bound to have incredible amounts of code that is similar in purpose but different in appearance.

But the web is also guilty of this: it separates the source of a web page from the resulting page itself. But the data and its presentation, even if separated, should both be part of the same configurable environment. I discuss this point in this article.

The proliferation of modern web technologies and incompatible languages is a testament to the fact that the requirement of homogeneity is neglected. And that's why there's no flexibility and there is so much bloat.

Meta-Flexibility

We prize biological systems for the capability to adapt.

And, with time, we can even get quite fast at what we practice a whole lot.

One might say that we can't be too efficient, though, that there's a limit to efficiency, and that this is because flexibility has a cost; that if you build a specialized robot, then that robot will be much more efficient (in terms of execution speed) than some meatbag, such as one of ourselves.

Now, that's an interesting example, because that's when you need to start thinking about meta-flexibility. The kind of flexibility that tells you how flexible you want your current variant of a system to be.

It's the flexibility that lets you easily glide between the different kinds of tradeoffs, the kind which would let you turn your organism into a robot OR an organism that can learn and adapt, or, in fact, be anything in between.

Meta-flexibility is, of course, still flexibility at some conceptually-higher level of the system. And, so, flexibility can be very much responsible for providing high efficiency of execution.

And in a programming system, you would want to have such flexibility on a contextual basis, where you can adjust whole subsystems to your particular liking.

I don't believe there's any programming language out there that consciously explores this extreme. Not even Common Lisp or Smalltalk.

Common Lisp, for instance, is garbage-collected, and you can't really change that and proclaim that you are going to manage memory manually in a certain part of the program.

Nor could you restrict a certain part of the program to use continuations, because that's not the model of execution that CL adopts.

(But, of course, even though the macro system and the gradual typing system are not devices of meta-flexibility at the core level, they may still be instrumental to creating sublanguages that reduce flexibility for the gain of specialization and, thus, performance.)
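
A hedged sketch of such a sublanguage in Common Lisp (DEFUN-FIXNUM is my own invention, for illustration): a macro that defines functions under restrictive type and optimization declarations, trading flexibility for speed strictly within its own boundaries.

    ;; Inside functions defined this way, arguments are declared to be
    ;; fixnums and safety checks are compiled away; the rest of the
    ;; image stays fully dynamic and garbage-collected.
    (defmacro defun-fixnum (name (&rest args) &body body)
      `(defun ,name ,args
         (declare (optimize (speed 3) (safety 0))
                  ,@(mapcar (lambda (a) `(type fixnum ,a)) args))
         ,@body))

    (defun-fixnum add3 (a b c)
      (+ a b c))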

And while the practical necessity of such extreme control may appear questionable at first, it seems that the hesitation lasts just until you start seeing the new possibilities: not only to glide between tradeoffs, but also to do so as if the flexibility mechanism wasn't there in the first place.

And, in general, from the standpoint of meta-flexibility, if a part of a language adopts a certain philosophy or way of doing things, that part needs to be able to change. And whatever section of code or system you have, you should be able to change the flexibility level just for that piece of code; to exercise change over limited sets of building blocks, and not just over all of them at once. Sometimes this means contextual execution. Sometimes just the assignment of certain properties to individual blocks. And sometimes both.

I don't know what such a language would be like, the kind of language that fully introspects its own definitions with respect to the particular programs written in it.

I don't mean syntax, of course. I am certain that it could just be a Lisp. The interesting part is how flexibility of flexibility would work; how seemingly incompatible ideas blend, coexist and get embedded within one another. Some of the obvious targets include the typing systems, the runtime/compilation control, the memory models, the object models.

There is no one silver bullet, it seems, only the way of specialization within a flexible system that supports and harmonizes orthogonal modes of thinking.

So, I think, what we perceive as religious questions in programming is often but the lack of flexibility in our programming systems.

For instance, two programmers may have different preferences in languages, and yet they will likely agree that you should use the right tool for the job (to the point that this saying has become a terrible cliche). That must mean that if the languages are truly different, then the right one needs to be chosen. And the only conclusion is that neither is powerful enough to imitate the other.

The only real measure of the right language for all jobs, then, seems to be, indeed, flexibility.

Optimization and Bloat

If premature optimization is the root of all evil, then inflexibility is its square.

Premature optimization often results in inflexible systems. It leads to shackles, to a diversion from flexibility.

Perhaps the reason why we have been so concentrated on optimization is that we can measure and perceive it. There's no such measurement for flexibility, nor is it readily perceptible. You have to look into the system to see if it's really there.

The other evil kind of optimization is misguided optimization. It's an attempt to optimize a general-purpose data structure instead of providing the flexibility for the user to adapt to the particular input that he has.

Attempts to optimize a general data structure are never enough, and so come things like caching and, eventually, a desire to infer and convert between the general-purpose structure and an ad-hoc specialized structure (instead of just having a general interface to specialized structures).

Needless to say, this leads to an explosion in code complexity and size.

Misguided optimization and fucking bloat are like fire and smoke.

And it doesn't just end with bloat: the system becomes increasingly hard to develop and use, and, what's more, optimal execution efficiency is never achieved.

I have discussed this kind of optimization in the section on hardware, above. Another class of misguided optimizations may be said to include just about any optimization in a string-based text editor. (See the Rune section in The Power of Structure for a discussion.)

Kinds of Efficiency

Bloat usually originates from poor expression capabilities of the system – the language is not flexible enough to describe a certain state of the system.

So, a system capable of dealing with bloat may be said to have efficiency of expression. That's one kind of efficiency.

Then there's efficiency of execution and it comes from the ability of the system to specialize to the user input or its own structure.

Lastly, if a user is capable of easily accomplishing what he wants within a system, the system may be said to have good efficiency of user capability.

All three kinds of efficiency may heavily benefit from and sometimes must originate in the flexibility across the system.

In other words, flexibility is instrumental to efficiency – of any kind.

Most software falls short on this metric, and, thus, typically, on all dimensions of efficiency.

The Role of Structure

Power lies in the ability of the system to adapt, to specialize. Specialization requires meta-knowledge about the input: its structure. So, it's important that structure doesn't get prematurely lost.

Failure to preserve or infer structure in advance is known to cause a lot of problems, such as in the case of text editors.[3]

So, universal structures, documents, containers etc. can only be truly efficient if the system allows specializing on them.

You can't have flexibility without the knowledge about structure.
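
A minimal sketch of what that knowledge buys (the tree encoding is made up for illustration): as long as a document remains a structure, the system can dispatch and specialize on its parts; once it's flattened to a string, that avenue is gone.

    ;; A document as a tree: (tag . children), with strings as leaves.
    (defgeneric render (node))

    (defmethod render ((node string))
      node)

    (defmethod render ((node cons))
      (format nil "<~(~a~)>~{~a~}</~(~a~)>"
              (first node)
              (mapcar #'render (rest node))
              (first node)))

    (render '(p "Hello, " (em "world") "."))
    ;; => "<p>Hello, <em>world</em>.</p>"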

Where We Are

Most software systems and ecosystems in existence are a huge mess. They are inefficient on all levels. Oftentimes, the reason for this is precisely the lack of flexibility, the lack of the appropriate building blocks.

And these systems, themselves being inflexible, in turn cause inflexibility of thought within whole communities, the kind of thought that prevents progress due to the sunk costs of development, due to the inability to start from scratch.

Inflexibility acts like a broken piece of equipment within a machine. Inflexibility is pathogenic in the sense that it causes other parts of the system to become inflexible just like itself.

And then, it is often assumed that generality is enough.

But that's not true.

A thing too general fails to be efficient in the face of a specialized problem. Most computer-related tasks, and, arguably, all the useful ones, are structured. Which means it's often possible to designate certain problems as specialized.

But most general-purpose systems are not flexible enough to provide specialized efficiency on demand.

And a general-purpose system that doesn't provide sufficient specialization mechanisms will eventually get misused, yielding all kinds of inefficiencies.

And that's a source of a lot of problems in modern computing.

The reason why the userspace is so bloated is that our operating systems don't provide us with good building blocks: the ones they do provide (like the notions of files and executable blobs) are too rudimentary.

Instead of constructing high-level environments to write specialized programs with high reuse value, we employ low-level languages to optimize general structures, and those general structures end up being kitchen sinks.

And then, most programming languages make specialization hard to impossible, and absolutely disregard flexibility of expression. No wonder programs written in them end up over-engineered and full of boilerplate.

The systematic bloat in software systems reflects the inefficiencies of our thoughts across the whole computational man-made landscape. And we have riddled that landscape with the temples erected out of pillars of our flawed understanding of how to deal with flexibility at such scales.

I am convinced that whenever you see bloat, whenever you see pillars that construct priestly temples, that's where you will find inflexibility of viewpoint, extension and change, that's where you will find no proper regard for user experience whatsoever.

And while it's only my limited experience, I think it's pretty common in programming circles to speak of power and simplicity and good abstractions, but barely ever of flexibility.

Flexibility is often opposed to execution efficiency. It is said to incur a cost. But, on the contrary, when there is a cost, it's because some part of the system is simply not flexible enough to specialize on the costly flexible elements it influences.

Low performance is not a problem of being too flexible, it's a problem of not being flexible enough.

The only costs of a truly flexible system are the time and effort needed to develop the right set of core building blocks, along with the ways of making them work together.

There are, of course, ideas, such as late binding, that enable flexibility, but, otherwise, flexibility is under the radar.

Even STEPS[4], which is, perhaps, the only initiative that attempted to attack all the mess head-on, never seemed to emphasize flexibility, but rather simplicity and bloat reduction. (It's still a great project, though.)

The cleverest of abstractions won't save us if we can't make them easily adapt to the local requirements of efficiency on demand.

Flexibility is the pillar of abstractions.

And so long as the inaccurate attitudes and views about flexibility persevere, so long will our computer environments be bloated, slow and user-hostile.

That's what I think the problem with modern software is. We build temples, and then we pray for them to play nicely with each other and with the user.

It won't happen.

Pillars build us temples with preachers who can only tell you how nice the temple is. They won't let you touch the pillars. At most, you could only preach like they do.

The flexible, the adaptable, is the only kind of elegant. And an elegant system makes the user wonder what they can do instead of how they can get by.

Everything needs re-examination: the hardware, the operating systems, the languages, the user programs. It all has to become a cohesive system, with the building blocks exposed at every level.

Otherwise, we will continually be stuck within the mazes of temples.

Flexibility, perhaps, should become the primary measure of the wellness of a system from the perspective of software development.

Simplicity is an indicator of flexibility. And bloat is an indicator of its lack.

Flexibility requires homogeneity and the knowledge of structure within a system.

Flexibility arises from sets of cohesive building blocks and their properties.

Flexibility means the capability of a system to specialize itself and its parts.

Flexibility is evaluated by the ease of exercising it.

Flexibility to adjust the degree of flexibility (meta-flexibility) may at times be key to allowing efficient execution.

Flexibility is the capability of a system to adapt to the requirements of efficiency.

Flexibility ultimately results in efficiency:

  • of expression,
  • of user capability,
  • of execution speed.

Flexibility is power.

Power is efficiency.

Footnotes:

[1] You will be hearing this word a lot very soon.

[2] A Conversation with Alan Kay.

[3] Again, I address this issue in The Power of Structure.

[4] STEPS is an initiative of Viewpoints Research Institute to investigate powerful architectures with the purpose of reducing massive amounts of code bloat. See STEPS Toward the Reinvention of Programming, 2012 Final Report and the presentation by the team: T-Shirt Computing.

...proudly created, delivered and presented to you by: some-mthfka. Mthfka!