The Power of Structure

(Published: 12 January 2023)

Welcome to Project Mage!

The purpose of this article is twofold:

Overall, the goal is to build a set of text+GUI-driven platforms with highly-interactive and customizable applications, all comprising an efficient environment for the power-user.

I estimate it will take about 5 years to fulfill the intent.

You can find the campaign details here.

This article aims to explain what's special about this project, who will find it useful, what goals it sets, and what ideas motivate it.

I am building this project not just as a programmer looking to improve his tools, but primarily as a power user. And power users are, really, the intended audience. Regular users could reap the project's fruit later on, but this campaign is not aimed at them.

There are also a few essays/articles that follow this one, where I lay out my thoughts on Emacs, text editing, flexibility, the web and a few other things. Don't hesitate to start with those writings and come back later, if you wish – you will find them all listed at Home. In particular, if you are an Emacs user, you may find it appetizing to read Emacs is Not Enough first.

TLDR: If you just want the gist of the project's structural approach to text, please check out The Elevator Pitch article. Also, if you don't want to get bogged down in technical detail, then SKIP the Fern section and go straight to Rune or any other section that interests you most (but still, do finish reading this intro!).

The campaign for Project Mage aims to build the following:

Fern: an advanced GUI platform.
  • Knowledge-Representation Prototype OO system (KR) (from CMU's Garnet1 See the home page here. The latest Garnet repository may be found here. GUI project).
  • Multi-way constraints (UW's Multi-Garnet).
  • Contexts, configurations, an advice facility; full customization and flexibility at runtime.
  • Relative simplicity.
Rune: a platform for building seamless structural editors.
  • Purpose: define a specification for lenses.
  • Stands for: Rune is Not an Editor.
  • Based on Fern.
Kraken: a versatile Knowledge-Representation platform.
  • Purposes: note-taking, computational notebooks / literate programming, programmable knowledge bases.
  • Based on Rune.
Alchemy: a seamlessly-structural IDE for lisp-like languages.
  • The only target, for now, is Alchemy CL (Alchemy for Common Lisp).
  • Based on Rune.
Ant: a sane alternative to modern terminal emulators.
  • Purpose: REPL interfaces, advanced command-line interfaces.
  • Stands for: Ant is Not a Terminal.
  • Based on Rune and Fern.
Hero: a project knowledge tracker / bug tracker / decentralized version control system.
  • Purpose: substitute for Git, GitHub and the like.
  • Based on Kraken.

In this article, I describe all these systems and my motivation for writing them.

To me, the most interesting development on the list is Rune. It embraces the fact that in order for us to work with information efficiently (both in speed and comfort), we need to start taking the structure of textual information a little more seriously than what is usually presumed by most contemporary text editors.

You will find Project Mage most useful if you are:

This is a long article, so feel free to jump around and read whatever interests you most, but I would urge you to first read Who is this project for? and The Project's Philosophy (both just below), as well as at least skim the sections on Fern and Rune.

All the software will be free (as in beer and as in freedom), see the License section.

The anticipated timeline for the development (provided stable financial support from the community) is as follows:

Naturally, these are only rough guidelines. If in the course of development I see a much better way to rebuild something at the cost of a few months, that's likely what will happen. I don't expect any system to fall squarely into its estimate. The estimates mostly show where I expect most of the work to fall.

I am pretty sure I can get it done in ~5 years. In the end, the estimate assumes quality, because I am fairly certain the core functionality will take a bit less than that. So, there are a lot of corners to cut: features that can be postponed till better times if I feel like something is getting out of hand.

My main goal is to provide a platform which can be improved upon by others. A vertical slice of sorts, but with some beef to it. And I will make the applications at least usable to myself, and that includes most of the features listed in this article.

Keep in mind that there might be a year 6 for dealing with any excessive optimism about the 5-year timeline. I have written software before, so I am no stranger to underestimation. But worry not: I started at 3 years thinking of something rather rudimentary, then bumped that to 4, and then to 5, just to make sure. So, hopefully, the beginning of the sixth year will only be for some final touches, or, better yet, for maintenance and work with the community.

So, look at it this way: it's done when it's done; but hopefully in 5 years. I talk more about this in the section How Realistic is the Time Estimate?.

I will be posting a brief summary of progress about once a month, on this website. This, of course, assumes I can work on the project in the first place, which depends on the funding success of the campaign.

I have also set up a mailing list in case someone wants to have a discussion or ask me anything. See the Contacts page.

All the software in the project will be written in Common Lisp – see the Why Common Lisp article.


Who is this project for?

This project is for power users: the type of people who get easily annoyed at doing things the wrong, slow, or otherwise inefficient way.

Being able to program helps a lot with being a power user, but many people can configure a program like Emacs without any extensive knowledge of programming. Most of the time, it's just copy-pasting a few lines of configuration or changing some value. It's not that hard, especially if there's the right kind of UI for it (developing such a UI may be beyond the scope of the campaign, but I see no problem in doing it).

As such, I don't see the potential users of the system as being just programmers. Anybody who is interested in their tools should be able to pick up any Mage application fairly easily. I will try to make all the applications usable out of the box.

The project (within the limits of the campaign) aims to provide alternatives to, or a gateway to replace:

  • advanced text editors (e.g. Emacs),
  • specialized text editors (e.g. HEX editors, CSV editors, etc.),
  • command lines (such as those of terminal emulators),
  • Sly/Slime (Common Lisp IDEs for Emacs),
  • computational notebooks (e.g. Jupyter),
  • note-taking applications (e.g. Roam Research, Org-mode),
  • version-control systems (e.g. Git, Mercurial),
  • bug trackers (e.g. GitHub),

while providing a new, powerful and fun-to-use GUI library, capable of constraint management, contextual customization and much more.

That's a lot of applications, and it may seem incredible given the 5-year timeline I have claimed. But think of these as parts of a unified platform, and you will start to see that there's a huge amount of overlap in the functionality of all of these. (I take this discussion further in How Realistic is the Time Estimate?)

But the key word here is: integration. All the applications are part of the same system, and that provides not just reuse, but also a high degree of flexibility and customization power.

And Project Mage doesn't merely replace these programs to unify them under the same umbrella; it also tracks down and eliminates their common fundamental flaw: their disregard for the structure of the data they operate on.

By the close of the project, I aim to establish the ground for a community to form. Hero will be instrumental to this end. But I will also set up a public repository specifically for everything Mage. For more details, see the Ecosystem section.

The Project's Philosophy


Interactivity

Common Lisp is a top-notch interactive platform. It lets you work in a live image. You can evaluate and run code without reloading. Any class, any object, or any function may be modified at runtime, at will.2 A notable exception on some implementations of Common Lisp is trying to change struct definitions that have live instantiations, in which case the behavior is typically undefined and a warning is issued – all due to the optimizations that apply to structs. On top of that, there is a powerful condition system that lets you fix errors and exceptional situations; inspecting stack frames and changing their local bindings is something you can do quite routinely.

If you have programmed in Smalltalk or Emacs or most other lisps, you probably know what this is like.

In some discussions online, this quality of interactive development is often referred (and thus reduced) to something called a Read-Eval-Print Loop (REPL), as in: an awkward command-line prompt. Contrary to that perception, REPL development may happen in a source file, by repeatedly evaluating whichever expression you are working on at the moment.

Image-based programming is important not only because of comfort, but also because of valuable state that you might not want to lose. When you are working with unsaved data, or editing a text document, or churning through a bunch of processed logs, being forced to reload the whole system is annoying, costly and at times simply unacceptable.


Flexibility

It's not enough to use an image-based programming language – even the kind that lets you change any component of the system – for your library/platform/program to be meaningfully extensible and customizable.

Extensibility requires its own design and foresight. A truly flexible system should allow multiple modifications by multiple independent parties. It should provide a way to introspect and deal with the modifications of others. All this should, of course, be done without the user ever having to replicate any existing code.

If one is to have a healthy extension ecosystem, such design is of utmost importance.

Modularity and Minimalism

No hoarding like in Emacs.3 The only reason for this seems to be the assignment of copyright to FSF.

Oh wait, I am sorry, I didn't mean it quite like that.

No HOARDING like in Emacs.

Anything that can reasonably be implemented as an external package – should be.

The core programs and libraries are there for defining the building blocks and the common API, not for acting like a distribution.

As such, the core is just a set of libraries that you load into your image.

Just as well, the core itself should be modular (where it makes sense), so that the user may easily swap any of its parts for someone else's package.

Same, of course, goes for any backends: they are completely separated from the core and just implement the API requirements.

An Option of Speed

Speed is one of many arguments that proponents of the mainstream languages use when flaming against high-level powerful languages, and quite possibly the only one they are right about (remotely at that).

But even then, Common Lisp lends itself well to user optimizations through its gradual typing system and various declarations, allowing for compile-time optimizations. Possibly the one essential thing that's beyond the reach of the application programmer is the garbage collector.4 And even that might not be forever: CL compilers are getting ever more sophisticated and modular (see SICL, for instance), and, possibly, one day, (some) compiler internals might become user-space hackable just as well as anything else, by means of some future de facto standard. We don't know the limits yet.

However, not everything is down to the language. The core libraries may help the user gain speed when it's required. For instance, in Mage, geometrical primitives will be implemented as flexible Knowledge-Representation objects. But, in turn, these KR implementations will rely on the relatively lightweight and simple structs that implement all the necessary functionality. These structs, along with the machinery to manipulate them, will all be exposed to the users, in case they want to make a million triangles or something.

So, some design choices need to consider and account for performance, especially when it comes to graphics.


No Foreign Plugin API

The core should NOT have a foreign-language plugin/extension API.

First, a foreign API, no matter how good, can never be powerful enough to offer the original hackability of the image-based system that it exposes. Only native packages can.

Second, if a foreign-language API were to be introduced, the ecosystem would effectively become fractured, especially so when multiple languages are allowed to use the API.

The motivation for a foreign plugin API (such as the kind found in VS Code and Neovim) is nothing but opportunistic: as a rule, the goal is to attract more people to write plugins. Plugins attract users. More plugins = more users.

The reality ≠ sunshine, though: the power user, who likes power and typically forms the real core of the development effort, now has to deal with a plethora of (likely inferior, less powerful) languages, because that's what the ecosystem becomes polluted with.

And, what's worse: the power was lost across the gap that the foreign API has created. Just as the foreign plugin doesn't have the benefit of interacting with the native language directly, nor may the user work with the foreign code from within the image. The benefit of an image-based system is thus effectively diminished. Introspection and interoperability die and the system is not live anymore.

Not to mention that foreign languages evolve, break compatibility and sometimes even lose popularity (read: die), so the ecosystem becomes dependent on forces beyond the original choice of language. The result is a biological bomb with a years-long fuse.

A foreign extension interface creates an artificial division which is ultimately detrimental and unhealthy, and immediately deadly for truly interactive systems. It is simply bad design from the point of view of a power user.

Also, see my opinion on Guile in this article.

There's No Such Thing As Sane Defaults

Among the power users and developers there lives a vision of something akin to a fabled unicorn: sane defaults.

Sane defaults are a hallmark of great software. They tell you what's right and sensible, they show you a way forward.

Well and good – up until one fine day a subject comes up on the mailing list: this once perfectly sane default is now actually insane! Panic and bewilderment ensue. A commotion with some chaos mixed in. But no matter what everybody says or does, or what arguments finally surface, it can only end in one of two ways:

  • Let's break the old behavior and alienate some users.
  • Let's stay with the old behavior and keep alienating some other users.

The classic trolley problem.

How about this instead: the user has to pick what he wants.

And with some concept along the lines of named, versioned configurations, the process of such customization can even be quite painless. What these configurations would do is hold all the values used to customize the system. They may be supplied either by the package itself or from elsewhere, but they encapsulate all the configurable behavior of a package. And there may be multiple of them. So, when the author of a configuration wants to introduce or change a value that breaks some behavior, he derives a new configuration with the version bumped, as a separate object.

A slightly more interesting variation on the idea is to have parameterized configurations which take certain arguments (version could be one) from the user and return a set of values to be finally applied to the configured environment.

When the user installs a package, all he has to do is pick a version (or a label such as "latest") among the available configurations.
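
To make this concrete, here is a hypothetical sketch of what versioned configurations might look like. None of these operators exist yet – define-configuration and use-configuration are made-up names for illustration only:

```lisp
;; A package ships its configuration as a versioned object:
(define-configuration my-editor/v1
  (:tab-width 8)
  (:keybindings :classic))

;; A breaking change goes into a derived configuration with the
;; version bumped, instead of silently mutating v1:
(define-configuration my-editor/v2 (:derive my-editor/v1)
  (:tab-width 4))

;; The user explicitly picks a version (or a label like :latest):
(use-configuration 'my-editor :version 2)
```

Since configurations can layer, a user's own configuration could derive from my-editor/v2 and override only the values he cares about.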

This way no one can blame the author of the configuration for breaking things, not when done in the name of progress anyway.

Configurations force the user to be responsible for their setup.

So, no more arguing about keybindings. No more arguing about how some behavior is obviously better than some other behavior, but, see, changing it will anger a few mythical mammoths, so that's why nothing is ever going to improve and that's why we are gonna be stuck on this one bad thing forever.

The only good defaults are no defaults.

And if you don't like someone else's configuration, just build your own and distribute it. And since configurations can layer, you need only specify the things you want to change.

I discuss how configuration might be implemented in Fern in a later section.


Power

Power is the one real first-class citizen here.

All the points that you just read above are but the reflections of this tenet.

If you can't easily modify your program, you have just lost out on some power.

If you have to restart to make something work, your power is effectively diminished.

If you can't make something run fast enough, that's some power gone.

If you have to resort to hacks in order to change some behavioral facet of a live system, guess what, mentally subtract some power from your big tank of POWER.

Power not understood is power lost.

And a power that can't be wielded easily is not much of a power to begin with, but only an illusion of such.

So, simplicity is an implicit requirement – not only because it makes things easy to reason about, but also because elegant building blocks tend to underlie powerful systems.

Lego is kind of inspiring in that sense: all you have to do is get a grasp of a few building blocks of various shapes and sizes, and then figure out how to snap them together. Then you can construct just about anything, without any further guidance or having to look up the Lego manual.

Programming is more complex than Lego. But the idea of a few building blocks that snap together can be a source of strength there just as well.

So, the foundational goal of the project is to be the source of power for the power user.

But it's important to understand that power doesn't come from having a general enough framework. It comes from the ability to extend, combine and build upon a small set of primitives to create your own specialized frameworks and tools.

And another point here is: structure. Structure is necessary for the ease of extension and reasoning. The discussion on this with regard to text takes place in the Rune section.

Project's Goals

Before we begin, there's something to be said: what you will find below are designs, and most of them haven't been coded yet.

My goal here is not to tell you every detail (although, there is going to be a bit of detail), but rather for you to understand the capabilities and possibilities of each system, along with its motivation and purpose. I will try to show the most illustrative parts, so you can autocomplete the mental picture for other details and the system as a whole.

I want to share these designs because I find them exciting, and, I hope, so will you.

P.S. If you are unfamiliar with Lisp syntax, don't feel intimidated. Many constructs here will be unfamiliar to Lispers as well.

Fern: A GUI Toolkit   partially implemented

With their self-repeating form, ferns have long served as a prime example of fractal-like structures.

They are indigenous to just about every corner of the Earth.

I dig their simplicity.

Fern is a toolkit for constructing graphical user interfaces (GUIs).

In the section on power, I have mentioned Lego. So, in Fern, the Lego blocks are schemata and the way you click them together is via constraints.

These concepts are in no way specific to the construction of user interfaces; however, they do seem to be particularly fit for the job. The process of interface-building in Fern is heavily rooted in them.

Let's discuss everything point by point. It may get just a tad technical at times, but it seems worth it to describe everything in sufficient detail. This whole Fern section is the most technically-involved of the article, so if you find it slow-moving, I would still recommend at least skimming through for the general ideas.

(To be honest, I am not all too satisfied with the following explanations. In my opinion, technical explanations should be part of the environment that they refer to. A live example can often bring greater and more immediate understanding than a wall of text. In this section, I had to resort to text and plain, non-live examples. So, please try not to concentrate on the syntax, but rather on the overall picture, because that's what's important.)

KR: a Knowledge Representation System5 For an extensive overview of kr, see KR: an Efficient Knowledge Representation System by Dario Giuse. Also, Garnet maintains its own version of KR.1 See the home page here. The latest Garnet repository may be found here.

kr is a semantic knowledge network, where each object is a chunk of information known as schema.

It's sometimes best to just show an example, because kr is very simple to use. Let's define6 Note that not all code below will work if you want to test it with Garnet's kr. I have changed some syntax for readability. a schema for the mythical Kraken:

(create-schema kraken
  (:description "A legendary sea monster.")
  (:victims "Many ships and the souls on them."))

Let's get its description:

(get-value kraken :description)

yields → "A legendary sea monster."

You can easily add a slot to a schema.

(set-value kraken :first-sighted '18th-century)

adds a slot :first-sighted with value '18th-century to the kraken schema.

Now, that's all very interesting, you might be saying now, but so far this looks just like a regular hash map! Well, it kind of is! But, worry not, there's more! How about some multiple inheritance at runtime?

(create-schema mythical-monster
  (:location 'unknown)
  (:victims 'unknown))

(create-schema cephalopod
  (:location 'sea)
  (:group 'invertebrates)
  (:size "From 1cm to over 14m long."))

(set-value kraken :is-a (list cephalopod mythical-monster))

:is-a is a special slot that defines a relation between objects, and we have just assigned to it a list of two schemata: cephalopod and mythical-monster. This will make Kraken inherit their slots, with priority given to cephalopod.

Now, the location, (get-value kraken :location), will yield SEA. Note that both mythical-monster and cephalopod define the :location slot. The lookup is done as a depth-first search over the defined relations in the object.

Note how victims, (get-value kraken :victims), are not unknown, but rather "Many ships and the souls on them.", which we have specified in the original definition of our schematic Kraken. Setting a slot overshadows the inherited value.

The interesting thing about all this is how you can update the inheritance of a schema at runtime, basically at no cost. Values are never copied in an inheritance relation, they are just looked up. (But it's also possible to set up local-only slots whose values are either copied or set to a default value.)
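
For example, continuing with the pseudo-syntax from above, reordering the parents changes lookups immediately:

```lisp
;; Give priority to mythical-monster instead of cephalopod:
(set-value kraken :is-a (list mythical-monster cephalopod))

;; :location is now inherited from mythical-monster:
(get-value kraken :location)  ; → UNKNOWN
```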

Another property of kr is that we can add (or remove) slots without ever touching the original definition: we do this simply by the set-value operation. We will get back to this point later when we talk about customization.

kr represents a prototype-instance paradigm of programming. It is an approach to object-oriented programming which says there's no distinction between an object and a class. There's just a prototype, which is an object which can be inherited from.

What can be more object-oriented than that?7 I can't seem to find the source of this quote.

In a conventional OO system, objects have types. But with prototype-OO, all schemata are of the same type schema and are instead differentiated by names and contents.

In fact, a schema may even be unnamed.

Behaviors are simply lisp functions stored in slots. That's useful for late binding.
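
In the same pseudo-syntax, storing and invoking a behavior might look like this:

```lisp
;; A behavior is just a function kept in a slot:
(set-value kraken :describe
  (lambda (self)
    (format t "~a~%" (get-value self :description))))

;; The function is looked up at call time (late binding), so
;; replacing the slot's value changes the behavior everywhere:
(funcall (get-value kraken :describe) kraken)
;; prints: A legendary sea monster.
```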

One may set up hooks that react to value change in any given slot. It will also be possible to add hooks for addition and removal of slots.
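
A hook might look something like this (add-value-hook is a made-up name; the exact interface is undecided):

```lisp
;; Hypothetical: react whenever the :victims slot changes.
(add-value-hook kraken :victims
  (lambda (schema slot old-value new-value)
    (declare (ignore schema slot old-value))
    (format t "The kraken strikes again: ~a~%" new-value)))
```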

There are a few other things in kr, but this should do for a quick overview.

That is all VERY fine, you might say. But is that all?


Constraints

With the Multi-Garnet project8 Multi-Garnet is a project that integrates the SkyBlue constraint solver with Garnet. For a technical report, see: Multi-Garnet: Integrating Multi-Way Constraints with Garnet by Michael Sannella and Alan Borning. and its solver SkyBlue9 See The SkyBlue Constraint Solver and Its Applications by Michael Sannella., it is possible to set up constraints on kr slots.10 Please note that the code below will not work if you want to test it with Garnet's multi-garnet: I have changed some syntax for readability. Also, Multi-Garnet integrated with Garnet's formulas, with the goal of replacing them. Garnet formulas are done away with altogether in Fern.

Constraints may be used to define a value of a slot in terms of other slots, i.e. outputs in terms of inputs.

Whenever one of the inputs changes, the output is then recalculated automatically.

Constraints keep slot values consistent. Whenever one value changes somewhere, it might have a ripple effect on a lot of other values. The most widespread antipattern in most GUI frameworks is handling such changes with callbacks. Constraints provide a declarative way to deal with this: simply declare the dependencies between the slots.

  • One-Way Constraints

    Consider this rather bland example, where we define y to be 0 when x is less than 0.5, and 1 otherwise.

    (create-schema stepper
      (:x 0.75)
      (:y 0))
    (set-value stepper :y# (create-constraint :medium (x y)
                             (:= y (if (< x 0.5) 0 1))))

    This is a rather usual schema, until we add another slot, y#, which holds a constraint that defines y in terms of x.

    The names of the inputs, x and y, refer to the slots that are being accessed, :x and :y.

    Right after the addition of this constraint, y has immediately been recalculated to be 1.

    If we now set x to 0.25, the constraint solver will detect the change and trigger the recalculation, this time yielding 0 for y.

    So, the outputs are evaluated eagerly (as opposed to lazily, that is, upon access).

    Importantly, the output form may be arbitrary Lisp code.

    The constraints also have strength: like medium in the case above.

    There may be multiple constraints defining the value of the same slot, and their strength decides which one to choose. We will discuss strength a bit later.

    A constraint needs to be a part of a schema for it to take effect. As such, you can remove a constraint simply by removing the slot or by setting it to nil.

    (Also note that slots that contain constraints may be arbitrarily named and don't have to contain a hashtag, like in y#.)

  • Multi-Way Constraints

    One-way definitions are interesting. But, sometimes, it may be useful to define some things in terms of each other. Let's see a classic example of a temperature converter.

    (create-schema thermometer
      (:celsius -40)
      (:fahrenheit -40)
      (:celsius# (create-constraint :medium (celsius fahrenheit)
                   (:= celsius (* (- fahrenheit 32) (/ 5 9)))))
      (:fahrenheit# (create-constraint :medium (celsius fahrenheit)
                      (:= fahrenheit (+ 32 (/ (* celsius 9) 5))))))

    So there are two slots, one for degrees Celsius, one for degrees Fahrenheit. We set up two constraints, each defining its output in terms of the other variable.

    (set-value thermometer :fahrenheit 212)

    If we inspect celsius now, we see that it has become 100. Similarly, if you change fahrenheit, say, to 32, celsius will become 0.

    The constraint solver is smart enough to figure out that these two variables depend on each other, so there's no infinite loop of change, and only a single calculation is done. In general, the solver allows circular definitions like that (possibly involving multiple variables).

    These two constraints can be batched together like this:

    (create-schema thermometer
      (:celsius -40)
      (:fahrenheit -40)
      (:temp# (create-constraint :medium (celsius fahrenheit)
                (:= celsius (* (- fahrenheit 32) (/ 5 9)))
                (:= fahrenheit (+ 32 (* celsius (/ 9 5)))))))

    Now, that's shorter!

  • Pointer Variables

    So, how do we refer to variables in constraints? What can the inputs be? Well, if a slot in your schema contains another schema, you can refer to its slots using pointer variables.

    Time for another example! Let's say you live in a room, in a country where it can get quite hot at times. We will model the environment as a separate object, and we will include it, along with the thermometer, in the slots of our room.

    (create-schema environment
      (:weather :stormy))
    (create-schema room
      (:thermometer thermometer)
      (:environment environment)
      (:window (create-schema nil (:closedp t))))

    Since you can't be bothered to buy an air-conditioner, you use the good old window to control the temperature. Of course, there's little point in opening the window when it's too hot outside, so we keep it closed then. We set up this constraint like so:

     (set-value room :temp-control#
       (create-constraint :medium
           ;; input variables
           ((temperature (:thermometer :celsius))
            (weather (:environment :weather)))
         ;; output
         (:= window-open-p
             (and (> temperature 25)
                  (not (eql weather :scorching-sunbath))))))

    The input variable specification (:thermometer :celsius) describes a path to the celsius slot in the thermometer of the schema where the constraint is placed (room in this case).

    You can use such paths when accessing slots of kr schemata:

    (get-value room :environment :weather)

    Paths may be as long as necessary, but the value of every slot except the last has to be a schema.11 Arbitrary access for other data structures hasn't been implemented.

    The constraint system picks up on the value changes along the path and updates the outputs accordingly. So, if, for instance, the value of the environment slot is swapped for another environment, then window-open-p will be recalculated.

  • Stay Constraints

    A stay constraint is a special kind of constraint that keeps a slot from changing. It looks like this:

    (create-stay-constraint :strong :some-slot)

    This may be useful when you want to ensure that some value doesn't change, even though some other constraint may have it as an output (but only when it has a lower strength than that of the stay constraint).

  • Constraint Hierarchies / Strength

    Multiple constraints may output to the same variable: their strengths decide which one to compute.

    SkyBlue9 See The SkyBlue Constraint Solver and Its Applications by Michael Sannella. constructs a method graph from the given constraints. Based on the constraint strength, this graph is used to determine the method for any given output variable.

    Take a look:

    (create-schema rectangle
      (:left 0) (:right 10) (:width 10)
      ;; ...
      (:geometry# (create-constraint :strong (left right width)
                    (:= width (- right left))
                    (:= left (- right width))
                    (:= right (+ left width)))))

    Upon setting the value of right to 100 (via set-value12 Even set-value has a strength associated with it (:strong by default). This doesn't play a role in the example, though.), one of two things could happen:

    1. width changes to 100, left stays the same.
    2. width stays the same, left changes to 90.

    In this case, the first option will take effect due to the order in which we have specified the constraints (width comes before left).

    Other than by changing the order, you could control this by creating dedicated constraints for each output and assigning suitable strengths to them. However, there's a simpler way, by adding a stay constraint:

    (set-value rectangle :width
               (create-stay-constraint :medium :width))

    Now, setting right to 100 would keep width at 10 and change left to 90.

    Constraint hierarchies, defined via strengths, let the programmer specify which methods take output priority and express rather complex relations between variables.

  • Multi-Value Return

    Multi-Garnet allows a constraint to return multiple variables at once. The syntax is:

    (:= (x y z) (values 0 1 2))

    This may be useful when your function calculates multiple values: you might want to unpack those values into several slots.

    For instance, a layout function may produce the layout positions and the resulting size. So, it might be convenient to pack those values into their respective slots right away.
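    A hedged sketch of that layout idea (compute-layout and the positions/size output slots are hypothetical names, not part of any described API):

    ```lisp
    ;; Hypothetical: compute-layout returns two values, which the constraint
    ;; unpacks into the :positions and :size slots of the schema at once.
    (create-constraint :medium ((parts '(:parts)))
      (:= (positions size) (compute-layout parts)))
    ```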

  • Laziness   not implemented or planned

    Stay constraints may be used to achieve a sort of laziness, where you add and remove a stay constraint at runtime. This can be done by hand. However, some convenient way to achieve laziness would be nice to have.13 This could be achieved exactly by adding and removing a high-strength stay constraint upon access. However, the values that depend on it might also need to be recomputed upon access. Marking a slot as lazy would involve embedding information about it into the constraint system for the values that depend on it. It shouldn't be too hairy, but not a walk in the park either. I will probably skip this feature.

    This could come in handy for expensive computations.
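    Here's what the by-hand version might look like, under the assumption that a constraint object can be removed again (the remove-constraint call is my guess at such an API, and the slot names are made up):

    ```lisp
    ;; Freeze the expensive slot with a high-strength stay constraint:
    (set-value thing :freeze-expensive
               (create-stay-constraint :strong :expensive-result))

    ;; Later, to force a recomputation on demand:
    (remove-constraint (get-value thing :freeze-expensive)) ; hypothetical call
    (get-value thing :expensive-result)
    ```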

    Note that laziness and the rest of the features in this section haven't been implemented (in some cases, only not yet; in others, not at all).

  • Parameterized Pointer Variables   not implemented yet

    Pointer variables (described above) are nice, but sometimes the number of inputs changes, or you don't even know what your inputs are in advance. In that case, you might want to treat your inputs in bulk and work with them in a container. There are plenty of uses for it: later on, in the section on cells, you will see a couple of examples.

    But generally, a schema might contain multiple other schemas of some kind, and you might want to calculate something based on some particular slot in each one. Just as well, you might want to access a particular set of objects within a custom container (discussed later in the section on cells).

    If all this sounds pretty nebulous, then here's a quick example:

    (create-schema some-sand
      (:grain-1 (create-schema (:mass (gram 0.011))))
      (:grain-2 (create-schema (:mass (gram 0.012))))
      (:a-rock (create-schema (:mass (gram 4.37))))
      (create-constraint :medium
          ((m1 :grain-1 :mass)
           (m2 :grain-2 :mass)
           (m3 :a-rock :mass))
        (:= total-mass (+ m1 m2 m3))))

    Now, that total mass constraint is a totally ugly mess, to be honest. If you add a grain, you have to rewrite the whole constraint. If you decide a little rock isn't good enough for a proper sand pile: rewrite. So, that kind of sucks.

    There's a remedy of sorts: you can add constraints programmatically (this functionality already exists). So, you could generate the constraint on mass whenever a slot is added or removed. This could be done via a hook, or, if a meta-list of all slots is available, then through yet another constraint that takes it as an input and outputs the desirable :total-mass.

    So, you can solve the problem by hand, but most people only have two of those, and the number of constraints in your system may be many more.

    It would be more viable to introduce some syntax for pointer variables that would track slot additions in the schemas along the path.

    I don't yet know what the exact syntax will be; there are a few considerations14 The syntax should be extendable to containers other than schemas, allow either predicates or filters, allow either values or pointers, and multiple arbitrary pickers/wildcards along the path.. But the general idea is to have something like (masses * :mass) as an input and then return (apply #'+ masses). It's a work in progress, but the utility is apparent.
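    Applied to the sand-pile example above, the proposed notation might look like this (again, the exact syntax is undecided, so treat this as a sketch of the idea, not a fixed API):

    ```lisp
    ;; Hypothetical pointer-variable wildcard: masses collects the :mass slot
    ;; of every schema held in some-sand, tracking slot additions and removals.
    (create-constraint :medium ((masses (* :mass)))
      (:= total-mass (apply #'+ masses)))
    ```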

    There's a different way to approach generalization of pointer variables and I will discuss it in the section on KR with Generalized Selectors.

  • Advice   not implemented yet

    It should be possible to advise constraints.

    Advising a constraint means prepending, appending or wrapping its function in your own code. Multiple parties in different places may advise a single function and the effects will simply stack. Later inspection and removal of advice should be possible.

    This notion comes from Emacs and often proves useful for customization.

  • Change Predicates   not implemented yet

    A function may produce an output equivalent to the already-existing value in the slot. This shouldn't trigger a recalculation.

    It should be possible to specify a predicate for a slot that would identify whether two values are equivalent and prevent unnecessary recalculation when other slots depend on it.
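    One possible shape for such a declaration (purely speculative syntax; nothing like this exists yet, and declare-change-predicate is an invented name):

    ```lisp
    ;; Speculative: declare that two :title values are equivalent when string=,
    ;; so re-outputting an equal string doesn't ripple through dependent slots.
    (declare-change-predicate document :title #'string=)
    ```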

  • An Algebraic Solver   not implemented or planned

    You might have also noted how there's a slight inconvenience in the Fahrenheit↔Celsius conversion example: you have to define the same algebraic formula twice. Multi-Garnet may integrate other, more general or specialized solvers (such as for linear equations or inequalities).

    Layout code, for example, could benefit from such an addition.

    Not planned as part of the campaign.

  • Asynchronous Execution   not implemented yet

    Sometimes, you might want to compute something expensive. Asynchronous execution should be an option.

Cells   not implemented yet

The centerpiece of GUI-building in Fern is a cell.

Think of a cell as a biological cell, some shape that reacts and interacts with its environment, which is, for the most part, other cells.

A cell has a shape, a position in space, a transformation, a viewport, context (discussed later), configurations (also discussed later) and a few other things.

  • The Tree Structure

    Unlike a biological cell, a fern cell may contain other cells, thus yielding a tree-like structure.

    (create-cell bicycle
      (:handlebar (create-cell
                    (:is-a (list handlebar))
                    (:shape #|...|#)
                    (:angle 0)))
      (:front-wheel (create-cell
                      (:is-a (list bicycle-wheel))
                      (:shape (make-circle #|...|#))
                      (create-constraint :medium
                          ((wheel-angle '(:angle))
                           (handlebar-angle '(:super :handlebar :angle)))
                        (:= wheel-angle handlebar-angle)))))

    This creates a schema named bicycle that :is-a cell.15 :is-a slot is constrained to append a cell to it (even if the user supplies his own :is-a). The constraint may be removed if undesired, of course. It contains a few other cells within, as slots.

    A cell's :super slot stores the cell that contains it and is automatically reset upon being placed into another cell (or moved out of one). If a cell has no super, it's called a root cell.

    :front-wheel and :handlebar got bicycle automatically assigned to their :super slots.

    Bicycle, on the other hand, has its :parts slot populated with its cellular slots: '(:front-wheel :handlebar #|...|#). If you add a :frame cell to the bicycle cell, or remove :front-wheel, the parts will be updated accordingly.16 This is simply done via slot-removal/addition hooks, available to the user.

    So, in order for you to build a tree of cells, you just place cells within other cells: all the bookkeeping is taken care of.
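    For instance, under the semantics just described (a sketch following the bicycle example; not verified against an actual implementation):

    ```lisp
    ;; Placing a cell into another cell updates the bookkeeping slots:
    (set-value bicycle :frame (create-cell (:is-a (list frame))))
    ;; => bicycle's :parts now includes :frame,
    ;;    and the new cell's :super is bicycle.
    ```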

    Shape of the Bicycle

    You might wonder: what is the shape of bicycle from the example above?

    By default, the shape of a cell is defined to be the bounding rectangle of the union of the shapes of its :parts.

    How is this accomplished? You might notice that if you simply constrained :parts (or a list of the parts' values) and then changed the shape of some part, the constraint wouldn't pick up on the change, because the value of :parts itself wouldn't change.

    Instead, this is accomplished via parameterized pointer variables, or by generating a constraint based on parts manually.
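    In the (not yet fixed) pointer-variable notation from the previous section, the default shape might be expressed roughly like this (bounding-rectangle is an assumed helper name):

    ```lisp
    ;; Hypothetical: collect the :shape of every part, take the bounding box.
    (create-constraint :medium ((shapes (:parts * :shape)))
      (:= shape (bounding-rectangle shapes)))
    ```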

  • Containers

    Now, let's get something out of the way.

    If you want to store a bunch of cells in a list or a vector, or what have you, what do you do? 'Cause you need to do that if you are generating cells.

    Well, you can set up a cell, put all those cells in the data structure of your liking, and draw them yourself. Then you would just have to keep the meta slots like :super and :parts up to date.

    Instead, you could use containers and that's exactly what they do: they are cells defined for some common data structures, and they keep metadata up to date.

    A container gives you direct access to its data structure (simply held in a slot) and takes the bookkeeping work off your hands.
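    A sketch of how using a container might feel (list-container and :items are invented names for illustration):

    ```lisp
    ;; The container is itself a cell; it keeps metadata like :super in sync
    ;; for the cells stored in its data structure.
    (create-cell grid
      (:is-a (list list-container))
      (:items (list cell-a cell-b cell-c)))
    ```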

    There's a different way to approach containers and I will discuss it in the section on KR with Generalized Selectors.

  • Contexts

    Suppose you have two branches of cells, and you want all the cells in one branch to have a red background, but have a blue background in the other branch.

    One way to do that would be to walk each branch and set each cell's background one by one. But then, every time you add a cell to a branch, you would have to set its background as well. Not to mention that you would have to keep track of what branch has what background color.

    The way Fern solves the problem is through contexts.

    Each cell has its own context in its :context slot. A context is a KR schema.

    What's interesting about a context is that it inherits from the context of its super (to repeat: the cell to which the current cell belongs). The super's context, in turn, inherits from that of its own super. The cell at the root of the tree dynamically inherits from the global context object.

    So, the contexts along a branch of cells form an inheritance tree. Note that the values are not copied, and, so, contexts are quite cheap.

    But what you can do now is say

    (set-value some-cell :context :colorscheme :bg +red+)

    And some-cell along with all its parts (constituent cells) and their parts will all have a red background (since the drawing functions rely on context to provide such information).

    All of this is done dynamically, so if you move a cell into another cell, its context inheritance will change (because this is accomplished simply by constraining the :is-a relation).

    Context itself may have other schema within it (e.g. :colorscheme, :font-settings), which in turn also inherit from the namesake slots in the super's context.
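    To illustrate the inheritance chain (a sketch; the :colorscheme/:bg slots follow the example above, the cell names are assumed):

    ```lisp
    ;; Set the background for a whole branch:
    (set-value branch-root :context :colorscheme :bg +red+)

    ;; A cell deeper in that branch inherits the value dynamically...
    (get-value some-part :context :colorscheme :bg) ; => +red+ (inherited)

    ;; ...unless something below overrides it for its own subtree:
    (set-value some-part :context :colorscheme :bg +blue+)
    ```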

  • No Windows

    Well, you might be itching to draw some stuff by now. For that we need to create a window.

    Or do we?

    Because we could simply say: if a cell has no super (thus making it a root of the tree), and if that cell is visible, it gets displayed as a window in your windowing system. Or as a workspace in a desktop environment. Or as a tab in a browser17 Not that anyone would want such a thing..

    A cell with no super may be invisible by default. This is easily modeled as a constraint, so, once it becomes a part of another cell, it automatically becomes visible; this would prevent accidental creation of windows.
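    That default could be modeled with something like the following constraint (my formulation of the idea, not actual Fern code; visible-p is an assumed slot name):

    ```lisp
    ;; Weak constraint: a cell is visible exactly when it has a super.
    ;; Anything stronger (e.g. an explicit set-value) overrides it.
    (create-constraint :weak ((super '(:super)))
      (:= visible-p (not (null super))))
    ```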

    Of course, you can still give hints to the backend responsible for maintaining the window by assigning values to slots that it understands (like :name). And vice versa, the backend could place its own properties or constraints in a given cell (perhaps, to allow some native behavior).

    So, we can define a window to be just a visible cell without a super. This makes it very easy to place your cell in some other cell.

    As such, it seems quite satisfactory that a notion of "window/frame object", so ubiquitous in GUI frameworks, can be done away with fairly easily.

  • The Drawing Process

    The tree of cells contains all the information for its rendition to the screen. And we want it rendered automatically, yet leaving all control to the user should the need arise.

    How should we do this?

    One obvious way would be to walk the tree every screen frame and call a draw function on each cell. That would be quite wasteful: you would be doing this 60 times a second, some cells may be expensive to render, and there may be a lot of cells.

    Instead, what if each cell would have a texture18 Or, rather, an abstraction of a texture over some real GPU texture. that would be constrained as an output of its parts and its own shape? That's a step in an interesting direction. Now, whenever some cell changes, its texture is re-rendered and the change ripples back to the root. So, unless there are changes, no frivolous drawing has to take place.

    This approach is also a bit wasteful when you have many small cells: after all, each cell would need its own texture. More than a memory issue, however, it is an issue of extraneous draw calls for rendering those textures to their super's texture.

    So, the first way has no caching, and the second way is full of caching, and neither is ideal. We need a mixture of the two, a way to choose between immediate drawing and cached drawing for any given cell. Ideally, we could hit a switch on a cell which would decide what kind it is. We also want to leave the user a way to control the process.

    How this is done exactly is not too important for our discussion, but with a few constraints, :context, and a couple of slots, it should be rather approachable. And the whole process will be out on the surface for the user to tweak and modify.

    As for custom drawing (like drawing a non-interactive circle), it is as simple as (draw (make-circle ...)). The draw method relies on :context, so you don't have to pass extraneous information like viewport or default color or such. (There will be a macro to modify a context locally, though.)

  • Sensing the Environment

    One may set the slots of any given cell, and the cell may react to that by setting up its constraints on those slots. The events can be useful as well, and the cell can process them in the same manner.

    For instance, whenever a user issues a key press, an event will be issued, which may then be passed to the active root cell (root cell means a cell without a super). The root cell may then pass it to its branches (or consume it), and so forth. By pass, I mean, there's simply an :event slot, which will be assigned to, and which may be constrained (as an input).

    If a history of events is needed, a constraint can build it with an output into a separate slot (e.g. :recent-events).

    How should the input be dealt with, exactly? Well, state machines let you easily model many types of user input (and, probably, some other kinds of activities as well). Since kr can represent graphs, it is suitable for building state machines on the go. So, in your constraint, you can create a state-machine schema and use its slots as outputs, and the events as inputs.
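    Sketched out, a cell's input handling could then be just another constraint on the :event slot (key-press-p and event-key are assumed accessor names, not documented API):

    ```lisp
    ;; The :event slot is assigned by the environment; the constraint reacts
    ;; to each assignment and records the most recent key press.
    (create-constraint :medium ((ev '(:event)))
      (:= last-key (when (key-press-p ev) (event-key ev))))
    ```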

    In other words, there's no need for any special input-handling abstraction: kr+constraints should be enough to make a cell interact with its environment.

Configurations   not implemented yet

Prerequisite section: There's No Such Thing As Sane Defaults.

Say, we need some sort of configuration objects in our system. A configuration doesn't have to do much: it has to be able to add, replace or modify some slots of another schema to which it is applied. Ideally, a configuration should be reversible, and multiple of them could be layered.

If we want reversibility (a sort of undo mechanism for customization), we can't simply replace the local slots of a configured schema. How about inheritance? It's reversible. However, inheritance doesn't override local values, which is what we need a configuration for in the first place!

    Instead, we want the configured schema to behave as if it inherited from itself after being configured: (:is-a (list configuration configured-schema)). Of course, you could easily create another schema with that inheritance relation, but we want the original schema modified, we don't want a new one.

So, that's what we will define configurations to be: an inheritance relation that makes the configured schema behave as if it inherited from the configurations and, lastly, its old self. Something like this:

(:is-a (list configuration-1 
             ;; ... 
             configured-schema))
So, the configuration slots take precedence over the configured slots. And if you don't want some slot to take precedence, you may simply remove that slot from the configuration.

The way configurations will be done within kr is an implementation detail and shouldn't really incur an efficiency penalty any more than adding any other inheritance relation would. But we get what we set out for: reversibility and layering, all achieved in a rather simple, almost familiar manner.
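    In practice, layering and reverting might then look like this (a sketch; the configuration objects are placeholders, and the empty-list reversal is my assumption of how the :configured-by slot would behave):

    ```lisp
    ;; Layer two configurations; the first takes precedence over the second,
    ;; and both take precedence over the schema's old self:
    (set-value bicycle :configured-by (list night-theme touring-setup))

    ;; Reverting is just removing them from the list again:
    (set-value bicycle :configured-by '())
    ```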

We need to address one more point: versioning. To that end, the configuration author implements a configuration function that returns a specific configuration schema based on what we ask it for (such as a specific version).

And, finally, the act of configuration is accomplished via a :configured-by slot and looks something like this:

(create-schema bicycle
  (:is-a (list vehicle))
  (:configured-by (list (get-configuration
                         :version "1.17.0"))))

So, to summarize, configurations

  • are simply KR objects used to customize other KR objects,
  • are parameterized and versioned,
  • may be applied partially and dynamically,
  • may be layered,
  • are reversible,
  • are modeled just like an inheritance relationship for a new object, which in itself is rather simple to understand and makes them predictable.

Configurations for cells may even be set up contextually: just set up a constraint that appends a configuration to a schema based on its cell type.

So, the necessity of configurations has a lot to do with the importance of circumventing the need for defaults. Nearly all Fern objects are schemata, including global settings, and contexts, and cells. But configurations also help with distributing, testing and storing your own, well, configurations of the system.

Sometimes, bulk configuration may be required (to be applied to several schemas at once). Or, perhaps, a configuration may need to define or redefine an external function upon application, or do some action beyond the schema. The configuration system will be designed for that and won't be limited strictly to kr.

Linear Algebra

APL stands for A Programming Language. It is array-oriented and uses a rather cryptic, but concise, syntax to manipulate arrays in a very efficient manner. You can take a look at the available functions in this document: Dyalog APL Function and Operators, and here's a cool APL demo video. There's also APL Wiki.

Basically, APL provides a neat way to work with arrays, and that means linear algebra, which is an important foundational stone for graphical applications: vectors/points, transformation matrices, etc.

Fern uses April, which is a project that brings APL to Common Lisp. It is a compiler for APL code (to be more precise, April is pretty close to the Dyalog dialect).

(april-f "2 3⍴⍳12")
0 1 2
3 4 5

or, in CL terms:

#2A((0 1 2) (3 4 5))

So, arrays in April are just usual CL arrays, holding either floats or integers (or 0/1 booleans).

I prefer readability, so Fern has translations for all the APL functions provided by April, which means you can write the expression above like this:

(reshape #(2 3) (i 6))

i generates numbers from 0 to 5, and reshape reshapes them into a 2-by-3 array. So, that's pretty neat!19 It's important to note that using April compiler directly with multiple operations in a row may generate better-optimized code, though.

KR with Generalized Selectors   not implemented yet

One interesting direction to think about is to imagine KR objects not at the level of hash tables, but at the level of their slot-access semantics, while leaving storage details to the object itself.

So, for example, you could say that you want a vector to represent your schema.

Then you could define the meaning of accessing :0 in that schema (e.g. returning the first slot), or any number :x.

So, these would be the selectors, and it would be necessary to define their semantics of access and setting.

This way, the containers as described previously are replaced with a much more powerful alternative; all the user has to do is describe any container/data-structure/object in terms of these selectors, and the container would be held in place of the hash table.

And then, the usual definition of a slot would simply define a namesake selector.

Also, this paves a natural way for defining the semantics of generalized pointer variables. So, a selector for * could be defined as all the elements of the data structure.

So, a generalization like that can be really powerful and give a high level of homogeneity. And note that these would still work just as well with constraints.
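As a thought experiment, defining such selectors for a vector-backed schema might look roughly like this (entirely speculative syntax; define-selector-semantics is an invented name):

```lisp
;; Speculative: a schema whose storage is a CL vector; numeric selectors
;; index into it, and the :all selector (for the * wildcard) yields every
;; element, which is what generalized pointer variables would consume.
(define-selector-semantics vector-schema
  (:index (lambda (vec n) (aref vec n))
          (lambda (vec n value) (setf (aref vec n) value)))
  (:all   (lambda (vec) (coerce vec 'list))))
```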

This is not implemented yet, but it's probably one of the first things that will have to be addressed.


Geometry

Geometrical shapes are business as usual. They are implemented as structs20 Manipulation is meant to be functional, and all slots are read-only., but each shape will have a corresponding schema for easy manipulation.

Functions like intersect-p and draw are generic21 For comfort and speed, I will likely be using polymorphic-functions for this. and will work with either representation.

2D API   not implemented yet

Fern will define a specification and set a few requirements for the graphical backends to implement.

The user is expected to install the backend separately from the core.

The specification will include some pretty basic drawing functions, and functions that work on multitudes of shapes. It will also define a few kinds of events, including those for the keyboard and the mouse.

For color, Fern will likely absorb one of the existing Common Lisp color libraries.

The specification will be guided by SDL (a platform-independent graphics library), and, in fact, SDL will be the first backend to be implemented.

Native look is beyond the scope of the campaign. However, the backend may itself provide configurations for any existing cells for the user to apply to make them look and behave appropriately.

3D API   not implemented or planned

It's not out of the question that at some point in the future, it might be very handy to acquire a language for building 3D scenes. Obviously, it would have to integrate with the existing 2D primitives.

3D is not a part of the campaign.

Text Rendering   not implemented yet

The API for rendering text will mimic some basic Pango.

Clicking It All Together

If you made it this far, congratulations! Fern is the most technical section of the bunch. Perhaps that's because much of it is implemented already (KR, constraints, basic geometry, and linear algebra).

Sans a few implementation details, at this point, you probably know about as much about Fern as I do.

So, what do we have?

We have a fernic tree full of cells, and it is an organism which renders itself to the screen, each cell responsible for its own processing and interaction with the world around it (though each may be influenced from the outside).

And if you wanted some geometrical shape to be interactive, you just make a cell of it and place it somewhere.

If you have to render a million interactive shapes, you set up a container, also a cell, and teach that cell to interact with that container, and set up all the necessary structure to deal with large amounts of drawn material efficiently.

An interesting outcome of all this cell-tree business is that the displayed graphics correspond to the cells that render them, so pixels attain semantics beyond the visual aspect.

This yields not just self-documenting, but self-demonstrating, semantically-explorable user interfaces.22 Or, if you lose the marketing speak, this just means that pixels map back to objects, which map back to code, which may, in turn, be associated with some documentation.

Fern isn't so much about building GUIs as it is about providing a good way to maintain a living, dynamic tree-like structure, every branch of which you can freely fiddle with. Constraints let you do that declaratively, and contexts let you localize your changes, and configurations provide for a rather painless cross-user customization experience.

With prototype OO, the classes are objects, and the objects are classes, so, in case of kr, the boundary between the data structure and its own definition disappears.

KR makes it easy to remove or add any slots, constraints and behaviors. Inheritance relations allow for dynamic value propagation. Contexts let us tweak whole branches at once, and configurations rid us of the problematic notion of defaults. Unlike conventional OO, kr grants a great deal of runtime flexibility.

All facets of Fern mostly boil down to kr, constraints and the few bare-bones concepts that we define with their help, namely configurations and contexts. Beyond that, there are just the event objects, the texture objects, the shapes, and the API for drawing shapes and text to said texture objects. Some linear algebra. That's it.

Fern is a spiritual successor to Garnet, a Common Lisp project for building graphical user interfaces (GUIs). Garnet originated at CMU in 1987 and became unmaintained in 1994. It's a great toolkit which was used for many research projects, some of which are still active. I talk more about Garnet in another article.

Both KR and Multi-Garnet8 Multi-Garnet is a project that integrates the SkyBlue constraint solver with Garnet. For a technical report, see: Multi-Garnet: Integrating Multi-Way Constraints with Garnet by Michael Sannella and Alan Borning. may be found in the Garnet repository1 See the home page here. The latest Garnet repository may be found here.. Or just see the Wiki.

Rune: A Platform for Constructing Seamless Structural Editors

In just a little while I will explain what the words in the title mean exactly. But that's in just a little while. Because for far too long has the computer interaction been plagued by the tyrannical idea of plain text…

… but what is plain text, really?

One way to look at plain text is how you store data: as human-readable text.

Lots of things are plain text on your computer. Source code is plain text. Notes are plain text. Dotfiles, command history, even some rudimentary databases may all also be stored as plain text.

Plain text, in that understanding, is something that feels safe and accessible. Whatever complex configuration a program requires, you know all you need is a text editor to modify it. The whole idea has to do with storage and a guarantee of ubiquitous, easy access.

The other way is to say that plain text is how you work with data: as a string of characters.

Unlike the previous (benign) interpretation, this one has been a source of pure evil in lots and lots of our experiences of interacting with computers. And within the realms of this evil interpretation, plain text has been misused and abused to no end.

The common symptom of such abuse is how the receiver of data has to be doing some parsing based on some specific documentation instead of, say, a universal representation23 Such as s-exps or, Stallman save us all, XML.. This misuse is ubiquitous and apparent in how Unix-like programs communicate with the user and with each other: through some ad-hoc textual output, which you have to parse somehow.24 As has been duly noted, Unix pipes are like hoses for bytes.

Now, I have to make the distinction clear: the process of serializing data as text and passing it as a string to another program is not evil in and of itself. It is simply a necessity for badly-integrated dead operating systems (such as all the mainstream ones) where programs have to interact through such barbaric means, but it all has little to do with the plain text inanity that we are talking about here.

What I want to focus on is the fact that in user-level applications there's no automatic conversion to a structured representation once you receive that output at all. When you type ls in the terminal and get a dump of lines describing the directory structure to you, that dump of lines carries information that you can only infer by means of parsing. And that's what really is evil.

Now, the example with our 1970s terminal emulators is well-known, and everybody knows that the situation is bad. But bad as the situation is, everyone has long since decided to accept it as an inseparable part of daily life, one that can't really be argued against except in some dreams about the far-off bright future.

It's bad, yes, but it's really OK, you see. You can deal with plain text output from some programs just by doing some parsing on it. You can deal with it just fine as you can deal with sed/awk/grep and all that regexp goodness with mad syntax. Mad syntax and bad decisions and manual fiddling with things that the computer should've really been doing for you in the first place.

But do you know where the real treasure trove of insanity is buried?

That's right. You guessed it.

Within our precious text editors /( ͡° ͜ʖ ͡°) /.

The Status Quo of Text Editing

So, get your treasure maps and talking parrots and wooden legs ready, we are going treasure hunting.

Your text editor probably has some kind of way to state that a certain file you are working on has some kind of format. In Emacs, that's called a "major mode". For instance: elisp, dired, help, org, whatever.

And what's a major mode, really?

I will tell you what it is. It is but a declaration that the text string25 Emacs people like to call their strings buffers, but that's but a trick of the tongue: a buffer is really no different than a string with some confetti flying around. you are going to work with has a certain kind of structure.

And so you need certain kinds of tools and further declarations to deal with that structure. What could that possibly be? Well, it could be something like: a function to select the current function definition. Or: delete the current tree branch in an org-mode string. It's always something to do with the semantics of the immediate characters surrounding the point, all quite primitive stuff really.

And then there's also a need to have a syntax table for the syntax highlighting, which is basically a bunch of rules describing how to indent certain semantically-significant parts of the string and what pieces of that string you want colored.

Strings, strings, etc, etc…

So, did you see what just happened? Your text editor has just taken some data, decided to call it a string, thus denying and stripping it of its original structure, and then, since the original structure indeed exists and is valuable, it decided to offer some subpar tooling geared specifically for different types of strings. And called it a day!

Somewhere along that way, someone was badly, badly bullshitted. And that someone was you, me, and everybody else who touched any modern text editor.

95% of all user-level operations in a contemporary text editor are accomplished through some kind of regexp trickery. The rest is done by walking up and down the string. Walking and parsing. Parsing and walking. Back and forth, back and forth.

If you know what turtle graphics is, you know how modern editors work with text.

How Much Structure is Out There Anyway?

A lot. Have a tiny taste:

Notes. These tend to have a tree structure. What does your plain text editor do when you want to remove a section at point? It scans backwards starting at the point, inspecting each character, until it finds a demarcation that scans: I am a headline. A couple of stars at the beginning of a line. An easy regexp. Not too expensive, so it doesn't really matter, does it? Not until you try to build a navigator for the structure of your tree, anyway.

Well, now, go ahead and make a table in your notes. A bit harder to reason about, but well. Autoformatting as you type is kind of helpful. But a multiline string within a cell? Hmm. No, no, forget about that!

But it's just a plain-text table.

So, what went wrong? Why can't you have multiple lines in a cell?

Answer: because it's too hard.

Try summing all the numbers along a column now. You can do it. Was it worth it? How much scanning did you just do?

Now mark some cell in your table in bold. Since it's just plain text, you have just surrounded your word with a pair of markers like this: *word*.

See, even words have structure. Are words objects? We wouldn't have a word for them if they weren't significant somehow. And that's how I would want to analyze and work with them: as words. For the plain text editor, a word is just a cluster of characters somewhere in a string, and there's a whole pile of mess built on top of that string to convince you otherwise.

  • Your word was supposed to be an object that carries information about itself.
  • Your table was supposed to be a 2d array.
  • Your notes were supposed to be a tree.
  • Your table was supposed to be a node in that tree.
  • The word was supposed to be a part of a sentence within a cell within your table.
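To make that list concrete, here is a minimal sketch, in Python for illustration (Project Mage itself is Lisp-based), of what those objects could look like; all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Word:
    text: str
    bold: bool = False          # style travels with the object, not as *markers*

@dataclass
class Table:
    cells: list                 # a real 2D array: a list of rows

    def column_sum(self, j):
        # summing a column is just indexing, not scanning and parsing a string
        return sum(row[j] for row in self.cells)

@dataclass
class Section:
    heading: str
    tags: set = field(default_factory=set)
    body: list = field(default_factory=list)      # tables, paragraphs, pictures…
    children: list = field(default_factory=list)  # the notes form a tree

# The table is just a node in the tree; the word lives inside a cell.
table = Table(cells=[[1, 2], [3, Word("total", bold=True)]])
notes = Section("Budget", tags={"finance"}, body=[table])
print(table.column_sum(0))      # no regexps were harmed
```

Note how the bold word carries its style as data, instead of being a cluster of characters wrapped in `*` markers.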

All of that structure simply doesn't exist when you edit as plain text.

Some may say: So? Strings have been working just fine for the most part!

Well, "for the most part" is not good enough.

We get the sense of power from the editors that work with files as strings because, in theory, you could make them do anything (or almost anything, as we will see later). That's a false sense. And the system won't tell you that you need to stop. It will simply say: try harder, do more optimization, do more caching, include more edge cases, fix more bugs, come up with better formatting, turtle around better, etc… All while it gets slower and slower and more bloated and more monstrous. Until it all becomes impractical and then you give up and say it's good enough.

But it isn't enough.

Doing something the wrong way can only take you so far, and we have already seen the limits of it in our text editors. We have hit the ceiling, and there's no "up" if you wish to pretend your data is doing fine as a string.

"Strings are not the best representation of structural data" may seem like a truism. And it absolutely is. But the outcomes of the opposite understanding are ingrained into our reality nonetheless.

So, let's just see what we have ended up with.

The Fatal Flaws of the Data Structures for Generic Strings

The modern text editor will store your file in some kind of string-oriented data structure: a gap buffer, a rope tree, a piece table. They don't even expose those structures to you (not that it would help much).

Here are the outcomes of the "everything is just a string" approach to text editing:

  • No querying.
  • No trackable metadata.
  • No semantic editing.
  • No efficiency.

The only way for you to reliably query some structure in the textual information that your file represents is to scan it all the way through. For instance, you may ask of your document with notes: find all sections that have a certain tag associated with them. What does the editor do? It scans the whole damn thing, every time you ask for it.
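As a sketch of the difference (again in Python, with hypothetical names): once sections are stored as objects, "find all sections with a tag" is a walk over the actual tree, rather than a regexp scan of the whole flat file on every query:

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    heading: str
    tags: set = field(default_factory=set)
    children: list = field(default_factory=list)

def sections_with_tag(root, tag):
    # walk the real tree; no re-parsing of a flat string on every query
    hits = [root] if tag in root.tags else []
    for child in root.children:
        hits += sections_with_tag(child, tag)
    return hits

doc = Section("root", children=[
    Section("Plans", tags={"todo"}),
    Section("Log", children=[Section("Monday", tags={"todo"})]),
])
print([s.heading for s in sections_with_tag(doc, "todo")])
```

And nothing stops such an editor from maintaining a tag-to-sections index on top, turning the query into a dictionary lookup.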

You could try to map every removal and edit back into a separate data structure. But, then, your editing operations still haven't learned anything about the actual semantics of the document.

Not to mention that the whole concept of constantly reconstructing a structure from its inferior representation is backasswards!

Moreover, incremental caching is hard to get right, especially if you are dealing with structures more difficult than a sequence of words. And don't miss the fact that it is slow, and it will get slower as the file gets larger. A misplaced character may disrupt the whole structure of the document. This is not trivial to deal with. And it is error-prone.

Incremental maintenance of a separate structure based on string edits is hard. That means tracking object identity is just as hard.

And object identity is paramount if you want to associate some external knowledge with a place in your document. Like, for instance, index your content.

By extension, operating on semantic units is not trivial either. Neither is content hiding.

None of it is impossible, though: just hard enough for you not to bother.

Trying to fit a string to another structure or impose a structure on it internally (by carrying around and hiding the markers) is basically Greenspun's tenth rule applied to text editing:

Any sufficiently complicated string-based editor contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a structural editor.

One way to look at it: when working within the string paradigm, you can never treat anything as an object. Objects let you carry information and free you from the necessity to encode the object within its presentation. Objects let you have composition, complex behaviors and complex ways of interaction. None of this is very reasonable within the string paradigm. It would require identity tracking, and if you are doing that with strings, you are really just beginning to reimplement structural editing, without the slightest chance of getting it right.

Unstructured data leaves you with just one option: turtle-walking.

Is turtle-walking fast?

I am glad you asked! Because I can't even begin to explain how painful the micro-freezes are when I am working with text. I can't consciously perceive lag very well. I don't think anyone does. It seems to feed back into my brain, which starts predicting the lag…

Brain on lag.

My fingers start typing slower; I get an uneasy feeling, a nausea of sorts, when there's lag. Text editing isn't supposed to have lag. And it's in the simplest of stuff, and all over the place.

I am basically used to raggedy scrolling by now. As much as anyone can get used to it. I think I have accepted it. I have tried to make it faster; it's all still there.

Typing? Typing isn't nearly as snappy as it could and should have been.

Multiple cursors? Absolutely not.

It's widespread wisdom in software engineering that programs run slower than they could because of bad abstractions, bad algorithms and bad data structures. Everybody knows that. And yet, here we are, tolerating bullshit gap buffers and piece tables and rope trees for text storage, along with a bunch of clever algorithms and hacks that optimize this mess.

Here, I found an uncut gem for you, dear reader. I mean, just look at this:

[VS Code] Bracket pair colorization 10,000x faster

Unfortunately, the non-incremental nature of the Decoration API and missing access to VS Code's token information causes the Bracket Pair Colorizer extension to be slow on large files: when inserting a single bracket at the beginning of the checker.ts file of the TypeScript project, which has more than 42k lines of code, it takes about 10 seconds until the colors of all bracket pairs update. During these 10 seconds of processing, the extension host process burns at 100% CPU and all features that are powered by extensions, such as auto-completion or diagnostics, stop functioning. Luckily26 Emphasis mine., VS Code's architecture ensures that the UI remains responsive and documents can still be saved to disk.

CoenraadS was aware of this performance issue and spent a great amount of effort on increasing speed and accuracy in version 2 of the extension, by reusing the token and bracket parsing engine from VS Code. However, VS Code's API and extension architecture was not designed to allow for high performance bracket pair colorization when hundreds of thousands of bracket pairs are involved. Thus, even in Bracket Pair Colorizer 2, it takes some time until the colors reflect the new nesting levels after inserting { at the beginning of the file:

While we would have loved to just improve the performance of the extension (which certainly would have required introducing more advanced APIs, optimized for high-performance scenarios), the asynchronous communication between the renderer and the extension-host severely limits how fast bracket pair colorization can be when implemented as an extension. This limit cannot be overcome. In particular, bracket pair colors should not be requested asynchronously as soon as they appear in the viewport, as this would have caused visible flickering when scrolling through large files. A discussion of this can be found in issue #128465.27 That's: Issue Number One Hundred Twenty Eight Thousand Four Hundred Sixty Five. Jesus.


Without being limited by public API design, we could use (2,3)-trees, recursion-free tree-traversal, bit-arithmetic, incremental parsing, and other techniques to reduce the extension's worst-case update time-complexity (that is the time required to process user-input when a document already has been opened) from O(N+E) to O(log³(N)+E) with N being the document size and E the edit size, assuming the nesting level of bracket pairs is bounded by O(log N)28 Yeah, O(log(N)), biatch!.

Look: if you had the right data structure to store your code within, then, maybe, you wouldn't need very clever optimizations and caching tactics in the first place.

Strings just aren't making it.

General-purpose data structures are not good enough for specialized data.

It almost sounds like a tautology, but it's important that this point be absolutely clear.

But can we work with text differently? Does specialized tooling make sense? You bet!

What We Want Instead

So you are telling me you took some highly structured data that was stored in plain text, foolishly decided to edit it as plain text, and now you have a ton of half-assed tooling that works on strings and is obviously insufficient? What do we do now?

Well, the reason all of that happened is that you were too lazy to parse the string into a fitting data structure and implement some common interface to that data structure.

Does that sound too simple? Well, it is that simple!

Let's spell that out: let the user use his own data structures.

As we have already noted:

  • You store a table in a 2d array. Not a string.

But also:

  • A note tree in a tree.
  • A huge file in a lazy list.
  • Live log data in a vector.
  • Search results as a vector and recent search results as a map.
  • Lisp code in a tree.

And then you represent:

  • A note section as an object with a heading and tags and what have you.
  • A word as an object with style data and a unique ID within a document.
  • A literate program source block as an object storing its source location and resulting output (or history thereof).
  • etc, etc.

Bear with me: it doesn't mean added work. It simply means a different kind of work. And much less of it (if you do it right), with more robust results.

That's not quite the end of it, though. We don't just want a tree editor for our notes. We want that tree to freely locate tables within it.

We want to piece together an editor with editors inside.

Because if we are only allowed to employ a single structure for the whole document, it won't help us very much yet.

We want to support embedding.

  • A tree of notes will have sections as its nodes.
  • A section is an object with a heading.
  • A heading is an object with tags.
  • A section body includes a list of other objects.
  • Those other objects may be tables, paragraphs, pictures, code blocks, really: anything you want.

And a table will then just be a 2d array in memory and its (tabular) cells may contain various other types of objects with their own unique structure.

Every data structure should have a dedicated editor! But more than that we should be able to freely piece these editors together and embed them into each other.

That raises a question: if we piece together a bunch of editors, can they even interoperate?

And the answer is: Of course! And why not? They can and the editing experience will be better than any structure-free string-slinging editor could ever offer.

With structural editing, you can retain the comfort of a text-driven interface. With a cursor. With your muscle memory that knows how to navigate any other textual document.

More than that, you get easy semantics and efficient data access.

A conventional editor requires you to do scanning to accomplish anything at all, even to move your cursor. That isn't just reminiscent of imperative programming; it's a very limiting kind of it.

The user ought to be able to access the data structure programmatically, as an object in memory, and thus be able to work with it in any manner of a fitting programming approach.

How can we build editors that let us work with custom kinds of data and yet retain a comfortable textual interface?


We need a special kind of editor that will give us all the desirable properties.

Let's call such editors lenses.

Let's say that lenses can contain or otherwise refer to arbitrary kinds of data that they are supposed to edit. So, in general, lenses have sources that they interface.

The next piece of the puzzle is a universal editing interface.

That sounds officious. Let's paraphrase: no matter what the source or the kind of a lens in question is, there are some kinds of operations that apply to all of them. Namely:

  • Navigation. For instance: move the cursor to the next semantically-significant object (like a word).
  • Modification. Example: remove a range represented by a certain area on the screen.
  • Querying. Example: search for something within your sources.
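One minimal way to express such a universal interface, sketched here in Python with hypothetical names (the real Runic spec will be a Lisp protocol, not this class), is an abstract interface that every lens implements against its own source:

```python
from abc import ABC, abstractmethod

class Lens(ABC):
    """A lens edits some source through one common set of operations."""

    @abstractmethod
    def forward_unit(self, cursor):
        """Navigation: move to the next semantically-significant object."""

    @abstractmethod
    def remove_range(self, start, end):
        """Modification: remove whatever the given range maps to."""

    @abstractmethod
    def query(self, predicate):
        """Querying: find objects in the source matching a predicate."""

class ListLens(Lens):
    # Here the source is a plain Python list; a gap buffer or a piece
    # table could implement the very same interface.
    def __init__(self, items):
        self.items = items

    def forward_unit(self, cursor):
        return min(cursor + 1, len(self.items) - 1)

    def remove_range(self, start, end):
        del self.items[start:end]

    def query(self, predicate):
        return [x for x in self.items if predicate(x)]

lens = ListLens([3, 1, 4, 1, 5])
lens.remove_range(1, 3)                 # the source is edited directly,
print(lens.query(lambda x: x > 2))      # no string is ever re-parsed
```

The point of the design: anything written against `Lens` keeps working no matter which backing structure sits behind it.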

Here's a viewpoint: suppose you use Emacs, which uses a gap buffer to represent its file. Now, it shouldn't be problematic (in principle) to swap the gap buffer for, say, a piece table: all you have to do is implement the interface for a piece table, just like the one you did for the gap buffer.

And that's what a lens does: it implements a certain interface specification for working with its sources.

The specification of such universal editing operations lets us retain an efficient textual interface.

And so we build our textual interface on top of such a specification.

Runic Specification

Rune defines what a lens is, but its main purpose is to define a specification of common structural operations that every lens implements.

The specification concerns itself with navigation- and modification- related functions. It covers anything you would expect from a string-based editor (and then some).

Here are a few examples of such functions:

  • Select the current semantic unit.
  • Remove multiple ranges (multiple cursors).
  • Place a lisp object at the cursor(s).
  • Structurally search for a lisp object in your sources.
  • Jump N semantic units forward.
  • Get the positions of all visible representations that match a predicate.

And so on.

A lens maps such Runic operations to its sources.

There is no reason why this should be any harder than implementing an interface to something like a rope tree or a gap buffer.

But since the data structures are arbitrary, and the semantics vary, the vocabulary changes a bit. So, instead of saying:

Remove all characters in this range of string that I see on the screen.

You now say:

Remove whatever you need to remove, but it's represented by this visual range on the screen.

And now it's the responsibility of a lens to map the character range back to the relevant piece of data in its sources and do the appropriate removals.
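As a toy sketch of that responsibility (Python, hypothetical names): a lens over a list of words lays the words out on a line, remembers which visual span each word occupies, and maps a "remove this visual range" request back to removals in its source:

```python
class WordsLens:
    """Source: a list of words. Display: words joined by single spaces."""

    def __init__(self, words):
        self.words = words

    def render(self):
        # render the text, recording the visual span of each word
        self.spans, text, pos = [], [], 0
        for w in self.words:
            self.spans.append((pos, pos + len(w)))
            text.append(w)
            pos += len(w) + 1
        return " ".join(text)

    def remove_visual_range(self, start, end):
        # map the on-screen range back to whole words in the source
        self.render()
        self.words = [w for w, (a, b) in zip(self.words, self.spans)
                      if b <= start or a >= end]

lens = WordsLens(["remove", "whatever", "you", "need"])
print(lens.render())                # "remove whatever you need"
lens.remove_visual_range(7, 15)     # the span covering "whatever"
print(lens.words)
```

A real lens would map ranges to arbitrary source data (cells, nodes, objects), but the shape of the mapping is the same.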

The specification will support operations for navigation, removal and querying, but also:

  • ranges and multiple positions (thus allowing multiple cursors),
  • undo mechanics (via a notion of reversible operations),
  • narrowing,
  • and probably a bunch of other things.

It's important to note that it's not an error for a lens not to provide something the specification asks for. The user simply won't be able to use relevant functions, and that's that.

Lenses implement the specification, they may interoperate with other lenses, but, otherwise, they are self-sufficient. A lens even displays itself: it is responsible for its own visual representation (and as we will see later, it doesn't even have to be textual). A lens tracks the cursors and selections (if any) and displays them too.

A lens is even responsible for managing its own embedded lenses. This simply reflects the fact that structural sources oftentimes embed various types of data.

Sources can be anything: data structures, database connections, references to some other data structures. It all depends on the purpose of the lens. Lenses may even share sources.

The internal structures of those sources may, in turn, serve as sources to other lenses, which are specialized for those embeddings. Naturally, this calls for lenses to be arranged as trees.

And trees we will make them.

Lenses are Cells

When discussing Fern, we introduced the notion of cells. Cells correspond to the needs of a lens very well: like cells, we want lenses to process input, display themselves, form trees, and otherwise be self-sufficient.

Well, instead of saying that lenses are just like cells, we can claim: lenses are cells. And so they are.

Everything that applies to cells applies to lenses: configurations, contexts, the tree-like structure, the flexible caching mechanisms.

Now, here's an interesting one: since a lens is just a cell, you are in full control of how it displays itself.

Why is this interesting? Well, it is very interesting.

Suppose your lens is a table. You can, for example, draw all the separators as real lines, not as pipes and dashes:

| A | B |
| C | D |

Or, suppose, your lens sources a 2D array:

#2A((0 1 2) (3 4 5))

For the text-like interactive purposes, the lens can think of this matrix as:

[0 1 2 3 4 5]

And then render it like the matrix that it is.

For practical operation, it would help for it to distill this matrix into further lenses that operate on numbers. (So, yielding a lens with 6 embedded number-oriented lenses.)
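A sketch of that linearization (Python, hypothetical names): the lens keeps the source as a real 2D array, exposes a flat cursor order for text-like motion, and maps a cursor index back to a (row, column) position for rendering:

```python
class MatrixLens:
    def __init__(self, rows):
        self.rows = rows                  # the source: a real 2D array
        self.ncols = len(rows[0])

    def linear(self):
        # the text-like view of the matrix: [0 1 2 3 4 5]
        return [x for row in self.rows for x in row]

    def cursor_to_cell(self, i):
        # where to draw the cursor on the rendered matrix
        return divmod(i, self.ncols)

lens = MatrixLens([[0, 1, 2], [3, 4, 5]])
print(lens.linear())
print(lens.cursor_to_cell(4))   # the fifth element sits at row 1, col 1
```

In the real thing, each cell would itself be an embedded number-oriented lens; this sketch only shows the cursor arithmetic.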

The lens maintains its internal cursor position and selection information, so it can render the corresponding visual elements on that matrix. (And when the motion operation of the cursor exceeds the boundary of the lens, the superlens relays the rest of the motion operation to the next lens or does its own cursor manipulations.)

Lenses being cells is interesting for a variety of reasons: not only do you have full control of their display and dimensions, you can also fully treat them as non-textual editors.

Suppose your lens corresponds to an image path. You can display that image, and that's alright. But if you choose to inherit the lens from an image editor, now you've got yourself an embedded image editor in your otherwise textual document! And that editor doesn't disrupt the textual flow of editing in any way, because the lens still provides a textual interface, which, in this case, could simply be that the cursor acts as if the whole image were a single character. Alternatively, when the cursor lands on that lens, it could turn back into an image path. It's just a UI decision.

So, a lens may be a hybrid of a textual and a non-textual editor that integrates seamlessly with the surrounding lenses.

The Quality of Seamlessness

Some people perceive structural editing as some kind of boogeyman that prevents a comfortable editing experience: as if, were it structural, you would have to work with the structure and couldn't work with it like it's just text.

It's precisely the opposite. Structural means: simpler, more efficient, and more powerful. And more comfortable.

However, traditional structural editors do tend to impose some limitations: for instance, by forbidding you to insert certain characters into random places. Or by restricting your range of motion and selection. Or they may show "holes" that signify a missing piece of information (like a value between two operands in infix notation).

But these are not the inherent qualities of structural editing. This is just the way someone chose to implement it.29 I discuss non-seamless structural editors and language workbenches in All Else Is Not Enough.

So, when I say that Rune provides seamless structural editing, what I mean is that the other kind of choices are possible: the kind where the structure is only apparent, not in your face, so it's as if there are no seams. These choices necessarily take place at the level of lenses, but this kind of capability will generally be reflected in the Runic specification itself.

It all really comes down to what you want to happen. The editing experience comes down to your ability to map between the state of the structure and the state of the representation via operations on the representation over time. There are no hard rules here, because it all depends on the nature of the edited object in question.

But an important point to realize here is that the two states don't have to be consistent. An incomplete or ambiguous operation may simply be pending until clarified, for instance. And the lens will be responsible for keeping track of that state.

Of course, structural editing should be apt at keeping ambiguities away, or at enforcing clarity. But there's nothing that says you can't deal with intermediate states while keeping the structure intact.

Plus, ambiguity can always be localized to the structural element where it appears. It doesn't have to disturb the whole structure around it.

Really, in the end, it's all down to the interaction policies and UI design, and all that logic is the responsibility of a particular lens in question.

The Runic spec, however, will include many operations (e.g. cutting and pasting on ranges) which more or less cultivate, if not outright require, a rather high degree of granularity and flow if a lens is to be usable at all.

Seamlessness also refers to the fact that lenses may contain lenses within, and multiple lenses may coexist within the same editing space, all without any restrictions on working at their boundaries. This reflects the fact that most useful structures are composite. And it doesn't matter if the lenses are of different kinds: they can work and cooperate with other lenses and may even be contextually influenced.

The fact that all lenses implement the same interaction interface and coexist with each other reinforces the idea that they are really pieced together into a single entity that you can work with as a cohesive whole, not as a bunch of unrelated piled-up objects.

Lastly, I suppose, the experience of working in such an environment may also be called seamless. At least, that's what I think it will be.

The structural editors of old didn't have the means of defining embedded editors or the interoperation mechanics, and so, without that kind of attitude in place, they did the simplest thing: restrict the user in many ways.

The bottom line: I don't see any obstacles to writing structural editors that replicate the string-based editing experience to the highest degree of comfort and beyond. Intermediate, ambiguous states may be accounted for until resolved, and, most importantly, they may be treated locally, i.e. without disturbing the surrounding structures.

Naturally, a little more thinking has to go into the interaction design for this, on a structure-by-structure basis, but the resulting experience will far exceed that of both the string-based editors and the traditional structural editors of the past.

Rune at a Glance

So, here's a bit of a summary:

  • A Runic editor, called a lens, maintains or interfaces its custom sources (such as a data structure).
  • The source is exposed to the user for direct programmatic access.
  • Each lens implements the Runic specification, which makes integration and the editing experience seamless.
  • The Runic specification covers structural and semantic operations: navigation, modification and querying.
  • We can build a seamless textual interface based on that specification.
  • Lenses have contextual access to the environment and may interact with other lenses: they are just cells.
  • And since they are just cells, they form trees, process input and otherwise do everything a cell can.

Rune vs. a Plethora Of Text Editors

So, yes, there are a lot of text editors out there. With that many, we might as well build a tool for building them.

Rune aims to become such a tool. A specification for building structural editors, operating seamlessly. It's quite easy to imagine: for instance, if someone writes a spreadsheet lens, that lens should Just Work™ when, say, you use it as part of your Rune-based notes.

When you think about it, it's a very nice thing to have, because it effectively reduces an M×N problem to an M+N problem: you don't have to rewrite the spreadsheet formatting code whenever you write a new note-taking application, or any other application that uses a spreadsheet within. Such reusability is harder to achieve in a conventional editor: without a lens-like notion of embedding, it's harder to reuse existing code (even if conventional editors had good spreadsheets (which they can't have)).

And what have we gained?

Everything: now you can programmatically access and modify the sources that the lens represents.

You can now work with structures programmatically.

And what have we got to lose? Other than those pesky tons of legacy, I don't think anything at all.

A lens is a very simple idea. A lens is a Rune-based editor that implements a seamless access interface for its sources.

Implementing lenses for particular sources may seem hard, but lenses will let us do what no string-based editor ever could.

A particular lens may require a bit of an upfront cost of thinking about its design, but those initial costs are generously reimbursed over and over again once you start using it, once you start chiseling your workflow with it to your taste.

And we don't just make the old thing better: we gain so much more beyond that.

A structural approach makes interface-building a natural part of the design process.

And all the while, we retain flexibility.

The need to run a Lisp image aside, with Rune you should be able to go from as minimal to as advanced as you like. You don't have to carry complexity you don't need. By mixing and matching lenses, you can get just the kind of behavior you need from an editor.

When people talk about editors, they mostly concentrate on the fluff – user interfaces, keybindings, defaults.

But all of these can be malleable, and that's what Rune will achieve, as it doesn't set as its goal to dictate what any of those should be. It's just a core specification that describes what a lens is and how to work with it. A core which will hopefully be simple to understand and reason about.

And Fern will take care of the rest.

Rune: Adding Things Up

Working with plain text seems to have become a virtue of sorts. And one of the finest ironies of plain-text mania has been the existence of WYSIWYG (What You See Is What You Get) editors for Markdown.30 That may sound ridiculous, and it's not what people usually think of when they hear about plain text, but plain-text editing is The Original WYSIWYG. You literally get what you see.

And I understand why: I always forget where the link goes into the [wat](wat) construct. It must be that I am not the only struggling soul.

All sufficiently advanced text editors are, to a degree, rich text editors.31 Emacs as word processor.

And they are all bad at it.

So, they like to pretend they are just for "plain" text.

There was plain-text storage and now there's plain-text editing.

Calling it plain was quite unfortunate.

It's just text.

And we are just doing text-editing.

Text is full of structure; sometimes ad-hoc structure; sometimes a structure of compositions and embeddings of various types of data.

What we have come to call plain text is more often than not highly and precisely rich.

And yet, "text" has somehow ended up being synonymous with "strings", and now we live in the structureless paradigm of editing.

And that's just not right. There are a lot of downsides to this rather brute treatment.

It would be much better if we were to accept and embrace the structural nature of textual data. Structure is valuable information and so it's worth preserving it, exposing it both for interaction and examination. That would simplify a great deal of things, and open up many possibilities.

This must come about via our ability to write specialized editors.

By establishing some common groundwork for such editors, we may just as well reap the benefits of their cross-interaction and composition. And by allowing these editors to represent whatever source of data they like in whatever way they like, we stand to gain even more.

All of this means flexibility. As it happens, flexibility is oftentimes perceived to be at odds with efficiency.

On the contrary: when flexibility means the ability to choose the right way to approach a problem as a user, at runtime if necessary, the situation is quite the opposite. And that includes the choice of data structures and appropriate algorithms.

Flexibility yields us efficiency both in terms of speed and in terms of user experience.

And that's what Rune strives to be: flexible. The capabilities of its editors will extend well beyond those of standard text editors (including Emacs, which is often, and for no good reason, declared omnipotent).

And, coupled with Fern, text editing is not the end of it either: new avenues are opening up for building better development environments, computational notebooks, terminals, note-taking applications and who knows what else. In fact, the rest of Project Mage concerns itself with building and exploring the possibilities of such applications.

The tools sporting generic data structures have served us faithfully, but they are quickly becoming a maintenance burden and a wall against further developments. And it seems we have been just as faithful to these tools in return. But faith is best examined sometimes.

Case Study: Scaffolding

Before we move on, I would like to talk about a particular problem which has the name of "scaffolding".

Scaffolding involves copious amounts of boilerplate, both in directory structure and in structure within the files in those directories. When you want to create a new project, you tend to copy certain files such as a project definition, licenses, maybe a few general-purpose source files, the testing suite.

However, there's work after copy-pasting, and that's where the problem lies: you have to change certain places by hand. The license year. Or the package name. The project name – that tends to be all over the place. Or perhaps the author name and the year of creation.

Some programming language ecosystems have tools that facilitate the process by asking you a series of questions and then creating the necessary project structure. But such tools are far from universal, and sometimes the author relies on his own project structure anyway, with custom things modified and added.

With Rune, scaffolding could be done using transclusion32 Transclusion is the act of embedding the same piece of information into multiple places. See this Wikipedia article for more information.. Meaning: it would make it easy for you to mark the places of repetition in your project and just change them across multiple files at once.

You should be able to take some existing project files as a template and just mark the repetitive areas (or run an interactive search to identify them automatically). Now you can change any part that you have identified as such and that change will ripple across the whole new project.

And, then, with Rune, you could collect all your lenses that refer to the repetitive areas into another lens and that would be your configuration file (which is constructed automatically at runtime).

Or you could just navigate your new project and edit stuff in bulk.

Doing that with a command line utility or a configuration tool is rather difficult. If you get something wrong, rerunning is not typically an option.33 This article mentions some other pitfalls of conventional scaffolding tools (e.g. comment preservation), if you are interested. But, perhaps, the greatest downside of such tools is that they are not universal. You need to learn a new one every time you want to do something new; or switch back to manual editing whenever you want to change some old pattern.

A good set of editors that support lensing is what you really need. Moreover, you will have the power of a live image at your fingertips, so you can customize and visualize the whole process.

Kraken: KR-driven Programmatic Knowledge

Prerequisite reading: Knowledge Representation, Rune.

[Illustration: a monstrous, muscular kraken attacking a naval ship on a stormy sea under a shining sun]

I think the problem with many note-taking applications is that they are very opinionated as to how you are supposed to take notes. They all have a system, an idea of some kind. They tell you that your thoughts are really interconnected bullet points with some strange formatting (Roam Research), or that you can't have a tree node in the middle of another tree node (Org mode), or that ALL your data besides images are text (because Emacs is, for what it's worth, limited to text and images). And on top of that there's no transclusion: very few even attempt it, nobody does it well, and, within those architectures, nobody could.

All these advanced systems have some set of conceptual limitations which the user is forced to navigate within. It works out pretty well for most, up until the point when they try to do something nontrivial, because that's when they run into roadblocks. And, so, the user has to adapt to the realities of the developer's mindset.

It should really be the other way around: the application should be molded to the user's taste and needs. The user should choose how to view and work with the given information at hand.

And when it comes to exposing your data to you directly, it shouldn't be an afterthought, it should be the focus. Not some API or some export function. No: here's the data structure, and here's you, molding everything directly, and viewing it directly, because that's what it basically ought to be.

Kraken makes only one organizational choice on your behalf: the main data structure that represents your notes is a graph of KR schemas.

So, in reality, you don't so much get a note-taking system, as much as you get a text-driven structural environment which you can choose to keep your notes in.

All your data lives somewhere in a graph, and every piece of that data lives in a data structure suitable for storing it.

The top-level structure is a graph of various schemas. That's it, no other assumptions or predispositions. What you store in the slots of those schemas and how you view and how you edit them is up to you and the Mage/Runic/Kraken lens ecosystems.

You visually interface (edit, view, whatever) all of that via Rune.

In practice, that means you can recreate something like Roam-Research just by installing the appropriate lenses and then simply using them as part of your schemas.

What's Kraken?

The purpose of Kraken is to serve as a platform for working with custom types of objects and data in a structural manner, mostly in pursuit of an alternative vision to the conventional note-keeping applications and computational notebooks. I will discuss these two particular directions in terms of their features toward the end of this section.

First, I just want to describe to you what I am aiming for the underlying layer of Kraken to be. It's important that I do this, even though it's a bit technical, in order to showcase the flexibility of the system, that the user is in total control of everything.

So, what I want is a good, structural way to work with a network of knowledge. The existing applications for notes and various notebooks don't cut it at all. They are inflexible and quite infuriating to use.

  • First of all, I want flexible means of organizing information.
  • Second, I want structural means of access to any kind of data.
  • Third, I want both textual and programmatic interfaces to all of it.

The second and third points are what Rune is for. And that's what Kraken essentially is: a collection of useful lenses and configurations for Rune.

So, what's left is to find the organizing principle.

And what do you know, Knowledge Representation is an excellent candidate for this. It's flexible. It allows dynamic ad-hoc modification of data. And it has multi-way constraints for the purposes of automation.

What else could anyone want?

Organizing Knowledge

Let's see how this would work in practice. Let's build ourselves a knowledge base.

(create-schema power-of-structure
  (:is-a kr-note)
  (:title "The Power of Structure")
  (:tags '("seamlessly-structural editing" "power"))
  (:introduction '("Welcome to Project Mage!"
                   "The purpose of this article is [...]"))
  (:table-of-contents nil)
  (:sections (create-schema))
  (:related-links nil)
  (:footnotes nil))

Important: if you have decided to strategically skip the section about Rune and aren't sure what lenses are, you might be thinking that I am proposing that you edit Lisp s-expressions by hand to write notes. Not so! The structure above will be edited via Rune's lenses. Think of lenses as custom editors, which you can place within each other. So, each slot in the schema above will turn into an editor, which will be pieced together with the rest of the editors of other slots. Each slot in the :sections slot will, in turn, also turn into an editor, all of them embedded into its super (parent) right after the table of contents. All the editors are structural and integrate seamlessly with each other. As a result you get a fully text-driven and customizable textual interface (like in any conventional editor), while gaining programmatic access to the underlying (arbitrary) structures (called sources). See the Rune section for more information.

So, Kraken really has just one prototype for all notes: kr-note. You derive all your other notes from it.

We define it as:

(create-schema kr-note
  (:uid nil))

where :uid is a unique identifier, automatically generated for each schema derived from it. There isn't much structure beyond that: each note defines its own slots to identify its structure. So, any particular note can go from as minimal as possible to as complicated as you need.
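For reference, here is how deriving and poking at such a note looks with Garnet KR's standard accessors (the note contents are made up for illustration; the automatically-generated :uid is Kraken's planned behavior, not something stock KR does):

```lisp
;; Derive a note and work with its slots via KR's standard
;; accessors: g-value to read, s-value to write.
(create-schema shopping-list
  (:is-a kr-note)
  (:items '("bread" "tea")))

(g-value shopping-list :items)    ; read a slot
(s-value shopping-list :done nil) ; add a brand-new slot ad hoc
(g-value shopping-list :uid)      ; generated for us by kr-note
```

Note that the :done slot didn't exist until we set it: this dynamic, ad-hoc shaping of schemas is exactly the flexibility being relied upon here.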

There isn't a concept of "the main document" or anything like that: every note is just a note, whether it's part of another note or not. In fact, a note could be a part of a few other notes. This forms a network. (By the way, if you haven't yet, I suggest skimming the Knowledge Representation section for a general overview.)

So, we have derived power-of-structure from kr-note, defined its slots. To actually edit it, we need to put it into a lens as a source. There are different kinds of lenses for different sources, and the one Kraken has for schemas is called kr-lens.

So, this is how the lens for our note from above is going to look34 In case you are wondering why kr-notes aren't just lenses themselves (via inheritance), that's because it would be a leaky abstraction. Editors can hold their data, but they aren't their data.:

(create-schema lens-for-the-power-of-structure
  (:is-a kr-lens)
  (:source power-of-structure)
  (:order '(:title :tags :introduction :table-of-contents
            :sections :related-links :footnotes))
  (:title (create-schema
            (:is-a string-lens)
            (:source (create-constraint ((source :source)
                                         (title :super :source :title))
                       (:= source title)))))
  (:tags #|...|#))

Each slot has a corresponding lens associated with it. We just mirror the slot names: kr-lens will be geared to pick up on this correspondence automatically.

:order defines the order in which these lenses are going to be stacked.

The first lens we define is for the :title. The title is a string, so we assign a string-lens to it. It's not too important how string-lens is defined exactly, but it's going to be the simplest of all kinds: it directly maps its source to what it shows the user. And we set its source by constraining it to the :title of our power-of-structure note.

The :tags would be a list of strings. Say, how would a list lens work? Well, if you would move to a certain representation of a tag (with a cursor), say to power, and delete this word (by whatever usual or unusual editing means, such as maybe keypressing "delete character at cursor" 5 times), the power will be effectively removed from the list. If you now type flexibility, the list will end up with a tag called flexibility (first with f, then with fl and so on, as you type). Maybe autocompletion would pop up, if it were configured to. But we got ourselves a rudimentary tag editor/lens. And the way it's represented on the screen depends on the lens itself.

Well, a somewhat similar story happens for the rest of the slots.

The interesting case is sections. Sections are just a schema of other kr-note objects. The mechanics of this aren't terribly important, but we would use a wildcard constraint to automatically maintain the tree of the section lenses for them.
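As a sketch of that wildcard constraint (the :* wildcard notation and the make-section-lens helper are both made up here; the actual Multi-Garnet syntax may differ):

```lisp
;; Hypothetical: for every section schema in the source, maintain a
;; corresponding section lens in this lens's :sections schema.
(create-constraint ((section :source :sections :*)
                    (lens    :sections :*))
  (:= lens (make-section-lens section)))
```

Add or remove a section in the note, and the matching lens appears or disappears without any manual bookkeeping.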

And what about table-of-contents? It's empty in our power-of-structure note so far. Well, this one is a special one, because it involves transclusion.

Transclusion means: multiple places have the same piece of data.

So, what we can do is generate the table of contents just as well with a wildcard constraint that will pick out all the titles for us. But to achieve transclusion, we will go a step further: we will also put another constraint, for each title, that will declare that every title in the table of contents is equivalent to its corresponding title in the section tree.

This way you can edit the title in your table of contents and the changes will reflect back to the title slot in the appropriate section. And vice versa.

So, this is an interesting point. What is transclusion? It is an equivalence of two pieces of information in two different places. So, we just put a constraint to keep the two slots equivalent. If one changes, so will the other. The constraint in question is a very simple one and looks something like this:

(create-constraint (a b) (:= a b) (:= b a))

And a one-way read-only constraint is simply:

(create-constraint (a b) (:= a b))

And that's transclusion: we do it by controlling the data. Of course, it doesn't have to end with the table of contents: wherever you refer to some title within your text, that can be transcluded just as easily (and, probably, automatically, if the UI is any good).
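Concretely, hooking one table-of-contents entry up to its section title amounts to instantiating that two-way constraint over the two slots (the slot paths below are purely illustrative):

```lisp
;; Illustrative: keep a ToC entry and the corresponding section
;; title equivalent in both directions.
(create-constraint ((toc-title :table-of-contents :entry-3 :title)
                    (sec-title :sections :section-3 :title))
  (:= toc-title sec-title)
  (:= sec-title toc-title))
```

The wildcard constraint mentioned above would be what stamps one of these out per title, so you never write them by hand.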

  • Properties of the System

    So, what are we looking at?

    • A note with an arbitrary structure to our taste. In this case, it forms a tree (via the :sections slot).
    • The tree of lenses for our sections has been generated and placed into our top-level lens lens-for-the-power-of-structure.
    • All the titles in the table of contents are transcluded right in our power-of-structure note. It's supposedly lensed via some kind of a tree lens (maybe just another tree of schemas).
    • The rest of the slots are ordered and lensed as well.

    Well, we got ourselves not only a tree of lenses. We also got a tree of cells: because lenses are cells. That means we can display our lens now and simply start editing our note.

    The way we edit the sections is like with all other lenses: it all depends on how one thinks the user interface for it should be done. But there are no particular limitations, really.

    But one interesting outcome of structural editing is that you probably don't ever need to use markup again (unless you really wanted to, of course, in which case you could program the lenses to do just that). My opinion: markup is not necessary and is yet another dead-weight relic of plain-text editing.

    Now you are free to make up your own rules about what structure your notes possess, and you may have many different kinds of notes.

    More than that, you decide what kind of entities you can work with: and that's pretty much anything you can write a lens for. Be it external databases, custom data structures, some processes you have to communicate with, whatever you want: as long as you can think up a text-driven interface to it.

    Moreover, you are not limited to purely textual interfaces, you can use graphical and interactive cells as well (which you could make behave in a text-like manner too, if you wanted to, like formulas or tables).

    The display of kr-lenses is tree-like, but Kraken schemas form a semantic network. You can share data between notes, and you can define one-way or two-way transclusion between any slots you want.

    And of course, everything has a clear programmatic interface to it. Either for examination or modification or automation via constraints.

    So, what's Kraken supposed to be? I see Kraken as a collection of lenses for knowledge keeping with computational capabilities. The user extends the system via external lenses and configurations to suit his own needs.

    Just as well, I have my own vision of what I want my note system to be, and that's what I am going to build for this project.

Kraken for Taking Notes

I am going to construct a set of lenses to create some hybrid mix of a Roam Research and Org-mode experience.

Approximate feature list:

Nested lists
I am sure you have seen these before.
Tables
Tables they are.
Tags
Ability to tag any piece of visible information, up to a single character, anywhere. Each tag may have its own meta information.
Meta Information
Ability to attach any piece of information to any visible piece of data on screen. The tags just mentioned are just an example of this.
Quotes
I quote or copypaste something all the time, almost always with a link or attribution. I need a standard way to do it. And I don't want the link visible, just stored as meta information that I have copied.
Footnotes
Like tags, but visible on the page, with a jump link.
Images
Well, they are images.
Links
Links are implemented by referencing the unique identifier of each kr-note. Slots are addressed via the identifier + the slot name (or multiple slot names, forming a pointer variable).
Linked references
An idea from Roam Research. It's basically an automatically-constructed list that shows what other sections link to the current note. This, in general, is a pretty cool idea, but I think a sort of suggestion list of mentions or possibly related places would also be nice.
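The link addressing just described might then be represented as little more than a uid plus a slot path (the constructor name and uid below are hypothetical placeholders):

```lisp
;; Hypothetical representation: the uid names the target kr-note,
;; the slot path narrows down to a particular slot within it.
(make-kr-link :uid "note-4f2a"
              :slots '(:sections :introduction :title))
```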

Functions such as searching or sorting will come directly out of the structural organization of all these elements. (Search, for example, being universal, is part of the Runic specification for lenses: search by textual presentation or by a lisp object, structurally, over the whole source.)

But keep in mind that none of the features listed above strong-arm the user into anything. They are just there for crafting your own note-taking workflow; you choose what you want to do.

And, what's more, it's not even about searching for an all-encompassing silver-bullet of a "note" that is universal for all your purposes. Just as there are different kinds of information, so there may be different kinds of kr-notes. Sometimes you may want to have a tree, and sometimes you just have pieces of information that are organized simply by tags. Mix as many systems of organization as you like and have unique ways of working with them all.

Kraken as a Computational Notebook

If you know what computational notebooks are, feel free to skip a bit forward. In the next few paragraphs, I am just thinking aloud.

A computational notebook is a medium for interactive computation where you interweave literate descriptions with computational descriptions and their results.

The primary means of computation is via code block execution.

The utility of a computational notebook seems to be twofold: to experiment in it and to present it to someone else for interaction, learning and further experimentation.

It's a predominantly visual medium in that the results of the computation are oftentimes of a graphical nature, sometimes of the kind with which you can interact (like zooming into a resulting plot).

It's also necessary to emphasize the spirit of constant experimentation. Any given result of a computation may need to be remembered. Or a set of results.

Notebook programming is not a typical kind of programming: there is an overarching concern with data that you either take from somewhere or generate yourself.

Also, interactivity is essential. And so is fault tolerance: you never want to lose your data.

Since you are basically playing around with code, you might find yourself going back and forth between various versions of a code block, or even, perhaps, multiple code trees. You might not know in advance what results are important.

Computational notebooks appear to be the workhorses of the data-science circles. But it's also easy to imagine scientists of other kinds benefiting from these.

So, what do we want?

Snapshots & History
  • Ability to snapshot the state of a notebook, or any part of it. This, of course, includes binary output like images.35 Perhaps some compression or difference tool could be applied to keep the footprint of these down.
  • Basically this forms a version control system.
  • Unlike using a non-integrated version control, you can go back between the states of a single slot.
  • Add timestamps and you got yourself a timeline.
  • This feature is just source control. See Hero.
Merging
  • The ability to merge two notebooks together. This comes down to merging two KR schemas together.
  • Slots also need to be merged. Since they contain arbitrary data, there is a possibility to write custom merging routines for these.
  • This feature is just source control. See Hero.
Direct Data Manipulation
  • Suppose you are working with a CSV file. You might want to see and edit it as a part of your notebook. If there is a lens for CSV editing, you just use that lens.
  • Nothing much to be done here. This is just how Rune works.

Many other desirable properties are taken care of by Rune: full programmatic control, custom workflow. Display things how you want them displayed. Hide things you don't care about.

Other than that, you can use Fern to write cells for interacting with custom types of data. Like building interactive plots.

Source blocks are interesting. You could create a project in the language that you want to use (an outside source code repository), then load it into a Runic IDE (if there's one), and, well, your source blocks are just lenses of that IDE. That's what no-nonsense IDE integration looks like: multiple embeddings of an actual IDE directly within your notebook. And your notebook only has the parts of the code that interest you.

The details of such an integration somewhat depend on the language used. I only plan to integrate Common Lisp when Alchemy CL is done.

Common Lisp is very suitable for computational notebooks, both in terms of interactivity and speed. See Why Common Lisp For This Project for more information about this language.

And I don't see any particular problems for integrating other languages in a similar manner.36 I mean, we can all be snide about Python and what not, but, at the end of the day, people have to work with languages other than the eternal kind, and if it makes someone's life easier, what's there to say? This whole project was basically born out of a pain management effort.

Execution dependencies between various blocks and results may be set by KR constraints. Stay constraints may control automatic re-execution.
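For instance (the block objects and their :result/:input slots are purely illustrative), a dependency between two blocks could be a one-way constraint, with a stay constraint then deciding whether re-execution propagates automatically:

```lisp
;; Illustrative: block-b's input is constrained to block-a's result,
;; so re-running block-a can trigger block-b downstream.
(create-constraint ((input  :block-b :input)
                    (result :block-a :result))
  (:= input result))
```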

Distribution of a notebook to other people should be possible if you are willing to distribute your environment's configuration and custom lenses, if any.

And if you need to build a clean presentation to show someone, an eye-pleasing view of sorts, you could programmatically generate a new lens tree with all the unwanted lenses removed or set invisible, or adjusted otherwise. Mix and match things how you want.

From the architectural standpoint, there isn't really any distinction between Kraken being used for computational notebooks or for note-keeping or for anything in between or beyond. Like notes, computational notebook capabilities will come from custom lenses for various kinds of sources, from the external packages, from your configurations.

And everything takes the form of the structure of how you think of the problem at hand. It will be as simple or as complex as you want, and it's just what you make of it.

And it's all just KR.

Alchemy: A Structural IDE for Lisp-like Languages

The plan is to build a Common Lisp IDE, but make the architecture flexible enough to accommodate other lisps (like, perhaps Scheme or Clojure). The main focus is on trees of parentheses, i.e. syntax trees. But the IDE-building ideas could as well generalize to other languages.

Hopefully, there will appear a common ground for working with lispy live images, e.g. connect, evaluate at point, etc., as a result of this project, so that it will then be easier to adapt any other lisp once Alchemy CL is done.

Alchemy CL: A Structural IDE for Common Lisp

Prerequisite reading: Rune.

First of all, it's important to understand that Alchemy will be a seamlessly-structural editing environment. This is a term I made up, but it is a different kind of structural editing than what SEdit of Interlisp provided in the 1980s. This is not the kind where you have to choose a symbol before you can change it. Here you just use your keyboard, just like you would in Emacs. But an order of magnitude nicer. And all you really lose is your old Emacs config.

I recommend skimming over that Rune section to learn more about lenses and seamless editing.

So, why a structural editor? We have all heard of Paredit and other packages like it. Paredit keeps your parentheses balanced. It's very nice: I have been using it with automatic indentation, and it's awesome. It's awesome to the point that when I see people matching parentheses by hand and then manually calling a binding to indent them, I get bothered by it to no end.

For some reason, I have only started using Paredit on the second try. I hated the thing the first time around. I don't think I got it.

"If you think paredit is not for you then you need to become the kind of person that paredit is for."37 Paredit quote. Also see this nice paredit demo.

And Parinfer is even better.

But I am not really here to tell you about the greatness of Paredit or Parinfer. Because the one thing to realize about structural editing is that it's not just about editing.

And in the case of a programming language, especially the kind that has a living connection to an image or a runtime, it becomes increasingly handy to keep all kinds of information about your structure, and, as we will soon see, for all kinds of purposes.

A structural code tree is not just a tree of atoms and s-exps. It's a tree of objects that represent atoms and s-exps, but each object can contain all kinds of meta information that it needs.

As such, every s-exp and atom gains its own unique identity within the code tree. (This is of general utility, and doesn't just apply to code.)

If you are a Lisper, I suppose you have already bought into the whole meta thing. So, think of this as another level of meta: pretty much every feature I am going to discuss will rely on this mechanism to some degree.

Let's start with the low-hanging fruit.

Project-wide Editing Operations

You should be able to replace all the symbols of a specific package in your project. And other projects, if need be.

Not just some string-match replacement across the files in a local directory. Because try that with comments (which are ambiguous with respect to this operation).

It's a super simple operation and I don't understand how people can claim the greatness of Emacs development for Common Lisp when this is simply missing. It's not Emacs development that is great, it's Lisp development that is. And it's obviously not as good as it could be with the kind of tools we have.

So, say, if you wanted to rename a symbol, an editing operation would just walk each file and each top-level form, renaming the specified symbol (of a certain package) to whatever else you want. It will have options for dealing with comments, and it will have an interactive mode.
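A sketch of that walk over a structural code tree (all of project-files, map-code-nodes, symbol-node-p, node-package, node-name, and rename-node are hypothetical names invented for this illustration):

```lisp
;; Hypothetical sketch: rename OLD (from PACKAGE) to NEW across
;; every top-level form of every file in the project.
(defun rename-symbol-in-project (project package old new)
  (dolist (file (project-files project))
    (map-code-nodes
     (lambda (node)
       (when (and (symbol-node-p node)
                  (eq (node-package node) package)
                  (string= (node-name node) old))
         (rename-node node new)))
     file)))
```

The point is that the walk sees symbol objects with package identity, not character runs, so comments and string contents can be handled deliberately rather than by accident.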


Defpackage Management

Copy-pasting symbols to defpackage is a chore. It shouldn't be.

So, it will be possible to add/remove an exported/imported symbol from/to the local defpackage definition form.

Stale Code Tracking

When a piece of code changes, it would be cool to have a visual indication that it hasn't been recompiled.

Sometimes I accidentally insert a stray character, or even replace one, and wonder where exactly. Forgetting to recompile is a rare one, but it can happen too.

Just a little usability feature.

Proportional Fonts

In Emacs, variable-width fonts for code are problematic due to indentation. I don't think syntax tables are capable of handling this.

With Rune, the lens is fully responsible for the layout, so that's that.

Proportional fonts tend to occupy less horizontal space, here's what the difference might be like:

(:= window-open-p
    (and (> temperature 25)
         (not (eql weather :scorching-sunbath))))
(:= window-open-p
      (and (> temperature 25)
           (not (eql weather :scorching-sunbath))))

Indentation & Coloration

For the most part, Emacs indentation is merely tolerable, but not great by any means. As was duly noted38 This paper, for example, describes an incremental parser, used in Climacs, and its handling of custom reader macros. It also talks about indentation issues in Emacs and makes a point of regular expressions not being good enough. Climacs, however, is not structural, and so, the plain text ills are, naturally, all in the right place: constant parsing, constant caching, unnecessary complexity., there is a point at which regexp trickery meets its limits, and you need to actually understand what you are indenting. That includes communication with the running image; macroexpanding things to check for lexical bindings; seeing what package any given symbol belongs to.

Coloration is just a funny word for syntax highlighting. And it's a lot like indentation in what it requires for correctness.

So. Since we are storing the code in a tree, we can obviously take advantage of its structure.

I plan to develop a language that would easily allow matching trees. Kind of like regular expressions for trees (typical expressions, perhaps?). Of course, it wouldn't have to be as arcane and unusable as regexps are. The syntax would obviously be s-exp-based. It should be possible to transform the subexpressions; bind the matches, replace them, duplicate them; allow predicates; wildcards; etc, etc. It wouldn't work just on s-exps, though, but on any tree-like structures.

So, a single match could produce a tree with information on both indentation and coloration. And the specific matches would override the less-precise ones.
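To give a flavor of it (the pattern syntax below is entirely made up for illustration; no such language exists yet), a rule might bind the pieces of a let form and emit both indentation and coloration facts for them:

```lisp
;; Made-up pattern notation: ?x binds one node, ?xs* binds a sequence.
;; One match on a let form yields both layout and color information.
(define-tree-rule let-form
  :match  (let ((?var ?val) ?more-bindings*) ?body*)
  :indent ((?body* :align-under ?var))   ; body indents under the bindings
  :color  ((let :keyword) (?var :variable)))
```

A more specific rule (say, one for let forms inside a particular macro) would simply override this one where both match.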

Also, I think the resulting tool would be just as useful for macrobuilding, a surprisingly repetitive and tiresome business for even some of the simpler stuff.

Structural Comments & Docstrings

Want your comments to follow a specific format? Lens them! Edit them like you would edit any other document of that nature, right within your code.

For instance, you could decide that you want your comments to be in Org-mode or Markdown.

You know, forget the stupid markup. Here's a little twist for you: in the section on Kraken, I said that you could build a computational notebook by using IDE lenses (like Alchemy) within your Kraken document. The fact of the matter is: you can do the opposite too.

In literate programming, the code is often presented like an afterthought, like a formal summary of your thoughts. However, in practice, most of our comments end up summarizing code.

What I mean is that you can apply a Kraken lens to all your comments in your source code. I think that yields a more realistic approach to literate programming.

This applies to the docstrings as well.

And in case you are wondering: when using KR, when the lens is saved as plain text, the comment would look like a serialized pretty-printed lisp tree. So, an Emacs user, or a user of any other editor, can still understand and modify it.

Editing Strings and Docstrings

When you are editing error messages, there's also this quite annoying thing when you have to manually wrap the line and put a newline separator on each line. There are two approaches and both of them are horrendous:

(defun f ()              |
  ;; <---- 80 cols ----> |
  (let ()                |
    #|a ton of code|#    |
    (error "***** ** \\  |
            ***** ***\\  |
            **** ***"))) |
(defun f ()              |
  ;; <---- 80 cols ----> |
  (let ()                |
    #|a ton of code|#    |
    (error "***** ** \\  |
***** *** **** ***")))   |

And what do you think happens when you change the indentation? Nothing good. Nothing good.

And by the way, you have to insert the end-line markers \\ because otherwise the resulting error message will have strange newlines.

Perhaps the enlightened way is to store the error in a global variable. Except, you would still have to insert the \\ and relocate them every time you edit it. Great, right?

Just as well, I have seen some people relegate all docstrings to a separate source file in the repo. You can't make this stuff up.

So, this is a lot of work for just writing a string. Lensing strings within code could be a pretty easy way out.


Tagging & Precise Comments

Prepending a function with some comment is nice.

But what if you need to comment an expression or a symbol?

And, perhaps, tag that as TODO while you are at it?

Well, all those can be the meta info of those code objects. You can search and filter the tags, or edit them in-place.

As for the comments, those end up being non-intrusive, but very precise. You could either mark short expressions or symbols or even whole functions or groups of functions.


Autocompletion

One source of annoyance for me has been the missing autocompletion for symbols in the code you haven't compiled yet. This happens all the time when you are writing new code and introducing local variables.

Since you always have access to the latest up-to-date code structure, this problem is easy to fix.

Ant-driven REPL

The REPL will be Ant-based (with the input field being an Alchemy lens).

Project Scaffolding

Creating new projects can be a chore. See scaffolding.

Test Transclusion

A wonderful thing about Common Lisp development is how by the time you are done writing a function, you have ended up with a few test cases.

But all that test code can really bloat up your file.

Even then, to a degree, it's sad when you have to pile them all away into a separate file (sometimes into a separate project to avoid a test-framework dependency). Now you have to jump there and back when you start modifying that function again.

Rune can help us here: you can transclude the relevant test code on a whim.

Basically, if your test names correspond to the names of the tested functions, then, whenever necessary, you can ask the system to programmatically find the relevant test and transclude it right after the function. The lens responsible for the test is just another Alchemy lens that sources the test file and narrows down to the necessary slot holding the said test(s).
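A sketch of that lookup (find-test-by-name, make-alchemy-lens, and transclude-after are hypothetical helpers for this illustration):

```lisp
;; Hypothetical: locate the test named after FUNCTION-NAME in the
;; project's test file and transclude its lens after the function.
(defun transclude-tests-for (function-name test-file)
  (let ((test (find-test-by-name test-file function-name)))
    (when test
      (transclude-after function-name (make-alchemy-lens test)))))
```

Since the transcluded lens sources the test file itself, edits made inline are edits to the real tests; dismissing the lens changes nothing on disk.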

Right-Hand Side (RHS)

Now, here is something that's really exciting to me. I have explained it in another article: Overcoming the Print Statement.

There's one thing to add to that discussion.

So, I have already mentioned how you can treat your comments as embedded documents and how you get a sort of literate program in reverse.

Now, you could go further. The RHS (defined in the said article) holds your output, results, visualizations, history and what not; it's meant to be fully interactive; and it can, in fact, be represented by a Kraken document. As a result, you get a kind of computational notebook, but the kind where the code is placed at the forefront.

I am not saying it should be called a notebook (and by no means is it one), and neither is it a canonical form of literate programming. But, for what it's worth, it's amusing to think in those terms, at least as far as assessing the interactive value of the result.


Please see the section on sane defaults: bindings are defaults, and so aren't a part of the core package.

However, I like modal editing, as in Vim, and will provide a certain kind of emulation for it. The CUA kind as well.

I never learned Emacs bindings, because I think chords are ridiculous. That shouldn't stop anyone from making them work, of course.

So, bindings will be packaged as configurations which you can apply to your lenses.

Type Annotations & Declaration Hiding

Declamations and type annotations, whether regular or done via the serapeum:-> macro, tend to take up a lot of space. These can instead be mapped onto the argument list of a given function. Inline declamations are just noise swamping all the other code: like the argument-list declaration, they can be viewed as meta info attached to some other piece of code, a function name in this case. Color-coding could help indicate that a certain function has a certain declamation attached to it.

Handling of Reader Macros

Amidst all this happy sunshine and rejoicing, a little dark edge of a rain cloud might have suddenly crossed the blue energy at the horizons of your otherwise peaceful mind: what about the reader macros?

A reader macro uses a user-defined function to parse as many characters as it requires, so, in general, it's not possible to predict when it will halt. This is, again, problematic in plain-text editors.38 (This paper, for example, describes an incremental parser, used in Climacs, and its handling of custom reader macros. It also talks about indentation issues in Emacs and makes the point that regular expressions are not good enough. Climacs, however, is not structural, and so the plain-text ills are, naturally, all in the right place: constant parsing, constant caching, unnecessary complexity.)

The issue is not bad, though, if you were to approach it from the structural point of view with this one simple trick:

Limit the reader macro to the extent of its own node in the tree.

After reading the source file, it will be clear what the extent of every form and atom is, including reader macros. Upon modification of a reader macro, we can assume that the extent doesn't suddenly need to expand to the next tree node. We assume the user changes are local to the node to which the reader macro belongs.

That way, you don't have to feed the macro the rest of the code in the source file.

Creating a new reader macro works by feeding it the rest of the expression until the read succeeds, and then we know the extent.
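Here is a rough sketch of that policy (illustrative Python; all names are invented): the reader is handed only the text of its own node, and the result tells the lens whether to keep waiting for input.

```python
def reread_node(read_fn, node_text):
    # Feed the reader only this node's text; never let it run into
    # neighboring nodes.
    try:
        value, consumed = read_fn(node_text)
    except ValueError:
        return ("incomplete", None)  # wait for further user input
    if consumed < len(node_text):
        return ("stopped-early", value)  # remainder stays ordinary structure
    return ("ok", value)

def toy_reader(text):
    # Toy reader macro: "#[" ... "]", consuming up to the matching "]".
    if not text.startswith("#[") or "]" not in text:
        raise ValueError("incomplete")
    end = text.index("]")
    return text[2:end], end + 1

print(reread_node(toy_reader, "#[abc]"))  # ('ok', 'abc')
print(reread_node(toy_reader, "#[abc"))   # ('incomplete', None)
```

The point is that an unfinished reader macro never forces a re-read of the rest of the file; its confusion stays confined to its own node.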

One of the interesting uses of reader macros in lisp is to define any custom syntax to "emulate" some other language. This is just another example of structural embedding, which means, if you can write a lens for that sublanguage, you can use that lens to edit that code right within your CL code.

It doesn't have to be a reader macro for you to embed a linguistic lens, though. I think I have seen some Lisp code that has a macro with assembler-like code. So, if you had an editor for assembler, you could just lens that macro.

Structural Correctness At All Times


A Stray Parenthesis?

A structural editor obviously keeps your parentheses balanced and in perfect order.

But it doesn't have to.

Contrary to common expectations, inserting a stray parenthesis (one that doesn't have a matching pair) is not problematic in the slightest. The lens can simply recognize it as such and keep track of it until the user places the other half.

It's a bit like the situation with reader macros: you can assume, with a fairly good degree of confidence, that the user doesn't want to destroy the structure around the ambiguous input.

Same goes for stray quotation marks ("). For instance, you could limit their extent to the expression they belong to, or, yet again, to the node they have created in that expression.
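As a sketch of this tolerance (illustrative Python; the names and token shapes are invented): a parse that records a stray closing parenthesis as an explicit tracked node instead of failing.

```python
def tolerant_parse(tokens):
    # Build a tree; a ")" with no matching "(" becomes a tracked
    # ("stray", ")") node instead of an error.
    root, stack, strays = [], [], []
    current = root
    for i, tok in enumerate(tokens):
        if tok == "(":
            node = []
            current.append(node)
            stack.append(current)
            current = node
        elif tok == ")":
            if stack:
                current = stack.pop()
            else:
                strays.append(i)
                current.append(("stray", ")"))
        else:
            current.append(tok)
    return root, strays

tree, strays = tolerant_parse(["(", "a", ")", ")"])
print(tree, strays)  # [['a'], ('stray', ')')] [3]
```

The lens keeps the stray around until the user types its matching half, at which point the pair becomes an ordinary node.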

A Bret Victor Type of Deal

Lenses are cells, and so they render themselves. This is interesting because it allows you to have non-textual means of interaction.

See Bret Victor's presentation Inventing on Principle for some ideas that apply. For example, note how a number may be increased or decreased simply by dragging it. Or how visual objects map back to his source code.

Inventing on Principle is a great talk, and I am glad I can wrap this section up with a reference to something as nice.

Ant: Ant is Not a Terminal

Why are people still using terminals? They are fast, have a simple interface and are universally available.

Why are people still using terminals? They are legacy-ridden piles of mess, with bad conventions, and low capabilities of either customization or interactivity or anything at all.

On top of that, they all expect you to use some horrific version of a file-shuffling language (like sh). Suffice it to say, these languages are only suitable for quickly calling some program that you need right now.

You also have to use horrendous hacks to manipulate textual information on the screen. Let me not even mention graphics.

Why can't we order information or have even rudimentary graphics, you ask?

Well, you know, because it's illegal and everything.

Because puppies would die if you did that, because you would start World War III.

It's not like we live in the 21st century, you know. We are somewhere in the dark ages right now.

Look: for every attempt to move forward, we have some piece of mob-approved tech and an army of C-zealots placing a torch under anything that has a trace of sanity in it.

What is this terminal emulation frenzy? Where did it come from?

Somewhere along the line of technological thinking, in the social context, an idea was born that things must be as fast as they could possibly be, and forget about usability.

I think I might know why that is: because you would usually know what simple and fast is for a given problem. However, it's not at all clear what usability implies and how to accomplish it for everybody.

Everybody knows about pretty looks and about memory footprint, but usability? That's uncharted territory. Why? Because at the core of usability lies extensibility, and that's black magic, motherfucker.

Except it isn't, of course. Not unless the best you have seen for configuration is some .conf file in /etc.

What's more, if you run just one relatively-large image, each task that runs on it will be more memory-efficient relative to something that starts from scratch on the OS. So, what is this?

What is the true source of such bullshit? Where does this fake minimalism originate?

And, to be perfectly clear, the experience still sucks. Terminals aren't even that good at doing what they are supposed to be doing! And that is providing a good command-line interface.

So, that's pretty bad!

In the end, all that leaves for an excuse is legacy. But one thing to recognize about legacy is: it's happening as we speak. And it will keep on growing until there's an alternative.

The worst thing about the terminal experience, though, is piping. Passing information around. Pretending like programs need to manually parse string output to understand each other. Pretending like this is some good, powerful thing.

The Unix-way.

The good part of the Unix-way is the idea of many tools doing their job well (specialized modularity, essentially, which I am all for). The fact of ad-hoc formatted byte-hosing is not the good part. And neither is the fact of unstructured output.

Good old terminal emulators with their precious shell languages are not simply unsatisfactory, they are downright hideous. (Not quite string-based-text-editor-level-hideous, but they are still up there.)

It might make you feel like a 1337 hAxXoR to type out a long command in a minimalistic black window. Or to scroll man pages with less. But the fact of it is, you are just doing microscale programming and you are doing it wrong.

What the experience should be like is that of an interactive programming environment with a REPL.

And that's Ant: Ant is Not a Terminal.

Inspectability, error-handling, structured information. And that's just the floor, not the ceiling.

At your eye-level is the UI that's capable of presenting the output via custom cells or lenses, all interactive.

And the input, the command-line part, is simply a lensed IDE. For myself, I will be integrating Alchemy. (Otherwise, Ant is agnostic as to what input language you use.)

The basic model is that there's a history tree of inputs, with its nodes corresponding to outputs at different execution runs.

Some capabilities include:

  • Asynchronous execution (so one command doesn't prevent you from starting another command).
  • Timeline.
  • Reuse/reference results from other executions.
  • Real IDE autocompletion, not some string matching.
  • Stuff like directory-level command-history.
  • An OS-level integration layer.

Again, even though the input field could be anything, my goal is to integrate Alchemy: just reuse the IDE and be done!

Right away, let's clear something up: doing something like (ls) instead of ls is lame.

So, there will be shorthands. Shorthands are basically for doing simple things fast, like getting into some directory or listing files or calling a program without the need to use parentheses (even though that would have been just one keystroke longer, that's one keystroke more than desired for repetitive tasks).

It's expected that stuff like ls and cd should work without trouble. In addition, a list of all available programs will be fetched and added to a special package where all shorthands are listed. The rule would simply be: if a shorthand is encountered without an opening parenthesis, the rest of the input up to a delimiter is treated as input to the referred program.
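The rule can be sketched like so (Python, purely illustrative; the real reader would live in Ant/Alchemy, and `read_input` and `KNOWN_PROGRAMS` are made-up names):

```python
import shlex

KNOWN_PROGRAMS = {"ls", "cd", "grep"}  # would really be fetched from PATH

def read_input(line):
    stripped = line.lstrip()
    if stripped.startswith("("):
        return ("lisp", stripped)  # ordinary Lisp input
    head, _, rest = stripped.partition(" ")
    if head in KNOWN_PROGRAMS:
        # Shorthand: the rest of the line goes to the program as arguments.
        return ("program", head, shlex.split(rest))
    return ("lisp", stripped)

print(read_input("ls -l /tmp"))  # ('program', 'ls', ['-l', '/tmp'])
print(read_input("(ls)"))        # ('lisp', '(ls)')
```

So `ls -l` works bare, while anything parenthesized falls through to the full language.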

So, some work will go into replicating a few commonly-used file-system-related commands, always ensuring structural results.

Thinking about this further, this OS integration layer could really act as a staging area for file-system operations. So, accidentally messing up your files (when bulk-renaming, for example) would be difficult. But that's beyond the scope of the campaign.

But still, the structural approach can prevent many accidents, as you can inspect intermediate values and display them as you work, or preview the intended result of your immediate action.

The good thing about terminals is the experience of a tight interface which you can get reasonably efficient at.

The Unix philosophy lets you combine things in any manner you like, and that is, indeed, a power. But as I will keep saying: if you can't wield the power, it's not really a power.

The interfaces of Unix-like utilities are arcane. The way they talk to each other is ridiculous. Forget about interactivity. Forget about inspection.

So, instead of a terminal emulator, it makes more sense to be using capable programming languages with truly interactive environments.

And Ant will provide just that. And more: it can be used as a REPL interface to anything that needs it (for example, development environments).

Hero: A Decentralized Version-Controlled Tracker of Development Knowledge

Are bug trackers bad?

When you put the question like that, the answer is probably: they are okay: a bug tracker tracks bugs. What else do you want? Some trackers even have a half-decent interface (as far as the Web goes, that is).

So, the better question is: are bug trackers what we really need?

The Issue with Issues

I mean, don't you find the laser focus on bugs out of hand?

And it's not a groundless observation.

I have seen developers turn software maintenance into a game of "0 open bug reports". Well, you know, because bugs are bad.

And that's a very vivid way of getting to "let's have 0 discussions".

Some projects even go as far as to place some arbitrary time limit before an issue is automatically closed.

Those are extremes, sure. But, somehow, bug trackers cultivate and even assume some variation of that general attitude. Maybe it's in the design, maybe it's the gamification aspects, maybe it's just that the whole section where discussions are supposed to happen is called issues, and each one looks like a waterfall of words before someone has to hit that close button at the bottom of the page.

I don't really know what it is exactly, but it's there, and it's broken.

Instead, go to any mailing list and see what they are up to. Many threads are open-ended and don't even have a particular goal other than to talk about something.

It's a breath of fresh air.

And many of those threads never lead anywhere, yes, and that's fine: they don't have to. But I think this exact kind of communication ought to happen, because it's just necessary for the development process. And so, the design needs to reflect that, not counteract it.

Most bug trackers are also not the kind of place where you can just quickly ask something. Nope, go to our IRC/web forum/some external chat application.

Sure, you can ask a question on the tracker itself, and you will even get an answer, but it's not the expected place to ask those questions. It's just not in the name.

So, alright, you have to have an external chat and the bug trackers aren't geared for meandering random discussions. Is that all?

Not even close.

The project news, home pages, documentation and wikis oftentimes all reside on platforms separate from the bug tracker. So, you will go to a chat to ask for some quick advice, then go to the project's website to read the docs, and then end up on yet another platform to file a bug. And in your editor to browse the code.

So, development gets fractured across multiple applications, and not even for a technically-sound reason, and you get no integration as a result.

I mean, how about selecting a piece of a conversation and linking it to the knowledge base of the software? Just as a registered user of the system?

Do we really need to tread through a jungle of web "apps" to accomplish anything as a group?

Virtually all of these require a web browser. You can't even modify the interface and often need to use a mouse to do anything.

What the hell?

And, yeah, the issues are still centralized. The code is decentralized and distributed, but the issues and conversations belong to various central entities.


And how did we get to the point where developing software demands A WEB-BROWSER?

And why can't we have it all in one place?

One Place

Am I asking for a chat, and a forum, and a wiki, and a bug tracker, and code, and an API reference to be all in the same place?

No: I need them to be that one place.

All of it is just information with various structure.

I am asking for a unified knowledge system for all of it.

Data-Supplied KR

Suppose we were to store all the project-related information in a Knowledge-Representation network, and use Kraken to work with it. What does that mean?

And what will the network contain? The way I see it is:

  • Conversations. These subsume bug reports and all kinds of discussions.
  • Notes. These subsume wiki-type information and documentation. Design docs, goals, whatever.
  • Code. The project store could either store the code itself or store references to an outside code repository. (The first option gives great capabilities and high integration.) But also code snippets that people could share for configuration purposes.

Everything is kept within kr-notes, each with a structure of its own, reflecting its purpose and content.
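Purely as a hypothetical shape (Python for illustration; every name here is invented, not actual Hero design), the network might look like tagged notes with links into each other:

```python
from dataclasses import dataclass, field

@dataclass
class KrNote:
    kind: str        # "conversation" | "note" | "code" ...
    content: dict    # structure depends on the kind
    links: list = field(default_factory=list)  # ids of related notes

store = {
    "bug-42": KrNote("conversation", {"title": "Crash on resize", "posts": []}),
    "doc-ui": KrNote("note", {"title": "UI design"}, links=["bug-42"]),
}
# The "tracker", the "wiki" and the docs are all just notes in one network.
print(store["doc-ui"].links)  # ['bug-42']
```

The point of the sketch: there is no separate forum, wiki, or tracker application, only notes of different kinds in one linked structure.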

To work with these notes, the project might assume a certain set of lenses that the store is compatible with. And it's important to understand: every project is in control of its structure and of the lenses and packages it requires. If you get such a project as a user, you will be expected to install these requirements.

We want the project store to be distributed in nature. To that end, the server for it should implement a few data-related operations. See the Data-Supplied Web as a Model of Free Web article for more info.

So, we adopt a data-supplied model for exposing a project to the outside world.

Ditch the web browser.

What we are going to distribute is data. (Note that even though transclusion requires constraints and is, basically, executable code, we can still allow it as a part of the data model.)

Think of Hero as a distributed version-control system (DVCS, like Git) repo, but also for everything other than code, the server part included. You can request partial data and subscribe to changes (again, of just the parts of the network that interest you).

A system of authorized access is expected to be in place (a part of the data-supplied protocol). So the users, for example, could only update their own comments and not someone else's.

This yields a self-contained highly-integrated distributed system, free of any external services.

Notably, all the meta information about the project, such as what packages are expected to be installed, or which server is used, is all supplied as well. So, the user is in full control of the code necessary to work with the network, and is free to supply the store himself.

Version Control

So, obviously, for this to work, a project store must itself be a version-control system.

Merging should be easy, visual and automatable. And easy.

In any case, I don't think making a UI worse than Git's would be humanly possible. Isn't It Obvious That C Programmers Wrote Git?

The high-level merging routine would be pretty much the same for any kind of kr-note, whether code or not. I imagine a visual interface for multiple (directed, acyclic) history graphs. (You wouldn't have to do anything visually, though.)

There you would have a sort of a draft that you can finalize/accept/commit. Again, such commits could be partial/localized.

Since everything is timestamped, it should be possible to do a timeline of your history, so you can easily browse any state of the system at any point in time. A timeline undo type of thing. This could be useful for permanent rollbacks too (with an automatic snapshot of the lost information saved to an out-of-store place in case you change your mind).

Code can be stored semantically. You could go to the IDE and scroll through the history of some function in-place.

Generally, integrating version control with IDE could prove pretty useful.

Say, it should be possible to point to anything in your code and ask where that came from: the author, the commit that brought it, the latest time of modification. Or ask when and where a certain symbol was first introduced.

Note that the alternative of using an external code repository still exists. So does the possibility of contributions via plain-text modification. These would require some mapping, though. (Such mappings are beyond the scope of the campaign.)


  • Hero is based on Knowledge Representation and will provide a distributed version control system accessible via supplied web.
  • Unlike usual version control systems, in a Hero-based project store, you can still work with things structurally and semantically. For instance, I imagine one could program compression and storage rules depending on the kind of data at hand (so, not just code).
  • It's worth emphasizing that every project defines its own project structure, its own way of managing it. It holds meta information about what the user should be using to work with the project store (like cells/lenses/configurations). The user is expected to install these (or their alternatives). That's harder than a unified approach, but grants the developer the ultimate freedom to run the project however they like. And yet, the contributor is still in full control over their own workflow.
  • With authorization, you can limit any given user to certain parts of the network.
  • You could write automated merging routines for things like chat.
  • A very high level of integration and access to many flexible tools: from Fern's cells to Runic lenses to Kraken notes. Interplay with Alchemy will be especially interesting.

The goal is to develop:

  • history-handling tools for version control (such as workflows for merging; visual UI), and
  • a data-supplying server (for KR).

Hero will run without a web browser, it won't be GitHub, it will be hackable. And yeah, it won't have many features. It will be, probably, simple as a rock, featurewise. At first, anyhow. And that will be enough.

An Ecosystem

This project and the campaign aren't just about writing a few applications.

It's about creating an ecosystem of text-driven and graphical programs that may interoperate and be reused within one another. An ecosystem of lenses and cells.

And, so, I think of Mage as a platform. To that purpose, by the end of the campaign, I plan to open a separate repository (likely via Ultralisp) for the packages based on Mage.

Does This Project Aim to Replace Emacs?

Yes and no.

It's important to remember that Emacs is already good enough at being Emacs, and that means structureless, string-oriented text editing. I do not wish to participate in that paradigm; I simply see that style of interaction as something that will, and should, go out of use at some point. And that includes all contemporary text editors, not just Emacs. There's no point in replicating any of them, not even the most powerful one. Moreover, replication is not simply a non-goal, it's an anti-goal.

I don't want a mere replacement, but something that encompasses a whole swath of power-user applications. It would be unfair of me, to all those other fine projects I have listed, to claim that the main target is Emacs.

For my personal needs (which are note-taking and CL development), the project should indeed displace Emacs soon enough. And, adopting a bird's-eye view: if this project works out, I believe we will indeed have a very sensible alternative to Emacs on our hands.

Obviously, it will take time and the active involvement of the community to enrich the ecosystem with the applications commonly expected from Emacs. A 1-to-1 rewrite of packages would not be wise, however; we can do much better than that.

And I believe that interest from the community, especially from power users, is very possible and probable. I wouldn't have created this website if I didn't believe that.


So, my personal preference for my environment is to be fully keyboard-driven.

I also use Vim bindings, and I will replicate those for my own workflow (which means comprehensively enough, but probably not everything; obviously, there should be no problem with being fully comprehensive if one wished to be).

Note that these preferences really have no bearing on the platform as a whole. They simply get packaged into configurations and get distributed via packages that a user can install and apply to his own cells or lenses.

How Realistic is the Time Estimate?

Some people might say that the 5-year estimate is pretty sus. And I don't think time estimates are ever realistic either.

I mean, if you look at it from the perspective of sheer count, you get: an interface toolkit that is an advanced tree-manipulator with graphical capabilities; something that looks like an editor platform but is really just a specification; a terminal which is not a terminal; and a project version-control tracker based on what seems to be a notes application! And don't forget the IDE.

But none of these are built standalone. There is a lot of reuse and composition that will take place (e.g. a lens written for Rune can be integrated into any Runic document).

Fern? Even though it still needs quite a bit of work, the most important parts (KR and constraints) are already in place. Linear algebra and geometry are there as well. I have already honed an SDL2 FFI39 (the project currently lives as a fork here; note that it requires this (rather trivial) pull request accepted in CFFI, so use my fork of that, if you are interested), and it sets a decent stage for writing a graphical backend.40 (Note that SDL is just a backend and is never exposed directly to the user of Fern; it can be swapped for another platform-specific backend, as long as the necessary API requirements are fulfilled, and Fern never assumes any platform- or backend-specific changes to itself, unless, of course, they are configurations applied directly by the user.)

Rune is going to be mostly a specification for lenses. And note that any function built on top of that specification will automatically work for any lens.

Rune and Kraken will be developed simultaneously. Alchemy might inform the design of Rune even further.

Speaking of Alchemy, there is a portable reader for Common Lisp called Eclector and it should make importing plain-text CL code a breeze. In addition, Slime, at the very least, gives an example of the communication line between the image and the development environment (and then there's also the client for CL).

I am not saying it's all going to be a walk in the park, but I expect a lot of reuse due to the possibilities of embedding and integration. This is, perhaps, most apparent in things like Alchemy being the input field of Ant. And stuff like this is going to be all over the place. It's not quite plug'n'play, but it comes pretty close, and that will simplify much of the development effort.

This brings me back to: power. Because that's the philosophy of what's being developed and how.

Power > Features.

It's easy to get bogged down in the detail. What I think is true is simple: power is what's really important.

When you don't have a feature, but have a power, you can build that feature.

The focus of my development is to provide the user with power.

So, I will heavily prioritize the construction of the building blocks that allow features; the features I build may often be rudimentary, proof-of-concept ones, but they will be open to growth and maturation.

And once the power is there, so will be the features.

Power ensures sustainability, and sustainability drives growth. As such, features will be kept out of the core libraries and platforms and placed in separate packages instead.

But, like I said, I am also building this project for myself, and not just for the community. And so, naturally, I will want the applications to be usable to myself right away.

In any case, I think the time estimate will work out just fine.

Documentation Goals

All too many software projects trickle into obscurity because no one understands how to use them or even what's so special about them. I want to ensure that every user will have an easy time getting the project to run and then, hopefully, learn to use and extend it.

Obviously, the documentation will be live.

The status quo for software manuals is to tell you how something works and then show you a dead picture of it, and then maybe throw a pile of function signatures at you.

But a respectable interactive platform can only have interactive tutorials and examples, guiding the user as he is already working within the system. The integrated, structured nature of the project ought to help with that.


It's important to think about issues such as accessibility in advance.

I am basing my knowledge about this issue on this article about Emacspeak.

So, Rune will ask lenses to implement surface-level text commands. Whatever text is shown or assumed to be on screen, the user will be able to copy programmatically. So, you could ask a spec-conformant lens for things like "the surface-level text of the semantic unit at point". Such functions are of general utility.

Next, Runic functions will be required to return some information about the results of their semantic actions. So, if you issue a delete-semantic-unit command, it will return what it has removed (and that would include the info about the surface level text as well as the data-structural unit itself).

The list of such functions and their return-value properties will all be a part of the spec. So, it will be possible to automatically advise all these spec functions, for any given lens, to invoke the speech functionality.
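A minimal sketch of that advice mechanism (illustrative Python; `delete_semantic_unit`, `with_speech` and the return-value shape are assumptions, not the actual spec):

```python
def delete_semantic_unit(lens, point):
    # A spec-level command: besides acting, it reports what it did,
    # surface text included.
    unit = lens["units"].pop(point)
    return {"action": "delete", "surface-text": unit, "unit": unit}

def with_speech(command, speak):
    # Generic advice: route any command's report to a speech function.
    def advised(*args):
        result = command(*args)
        speak(f'{result["action"]}: {result["surface-text"]}')
        return result
    return advised

spoken = []
cmd = with_speech(delete_semantic_unit, spoken.append)
cmd({"units": ["(+ 1 2)"]}, 0)
print(spoken)  # ['delete: (+ 1 2)']
```

Because every spec function reports in the same shape, one generic piece of advice covers every conformant lens.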

Target Platforms

Mage will run wherever Common Lisp and a graphical backend are available.

There are a few CL compilers out there. For example, SBCL runs on Linux, various BSDs, macOS, Solaris, and Windows.

And ECL has been known to compile to Android and iOS.

SDL, which is the backend I plan to implement, officially supports Windows, macOS, Linux, iOS, and Android.40

It's a Process

I don't want to create the impression that everything is crystal clear to me about how this project will turn out. It's more like… emerald dark. Much of the design is yet to take shape and be refined.

The only thing I am certain about is that this is worth doing.

So, if you have some interesting ideas or concerns, don't hesitate to send me an e-mail or write to the mailing list group (see the Contacts page).

Why This Project?

Because it's the right way to do things, damn it!

And, frankly, I am just tired of all the mess.

I think it all started when I was debugging a network application, and I found it unreasonably hard to use print statements to order and analyze packets the way I wanted.

Then came Linux and Vim. And, by god, soon enough I was using Emacs, Org-mode and Sly (for programming Common Lisp).

Let me tell you this again and again: it's a mess.

It's not gonna do. At all. We need something much better.

And it's not like tools are the be-all and end-all. But I simply feel like this has to be done.

There's a huge hole out there that is screaming:

Lack of structure, life and integration; absence of systems-thinking.

And that hole is the reason why so much of our interaction with our applications is so mediocre.

And, yes, tool-building has been a well-known path to avoiding the real problems at hand, but, I think, not this time.

Sometimes, we simply need better tools.


See ~project-mage on

Also, check out the Wiki.

Campaign Details

See the Campaign page.


All the developed software will be free, as in beer and as in freedom, licensed under LGPL-3.0-or-later.



See the home page here. The latest Garnet repository may be found here.


A notable exception on some implementations of Common Lisp is trying to change struct definitions that have live instantiations, in which case the behavior is typically undefined and a warning is issued – all due to the optimizations that apply to structs.


The only reason for this seems to be the assignment of copyright to FSF.


And even that might not be forever. CL compilers are getting ever more sophisticated and modular (see SICL, for instance), and, possibly, one day, (some) compiler internals might become user-space hackable just as well as anything else, by the means of one of those defacto standards of the future. We don't know the limits yet.


For an extensive overview of kr, see KR: an Efficient Knowledge Representation System by Dario Giuse. Also, Garnet maintains its own version of KR.1


Note that not all code below will work if you want to test it with Garnet's kr. I have changed some syntax for readability.


I can't seem to find the source of this quote.


Multi-Garnet is a project that integrates the SkyBlue constraint solver with Garnet. For a technical report, see: Multi-Garnet: Integrating Multi-Way Constraints with Garnet by Michael Sannella and Alan Borning.
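To give a feel for what "multi-way" means, here is a minimal hand-rolled sketch in plain Common Lisp, in the spirit of Multi-Garnet/SkyBlue but using none of their actual API: a single constraint between two values carries one method per direction, and the solver (here: us, by hand) runs whichever direction satisfies the latest assignment. The temperature example is the classic one from the constraint literature.

```lisp
;; A two-way constraint between celsius and fahrenheit, sketched
;; by hand.  Names and structure are illustrative, not Multi-Garnet's.
(defstruct cell value)

(defun link-temperature (celsius fahrenheit)
  ;; Two methods for the *same* constraint; a real solver like
  ;; SkyBlue picks a direction automatically based on strengths.
  (list :c->f (lambda ()
                (setf (cell-value fahrenheit)
                      (+ 32 (* 9/5 (cell-value celsius)))))
        :f->c (lambda ()
                (setf (cell-value celsius)
                      (* 5/9 (- (cell-value fahrenheit) 32))))))

(let* ((c (make-cell :value 0))
       (f (make-cell :value 32))
       (methods (link-temperature c f)))
  ;; Set celsius, solve forward: fahrenheit becomes 212.
  (setf (cell-value c) 100)
  (funcall (getf methods :c->f))
  ;; Set fahrenheit, solve the other way: celsius becomes 0.
  (setf (cell-value f) 32)
  (funcall (getf methods :f->c)))
```

A one-way formula system (like Garnet's original formulas) would only ever run the first of these two methods; the point of Multi-Garnet is that either end may be assigned and the constraint re-solves in whichever direction is needed.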


Please note that the code below will not work if you want to test it with Garnet's multi-garnet: I have changed some syntax for readability. Also, Multi-Garnet integrated with Garnet formulas, with the goal of replacing them. Garnet formulas are done away with altogether in Fern.


Arbitrary access for other data structures hasn't been implemented.


Even set-value has a strength value associated with it (:strong by default). This doesn't play a role in the example, though.


This could be achieved by adding and removing a high-strength stay constraint upon access. However, the values that depend on it might also need to be recomputed upon access. Marking a slot as lazy would involve embedding information about it into the constraint system for the values that depend on it. It shouldn't be too hairy, but it's not a walk in the park either. I will probably skip this feature.


The syntax should be extensible to containers other than schemas, and should allow either predicates or filters, either values or pointers, and multiple arbitrary pickers/wildcards along the path.


The :is-a slot is constrained to have a cell appended to it (even if the user supplies their own :is-a). The constraint may, of course, be removed if undesired.


This is simply done via slot-removal/addition hooks, available to the user.


Not that anyone would want such a thing.


Or, rather, an abstraction of a texture over some real GPU texture.


It's important to note, though, that using the April compiler directly with multiple operations in a row may generate better-optimized code.


Manipulation is meant to be functional, and all slots are read-only.


For comfort and speed, I will likely be using polymorphic-functions for this.


Or, if you lose the marketing speak, this just means that pixels map back to objects, which map back to code, which may, in turn, be associated with some documentation.


Such as s-exps or, Stallman save us all, XML.


As has been duly noted, Unix pipes are like hoses for bytes.


Emacs people like to call their strings buffers, but that's but a trick of the tongue: a buffer is really no different than a string with some confetti flying around.


Emphasis mine.


That's: Issue Number One Hundred Twenty Eight Thousand Four Hundred Sixty Five. Jesus.


Yeah, O(log(N)), biatch!


I discuss non-seamless structural editors and language workbenches in All Else Is Not Enough.


That may sound ridiculous, and it's not what people usually think of when they hear about plain text, but plain text editing is The Original WYSIWYG: you literally get what you see.


Transclusion is the act of embedding the same piece of information into multiple places. See this Wikipedia article for more information.


This article mentions some other pitfalls of conventional scaffolding tools (e.g. comment preservation), if you are interested.


In case you are wondering why kr-notes aren't just lenses themselves (via inheritance), that's because it would be a leaky abstraction. Editors can hold their data, but they aren't their data.


Perhaps, some compression or difference tool could be applied to keep the footprint on these down.


I mean, we can all be snide about Python and what not, but, at the end of the day, people have to work with languages other than the eternal kind, and if it makes someone's life easier, what's there to say? This whole project was basically born out of a pain management effort.


Paredit quote. Also see this nice paredit demo.


This paper, for example, describes an incremental parser, used in Climacs, and its handling of custom reader macros. It also talks about indentation issues in Emacs and makes a point of regular expressions not being good enough. Climacs, however, is not structural, and so, the plain text ills are, naturally, all in the right place: constant parsing, constant caching, unnecessary complexity.


The project currently lives as a fork here. Note that it requires this (rather trivial) pull request accepted in CFFI, so use my fork of that, if you are interested.


Note that SDL is just a backend and is never exposed directly to the user of Fern; it can be swapped for another platform-specific backend, as long as the necessary API requirements are fulfilled. Fern never assumes any platform- or backend-specific changes to itself, unless, of course, they are configurations applied directly by the user.

...proudly created, delivered and presented to you by: some-mthfka. Mthfka!