c5dde50
committed
Files changed (1)

+417 −415 doc/cabra.html
as several of the same issues are dealt with there. Notably, the notion of a <em>group over an equivalence
relation</em> is introduced there in order to make sense of how program texts (syntax) can be manipulated in
tandem with the programs (semantics) they represent. In short, many different program texts are equivalent to
the same (abstract) program, so the equality operator = used in the group axioms is simply replaced
by the equivalence operator, ≡. Obviously, this technique isn't restricted to groups, and the idea can be
under parallel and sequential composition operations. Selecting a semantics for parallel composition that
Idempotent semirings, sometimes called <em>dioids</em>, lack the additive inverses that rings have,
<p>Every Cabra program takes, as input, an <em>input set</em>, which is a set of nonnegative integers.
Every Cabra program either produces an <em>output set</em>, which is also a set of nonnegative integers,
<p>The instruction <code>UNSET <var>n</var></code>, where <var>n</var> is any nonnegative integer,
except that it never includes <var>n</var>, even if <var>n</var> was included in the input set.)</p>
<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code>
is a Cabra program, where <var>n</var> is any nonnegative integer. If <var>n</var> is a member of the input set of the <code>IFSET</code> program,
then the input set is used as the input set of <var>a</var>, and the output set of <var>a</var> is used as the output set of the <code>IFSET</code>;
<var>b</var> is ignored. Conversely, if <var>n</var> is not a member of the input set of the <code>IFSET</code>,
then the input set is used as the input set of <var>b</var>, and the output set of <var>b</var> is used as the output set of the <code>IFSET</code>;
<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code><var>a</var>*<var>b</var></code>
is a Cabra program. The input set of <code><var>a</var>*<var>b</var></code> is used as the input set of <var>a</var>.
The output set of <var>a</var> is used as the input set of <var>b</var>. The output set of <var>b</var> is used
as the output set of <code><var>a</var>*<var>b</var></code>. (This is sequential composition; <var>a</var> is
executed, then <var>b</var>. A more usual notation might be <code><var>a</var>;<var>b</var></code>.)</p>
<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code><var>a</var>+<var>b</var></code>
is a Cabra program. The input set of <code><var>a</var>+<var>b</var></code> is used as the input set of
both <var>a</var> and <var>b</var>, and the output set of whichever of <var>a</var> or <var>b</var>
terminates first is used as the output set of <code><var>a</var>+<var>b</var></code>. See below for the
final result determined by a kind of race condition. A more usual notation might be <var>a</var>||<var>b</var>.)</p>
<p>We still need to define what it means for one Cabra program to "terminate before" another, different Cabra program.
Execution of a Cabra program is considered to take a nonnegative integral number of imaginary things
called <em>cycles</em>. A Cabra program <var>a</var> terminates before <var>b</var> if the number of cycles taken by <var>a</var>
on some input set <var>S</var> is fewer than the number of cycles taken by <var>b</var> on <var>S</var>; or, if the number of cycles taken by <var>a</var> on <var>S</var> is the same as the number
of cycles taken by <var>b</var> on <var>S</var>, then <var>a</var> terminates before <var>b</var> if <var>a</var> has a smaller lexical order than <var>b</var>.
then <var>a</var> = <var>b</var> and "terminate before" is meaningless because it is only defined between
<p>The formula for calculating cycles taken in general depends on both the program and its input set, but is deterministic
in the sense that the same program on the same input set will always take the same number of cycles.
(This isn't the real world where clock speeds vary at +/- 0.01% or anything like that. It is as if execution is
one cycle of program <var>a</var> is executed, then one cycle of <var>b</var>, and so forth, until one or the other
<li><code><var>a</var>*<var>b</var></code> takes the sum of the number of cycles taken by <var>a</var> and the number of cycles taken by <var>b</var>.</li>
<li><code><var>a</var>+<var>b</var></code> takes either the number of cycles taken by <var>a</var> or the number of cycles taken by <var>b</var>, whichever is fewer.</li>
<li><code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code> takes either the number of cycles taken by <var>a</var> or the number of cycles taken by <var>b</var>,
<li><code>SET <var>n</var></code> takes <var>n</var> cycles if <var>n</var> was not already set, but only 1 cycle if it was already set.</li>
<p>In fact, other formulae are possible. The somewhat unusual determination of cycles in the case of <code>SET <var>n</var></code>
is mainly to keep things interesting by ensuring that the number of cycles is dependent upon the contents of the input set.</p>
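<p>The operational rules above can be collected into a tiny evaluator. The following is a minimal sketch in Python, not the reference implementation; the tuple encoding is mine, and the cycle costs for <code>SKIP</code>, <code>UNSET</code>, the <code>IFSET</code> test, and the tiebreaking of equal-cycle parallel branches are assumptions where the excerpt elides them. <code>BOTTOM</code> is modelled as taking infinitely many cycles.</p>

```python
import math

def run(prog, s):
    """Evaluate a Cabra program (nested tuples) on input set `s`.
    Returns (output_set, cycles); a non-terminating program is
    modelled as (None, math.inf)."""
    op = prog[0]
    if op == 'SKIP':                        # neutral element for *
        return set(s), 1                    # assumed cost: 1 cycle
    if op == 'BOTTOM':                      # never terminates
        return None, math.inf
    if op == 'SET':
        n = prog[1]                         # n cycles if n was unset, else 1
        return set(s) | {n}, (1 if n in s else n)
    if op == 'UNSET':
        return set(s) - {prog[1]}, 1        # assumed cost: 1 cycle
    if op == 'IFSET':
        _, n, a, b = prog
        out, c = run(a if n in s else b, s)
        return out, c + 1                   # assumed: +1 cycle for the test
    if op == 'SEQ':                         # a*b: cycles add
        _, a, b = prog
        out, ca = run(a, s)
        if out is None:
            return None, math.inf           # BOTTOM annihilates *
        out2, cb = run(b, out)
        return out2, ca + cb
    if op == 'PAR':                         # a+b: the faster side wins
        _, a, b = prog
        oa, ca = run(a, s)
        ob, cb = run(b, s)
        # lexical-order tiebreak elided: here the left side wins ties
        return (oa, ca) if ca <= cb else (ob, cb)
    raise ValueError('unknown form: %r' % (prog,))
```

<p>For instance, <code>run(('PAR', ('SET', 2), ('SET', 5)), set())</code> yields <code>({2}, 2)</code>: <code>SET 2</code> finishes in 2 cycles, <code>SET 5</code> in 5, so the left branch wins the race.</p>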
<p>Probably goes without saying, but to state it anyway: for a program <var>a</var> which is an element of a set of programs <var>P</var>,
we say <var>a</var> <em>terminates first</em> (with respect to <var>P</var>) if it terminates before every other program in <var>P</var>.</p>
<p>The formula for calculating lexical order depends only on the program. It acts as a "tiebreaker" when two programs take the same number of cycles.</p>
<p>For compound programs, the ordering is: <code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code>
comes before <code><var>a</var>+<var>b</var></code>, which comes before <code><var>a</var>*<var>b</var></code>.</p>
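<p>Taken together with the length and left-to-right rules stated just below, the tiebreaking order can be sketched in Python. (The tuple encoding and the relative ranks assigned to the four primitive instructions are my assumptions; the text only fixes the order of the compound forms and puts primitives before compounds.)</p>

```python
# Assumed ranks for the primitives; the document only fixes the
# relative order of the three compound forms (IFSET, then +, then *).
RANK = {'SKIP': 0, 'BOTTOM': 1, 'SET': 2, 'UNSET': 3,
        'IFSET': 4, 'PAR': 5, 'SEQ': 6}

def size(prog):
    """Number of subterms: the 'length' used for the ordering."""
    return 1 + sum(size(p) for p in prog[1:] if isinstance(p, tuple))

def order_key(prog):
    """Sort key: shorter programs first (so primitives precede
    compounds), then by constructor rank, then subterms compared
    left to right."""
    return (size(prog), RANK[prog[0]],
            [order_key(p) if isinstance(p, tuple) else p
             for p in prog[1:]])
```

<p>Sorting a list of programs with <code>sorted(programs, key=order_key)</code> then realizes the lexical order.</p>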
<p>All primitive programs come before nonprimitive programs, and in general, programs with shorter length
(measured in terms of number of subterms) come before those with longer length. In programs with the same
length, subterms are compared left-to-right. (This happens to be the same kind of lexical ordering as you
<p>Oh, but I can hear you objecting now: <em>Waitaminnit! This language is totally weak. You can't do hardly anything with it.</em></p>
<p>True enough. Cabra is nowhere close to being a real programming language. But I decided to design it this way anyway,
<p>One reason is to demonstrate that these algebraical problems involving parallel composition occur even for what would be
very small subsets of a "real" programming language. You can imagine a full-fledged version of Cabra with
variables, arrays, <code>WHILE</code> loops, input/output, and the like, but you can readily see that these ameliorations don't make the central
<p>Another reason is that Cabra, despite almost being, say, just a kind of algebraical structure, <em>looks</em> a lot
like the beginnings of a programming language. It's got a conditional form, and imperative update. It almost looks like an overspecified language —
heck, a <em>machine</em> — because of the definition of how parallel composition works in terms of cycles and everything.</p>
<p>A third reason is that it's just a little... askew. Note that if we had a <code>WHILE</code> instruction,
because we could just do something like <code>WHILE FALSE SKIP</code>. But we don't have <code>WHILE</code>.
This is, in my opinion, pretty weird: here's this very weak language, yet it's still capable of somewhat unpleasant things
<p>And lastly I did it to irk people who think that the goal of esolang design is to make a language that
is Turing-complete. Give me an interesting weak language over a boring Turing-complete language any day.</p>
<p>Now, recall — or go look up in an abstract algebra textbook — or just take my word for it — that
<li> + : <var>S</var> × <var>S</var> → <var>S</var> is a binary operation on <var>S</var>; and</li>
<li> * : <var>S</var> × <var>S</var> → <var>S</var> is another binary operation on <var>S</var>,</li>
<li> (<var>a</var> + <var>b</var>) + <var>c</var> ≡ <var>a</var> + (<var>b</var> + <var>c</var>) (addition is associative)</li>
<li> <var>a</var> + <var>b</var> ≡ <var>b</var> + <var>a</var> (addition is commutative)</li>
 <li> <var>a</var> + 0 ≡ 0 + <var>a</var> ≡ <var>a</var> (existence of additive identity)</li>
 <li> (<var>a</var> * <var>b</var>) * <var>c</var> ≡ <var>a</var> * (<var>b</var> * <var>c</var>) (multiplication is associative)</li>
<li><var>a</var> * 1 ≡ 1 * <var>a</var> ≡ <var>a</var> (existence of multiplicative identity)</li>
<li><var>a</var> * (<var>b</var> + <var>c</var>) ≡ (<var>a</var> * <var>b</var>) + (<var>a</var> * <var>c</var>) (multiplication left-distributes over addition)</li>
<li>(<var>a</var> + <var>b</var>) * <var>c</var> ≡ (<var>a</var> * <var>c</var>) + (<var>b</var> * <var>c</var>) (multiplication right-distributes over addition)</li>
<li><var>a</var> * 0 ≡ 0 * <var>a</var> ≡ 0 (additive identity is multiplicative annihilator)</li>
<p>Now I'll attempt to show that Cabra programs form an idempotent semiring, over the equivalence relation of "semantically identical", under the
operations <code>+</code>, considered additive, and <code>*</code>, considered multiplicative. For each axiom, I'll argue informally that it holds for all Cabra programs, hoping that an appeal to your intuition will be sufficient to
convince you. A formal proof is also possible I'm sure, but it would be tedious, and probably not really illuminating.</p>
this is completely independent of the order in which they are considered to have been organized into a parallel arrangement.</p>
<p>Parallel composition is commutative. When running programs in parallel, one would naturally expect that the order of the programs
doesn't matter — in fact, the concept doesn't really make sense. In <code><var>a</var>+<var>b</var></code>, <var>a</var> and <var>b</var> aren't
really running in any order; they're running at the same time. The result of <code><var>a</var>+<var>b</var></code>
is determined by which of <var>a</var> or <var>b</var> terminates first, and this is not affected by which order they appear on either
side of the <code>+</code> operator. (In fact, that was why I introduced the "lexical order" tiebreaker,
<p>There is a neutral element for parallel composition. Indeed, <code>BOTTOM</code> is this neutral element.
therefore <code><var>a</var>+BOTTOM</code> ≡ <code>BOTTOM+<var>a</var></code> ≡ <code><var>a</var></code>.</p>
<p>Parallel composition is idempotent. Intuitively, executing two copies of the same program in parallel will have the same result as
compute the same thing, it doesn't matter which one terminates first (and in our definition, this is undefined.)</p>
<p>Sequential composition is associative. The result of <code><var>a</var>*<var>b</var>*<var>c</var></code>
does not differ if one first composes <var>a</var> and <var>b</var> to obtain a new program, say <var>d</var>, then composes <var>d</var> and <var>c</var>,
or if one first composes <var>b</var> and <var>c</var> to get <var>e</var>, then composes <var>a</var> and <var>e</var>.
like Pascal: the program <code>BEGIN A; B END; C</code> is semantically identical to <code>A; BEGIN B; C END</code>.
(Note that we are talking about <em>composition</em> here, not execution. When we put together programs to form a new program,
it doesn't matter what order we put them together in, we get the same new program. If we were to <em>execute</em> those component programs
<p>There is a neutral element for sequential composition. Indeed, <code>SKIP</code> is this neutral element.
Say that a program <var>a</var>, given an input set <var>S</var>, produces an output set <var>T</var>.
Then the programs <code><var>a</var>*SKIP</code> and <code>SKIP*<var>a</var></code> will also, fairly obviously, produce <var>T</var> when given <var>S</var>.</p>
<p>Sequential composition left-distributes over parallel composition. This is similar to the argument that parallel composition is
idempotent. For a program <var>a</var> that runs before two programs in parallel <code><var>b</var>+<var>c</var></code>, it doesn't matter if one copy of <var>a</var> runs
sequentially before both <var>b</var> and <var>c</var>, or if two copies of <var>a</var> run in parallel, one sequentially before <var>b</var> and one sequentially before <var>c</var>.
In both cases it's as if one copy of <var>a</var> ran, and in both cases both <var>b</var> and <var>c</var> get the same input set.</p>
<p>Sequential composition right-distributes over parallel composition. Consider <code><var>a</var>+<var>b</var></code>. On some input <var>S</var>, say <var>a</var> will take <var>x</var> cycles and
<var>b</var> will take <var>y</var> cycles, and the output set <var>T</var> will come from whichever of the two took fewer cycles.
So if there was a subsequent program <var>c</var>, it would take <var>T</var> as its input set, and it itself would take <var>z</var> cycles.
Because addition is monotonic — if <var>x</var> > <var>y</var> then <var>x</var> + <var>z</var> > <var>y</var> + <var>z</var> — whichever branch was the "winner" of <code><var>a</var>+<var>b</var></code>,
the same branch will be the winner of <code>(<var>a</var>*<var>c</var>)+(<var>b</var>*<var>c</var>)</code>.
Also, in this branch, the input set to <var>c</var> will still be <var>T</var>, the output set of the winning branch of <code><var>a</var>+<var>b</var></code>.
<p>The neutral element for parallel composition is an annihilator for sequential composition. Indeed, if we
run <code><var>a</var>*BOTTOM</code>, or <code>BOTTOM*<var>a</var></code>, neither of those terminate, so we get the same result as running just <code>BOTTOM</code>.</p>
<p>As I mentioned, the original intent was for Cabra programs to form a ring under sequential and parallel composition.
Here, we don't need to devise something that "undoes" concatenation (sequential composition) of two programs, and
specifically, we don't have to worry if either program fails to terminate; the concatenated program simply
<p>For Cabra programs to form a fullfledged ring, every program would need to have a unique additive inverse.
But there can be no such program, as Cabra has been defined: if <var>a</var> terminates, then there's nothing <var>b</var> can do to
<p>So Cabra programs do not form a ring. Further, it's unclear what would have to change to allow this.
A simple instruction <code>HANGOTHERS</code> could be defined as sort of throwing a monkey wrench into every other
currently executing program: <code><var>a</var>+HANGOTHERS</code> ≡ <code>HANGOTHERS+<var>a</var></code> ≡ <code>BOTTOM</code>.
But every element is supposed to have a <em>unique</em> additive inverse, and this doesn't do that, because <code>HANGOTHERS</code>
<p>The semantics Cabra ended up having for parallel composition are not those which I originally envisioned.
I was certainly hoping for something more interesting than a deterministic race condition. However, if one chooses
parallel semantics that are perhaps more commonplace, definite problems arise, whether in a ring or a semiring.</p>
<p>Take, for instance, a semantics where the output set of <code><var>a</var>+<var>b</var></code> is
the union of the output sets of <var>a</var> and <var>b</var> individually. This coincides with a fairly intuitive
notion of parallel processing where we fork different processes to compute different parts of a larger computation,
while sequential composition happily left-distributes over parallel composition, it fails to right-distribute over it,
<blockquote><code>(SET 1 + SET 2) * IFSET 1 THEN (IFSET 2 THEN SET 3 ELSE SKIP) ELSE SKIP</code></blockquote>
<p>evaluates to {1, 2, 3} on a null input set, because <code>(SET 1 + SET 2)</code> evaluates to {1, 2}, and
the remainder of the program tests for the presence of both 1 and 2 and, finding them, puts 3 in the output as well.
</p>
<p>But the right-distributed program</p>
<blockquote><code>(SET 1 * IFSET 1 THEN (IFSET 2 THEN SET 3 ELSE SKIP) ELSE SKIP) + (SET 2 * IFSET 1 THEN (IFSET 2 THEN SET 3 ELSE SKIP) ELSE SKIP)</code></blockquote>
<p>evaluates to {1, 2}, because the tests for both 1 and 2 in each of the parallel programs fail, because each of those
<p>Or take a semantics where the output set of <code><var>a</var>+<var>b</var></code> is
the <em>intersection</em> of the output sets of <var>a</var> and <var>b</var> individually. While less
useful-seeming than union, perhaps, this still suggests a known parallel processing technique, namely, running the
<blockquote><code>(SET 4 + UNSET 4) * IFSET 4 THEN (UNSET 4 * SET 6) ELSE SET 5</code></blockquote>
<p>but from the right-distributed program</p>
<blockquote><code>(SET 4 * IFSET 4 THEN (UNSET 4 * SET 6) ELSE SET 5) + (UNSET 4 * IFSET 4 THEN (UNSET 4 * SET 6) ELSE SET 5)</code></blockquote>
<p>we get a null output set, because the output set of the first parallel program is {6}, and the output set of the second is {5}.</p>
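<p>Both counterexamples are easy to check mechanically. Below is a throwaway Python evaluator for the cycle-free fragment (the tuple encoding is mine), parameterized on how the two outputs of a parallel composition are merged; it confirms that right-distributivity fails under both the union and the intersection semantics.</p>

```python
def run(prog, s, merge):
    """Evaluate prog on set `s`; `merge` combines the two outputs of a
    parallel composition (set.union or set.intersection)."""
    op = prog[0]
    if op == 'SKIP':  return s
    if op == 'SET':   return s | {prog[1]}
    if op == 'UNSET': return s - {prog[1]}
    if op == 'IFSET': return run(prog[2] if prog[1] in s else prog[3], s, merge)
    if op == 'SEQ':   return run(prog[2], run(prog[1], s, merge), merge)
    if op == 'PAR':   return merge(run(prog[1], s, merge), run(prog[2], s, merge))
    raise ValueError(prog)

# Union semantics: (SET 1 + SET 2) * test, versus the right-distributed form.
test1 = ('IFSET', 1, ('IFSET', 2, ('SET', 3), ('SKIP',)), ('SKIP',))
lhs1 = ('SEQ', ('PAR', ('SET', 1), ('SET', 2)), test1)
rhs1 = ('PAR', ('SEQ', ('SET', 1), test1), ('SEQ', ('SET', 2), test1))
print(run(lhs1, set(), set.union))          # {1, 2, 3}
print(run(rhs1, set(), set.union))          # {1, 2}

# Intersection semantics: (SET 4 + UNSET 4) * test, versus the distributed form.
test2 = ('IFSET', 4, ('SEQ', ('UNSET', 4), ('SET', 6)), ('SET', 5))
lhs2 = ('SEQ', ('PAR', ('SET', 4), ('UNSET', 4)), test2)
rhs2 = ('PAR', ('SEQ', ('SET', 4), test2), ('SEQ', ('UNSET', 4), test2))
print(run(lhs2, set(), set.intersection))   # {5}
print(run(rhs2, set(), set.intersection))   # set()
```

<p>In each pair the undistributed and the distributed program give different output sets on a null input, so <code>(<var>a</var>+<var>b</var>)*<var>c</var></code> ≢ <code>(<var>a</var>*<var>c</var>)+(<var>b</var>*<var>c</var>)</code> under either merge.</p>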
<p>Also, both of these approaches would, naturally, require both of the parallel programs to terminate before
their results could be merged to form the combined result (whether it be union or intersection.) This means that if either of them was <code>BOTTOM</code>,
would be an annihilator for addition as well as for multiplication, and that (at least in the union case) <code>SKIP</code>
This is less than swell, in terms of conforming to ring axioms, because one theorem of ring theory is that
the multiplicative identity and the additive identity are equal iff the ring consists of <em>only one element</em>,
<p>(I think, also, that if you do manage to have a "ring" where 1 ≡ 0, but where there are clearly elements that aren't
either 0 or 1, it's that the additive operation and multiplicative operation are really the <em>same</em> operation
under the semantic equivalence relation. One of my very early designs for Cabra came up against this, and it's
and we're really talking about something that's a monoid or group or something, <em>not</em> a ring.)</p>
<p>Based on this, I will go so far as to conjecture that, in fact, <em>any</em> semantics for parallel composition
The only reason Cabra manages to be right-distributive is because it has a semantics which does not
<p>There are ways to address the problems of Cabra — or otherwise try to make it more interesting —
A Kleene algebra is a dioid with an additional unary postfix operator *: <var>S</var> → <var>S</var>.
<var>a</var>* ≡ 1 + <var>a</var> + (<var>a</var> * <var>a</var>) + (<var>a</var> * <var>a</var> * <var>a</var>)
+ (<var>a</var> * <var>a</var> * <var>a</var> * <var>a</var>) + ... Kleene algebras are used for such things
as the theory of regular expressions, where the Kleene star indicates "zero or more occurrences."</p>
<p>If we try to extend Cabra from a dioid to a Kleene algebra by adding a Kleene star, we appear to get a nonplussing
result. Since <code><var>a</var></code> always takes fewer cycles than <code><var>a</var>*<var>a</var></code>
(except when <var>a</var> is <code>SKIP</code>, in which case <code><var>a</var>*<var>a</var></code> ≡ <code><var>a</var></code>),
<p>What does that get us? I'm not sure. I suspect nothing, unless there is some neat property of the
ordering induced by the Kleene algebra that shows up. But it doesn't seem worth checking, to me.</p>
<p>A perhaps more obvious "surgery" to make is to drop the requirement that semirings be right-distributive (while keeping left-distributivity). This lets us have
"intuitive" semantics for parallel composition. I suppose then you get a kind of biased semiring. But I don't know what
can be said about biased semirings. I've not seen them mentioned elsewhere in the literature, and I suspect they are not very interesting
if there aren't many other examples of them, and if there are no nontrivial theorems about them.</p>
<p>Also, if it were not for <code>BOTTOM</code>, every program would have a multiplicative inverse: simply find which elements
of the set the program changes, and undo those changes: do a <code>SET</code> everywhere it did an <code>UNSET</code>,
and vice versa. Then, I suppose, you get an idempotent semiring with multiplicative inverses, for whatever that's worth; again,
<p>There is an implementation of Cabra in Haskell, <code>cabra.hs</code>, but it's really more of a token offering than a reference implementation.
There isn't even a parser: Cabra programs have to be given as Haskell terms, which syntactically only vaguely resemble
<p>I came up with the idea to make a language whose programs formed a ring shortly after getting Burro basically done —
fall or winter of 2005. I didn't develop the idea until around the spring of 2007, when it occurred to me that parallel and sequential execution
could work for the operators. Developing the language itself (and compromising on a dioid) occurred in late summer and fall of 2007.</p>
+<!DOCTYPE html PUBLIC "//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1strict.dtd">
+as several of the same issues are dealt with there. Notably, the notion of a <em>group over an equivalence
+relation</em> is introduced there in order to make sense of how program texts (syntax) can be manipulated in
+tandem with the programs (semantics) they represent. In short, many different program texts are equivalent to
+the same (abstract) program, so the equality operator = used in the group axioms is simply replaced
+by the equivalence operator, ≡. Obviously, this technique isn't restricted to groups, and the idea can be
+under parallel and sequential composition operations. Selecting a semantics for parallel composition that
+Idempotent semirings, sometimes called <em>dioids</em>, lack the additive inverses that rings have,
+<p>Every Cabra program takes, as input, an <em>input set</em>, which is a set of nonnegative integers.
+Every Cabra program either produces an <em>output set</em>, which is also a set of nonnegative integers,
+<p>The instruction <code>UNSET <var>n</var></code>, where <var>n</var> is any nonnegative integer,
+except that it never includes <var>n</var>, even if <var>n</var> was included in the input set.)</p>
+<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code>
+is a Cabra program, where <var>n</var> is any nonnegative integer. If <var>n</var> is a member of the input set of the <code>IFSET</code> program,
+then the input set is used as the input set of <var>a</var>, and the output set of <var>a</var> is used as the output set of the <code>IFSET</code>;
+<var>b</var> is ignored. Conversely, if <var>n</var> is not a member of the input set the <code>IFSET</code>,
+then the input set is used as the input set of <var>b</var>, and the output set of <var>b</var> is used as the output set of the <code>IFSET</code>;
+<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code><var>a</var>*<var>b</var></code>
+is a Cabra program. The input set of <code><var>a</var>*<var>b</var></code> is used as the input set of <var>a</var>.
+The output set of <var>a</var> is used as the input set of <var>b</var>. The output set of <var>b</var> is used
+as the output set of <code><var>a</var>*<var>b</var></code>. (This is sequential composition; <var>a</var> is
+executed, then <var>b</var>. A more usual notation might be <code><var>a</var>;<var>b</var></code>.)</p>
+<p>If <var>a</var> and <var>b</var> are Cabra programs, then <code><var>a</var>+<var>b</var></code>
+is a Cabra program. The input set of <code><var>a</var>+<var>b</var></code> is used as the input set of
+terminates first is used as the output set of <code><var>a</var>+<var>b</var></code>. See below for the
+final result determined by a kind of race condition. A more usual notation might be <var>a</var><var>b</var>.)</p>
+<p>We still need to define what it means for one Cabra program to "terminate before" another, different Cabra program.
+Execution of a Cabra program is considered to take a nonnegative integral number of imaginary things
+called <em>cycles</em>. A Cabra program <var>a</var> terminates before <var>b</var> if the number of cycles taken by <var>a</var>
+than the number of cycles taken by <var>b</var> on <var>S</var>; or, if the number of cycles taken by <var>a</var> on <var>S</var> is the same as the number
+of cycles taken by <var>b</var> on <var>S</var>, then <var>a</var> terminates before <var>b</var> if <var>a</var> has a smaller lexical order than <var>b</var>.
+then <var>a</var> = <var>b</var> and "terminate before" is meaningless because it only defined between
+<p>The formula for calculating cycles taken in general depends on both the program and its input set, but is deterministic
+in the sense that the same program on the same input set will always take the same number of cycles.
+(This isn't the real world where clock speeds vary at +/0.01% or anything like that. It is as if execution is
+one cycle of program <var>a</var> is executed, then one cycle of <var>b</var>, and so forth, until one or the other
+<li><code><var>a</var>*<var>b</var></code> takes the sum of the number of cycles taken by <var>a</var> and the number of cycles taken by <var>b</var>.</li>
+<li><code><var>a</var>+<var>b</var></code> takes the either the number of cycles taken by <var>a</var> or the number of cycles taken by <var>b</var>, whichever is fewer.</li>
+<li><code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code> takes either the number of cycles taken by <var>a</var> or the number of cycles taken by <var>b</var>,
+<li><code>SET <var>n</var></code> takes <var>n</var> cycles if <var>n</var> was not already set, but only 1 cycle if it was already set.</li>
+<p>In fact, other formulae are possible. The somewhat unusual determination of cycles in the case of <code>SET <var>n</var></code>
+is mainly to keep things interesting by ensuring that the number of cycles is dependent upon the contents of the input set.</p>
+<p>Probably goes without saying, but to state it anyway: for a program <var>a</var> which is an element of a set of programs <var>P</var>,
+we say <var>a</var> <em>terminates first</em> (with respect to <var>P</var>) if it terminates before every other program in <var>P</var>.</p>
+<p>The formula for calculating lexical order depends only on the program. It acts as a "tiebreaker" when two programs take the same number of cycles.</p>
+<p>For compound programs, the ordering is <code>IFSET <var>n</var> THEN <var>a</var> ELSE <var>b</var></code>
+comes before <code><var>a</var>+<var>b</var></code> comes before <code><var>a</var>*<var>b</var></code>.</p>
+<p>All primitive programs come before nonprimitive programs, and in general, programs with shorter length
+(measured in terms of number of subterms) come before those with longer length. In programs with the same
+length, subterms are compared lefttoright. (This happens to be the same kind of lexical ordering as you
+<p>Oh, but I can hear you objecting now: <em>Waitaminnit! This language is totally weak. You can't do hardly anything with it.</em></p>
+<p>True enough. Cabra is nowhere close to being a real programming language. But I decided to design it this way anyway,
+<p>One reason is to demonstrate that these algebraical problems involving parallel composition occur even for what would be
+very small subsets of a "real" programming language. You can imagine a fullfledged version of Cabra with
+variables, arrays, <code>WHILE</code> loops, input/output, and the like, but you can readily see that these ameliorations don't make the central
+<p>Another reason is that Cabra, despite almost being, say, just a kind of algebraical structure, <em>looks</em> a lot
+like the beginnings of a programming language. It's got a conditional form, and imperative update. It almost looks like an overspecified language —
+heck, a <em>machine</em> — because of the definition of how parallel composition works in terms of cycles and everything.</p>
+<p>A third reason is that it's just a little... askew. Note that if we had a <code>WHILE</code> instruction,
+because we could just do something like <code>WHILE FALSE SKIP</code>. But we don't have <code>WHILE</code>.
+This is, in my opinion, pretty weird: here's this very weak language, yet it's still capable of somewhat unpleasant things
+<p>And lastly I did it to irk people who think that the goal of esolang design is to make a language that
+is Turingcomplete. Give me an interesting weak language over a boring Turingcomplete language anyday.</p>
+<p>Now, recall — or go look up in an abstract algebra textbook — or just take my word for it — that
+<li> + : <var>S</var> × <var>S</var> → <var>S</var> is a binary operation on <var>S</var>; and</li>
+<li> * : <var>S</var> × <var>S</var> → <var>S</var> is another binary operation on <var>S</var>,</li>
+<li> (<var>a</var> + <var>b</var>) + <var>c</var> ≡ <var>a</var> + (<var>b</var> + <var>c</var>) (addition is associative)</li>
+<li> <var>a</var> + <var>b</var> ≡ <var>b</var> + <var>a</var> (addition is commutative)</li>
+ <li> <var>a</var> + 0 ≡ 0 + <var>a</var> ≡ <var>a</var> (existence of additive identity)</li>
+ <li> (<var>a</var> * <var>b</var>) * <var>c</var> ≡ <var>a</var> * (<var>b</var> * <var>c</var>) (multiplication is associative)</li>
+<li><var>a</var> * 1 ≡ 1 * <var>a</var> ≡ <var>a</var> (existence of multiplicative identity)</li>
+<li><var>a</var> * (<var>b</var> + <var>c</var>) ≡ (<var>a</var> * <var>b</var>) + (<var>a</var> * <var>c</var>) (multiplication leftdistributes over addition)</li>
+<li>(<var>a</var> + <var>b</var>) * <var>c</var> ≡ (<var>a</var> * <var>c</var>) + (<var>b</var> * <var>c</var>) (multiplication rightdistributes over addition)</li>
+<li><var>a</var> * 0 ≡ 0 * <var>a</var> ≡ 0 (additive identity is multiplicative annihiliator)</li>
+<p>Now I'll attempt to show that Cabra programs form an idempotent semiring, over the equivalence relation of "semantically identical", under the
+considered additive, and <code>*</code>, considered multiplicative. For each axiom, I'll argue informally that it holds for all Cabra programs, hoping that an appeal to your intuition will be sufficient to
+convince you. A formal proof is also possible I'm sure, but it would be tedious, and probably not really illuminating.</p>
+this is completely independent of the order in which they are considered to have been organized into a parallel arrangement.</p>
+<p>Parallel composition is commutative. When running programs in parallel, one would naturally expect that the order of the programs
+doesn't matter — in fact, the concept doesn't really make sense. In <code><var>a</var>+<var>b</var></code>, <var>a</var> and <var>b</var> aren't
+really running in any order; they're running at the same time. The result of <code><var>a</var>+<var>b</var></code>
+is determined by which of <var>a</var> or <var>b</var> terminates first, and this is not affected by which order they appear on either
+side of the <code>+</code> operator. (In fact, that was why I introduced the "lexical order" tiebreaker,
+<p>There is a neutral element for parallel composition. Indeed, <code>BOTTOM</code> is this neutral element.
+therefore <code><var>a</var>+BOTTOM</code> ≡ <code>BOTTOM+<var>a</var></code> ≡ <code><var>a</var></code>.</p>
+<p>Parallel composition is idempotent. Intuitively, executing two copies of the same program in parallel will have the same result as
+compute the same thing, it doesn't matter which one terminates first (and in our definition, this is undefined.)</p>
+<p>Sequential composition is associative. The result of <code><var>a</var>*<var>b</var>*<var>c</var></code>
+does not differ if one first composes <var>a</var> and <var>b</var> to obtain a new program, say <var>d</var>, then composes <var>d</var> and <var>c</var>,
+or if one first composes <var>b</var> and <var>c</var> to get <var>e</var>, then composes <var>a</var> and <var>e</var>.
+like Pascal: the program <code>BEGIN A; B END; C</code> is semantically identical to <code>A; BEGIN B; C END</code>.
+(Note that we are talking about <em>composition</em> here, not execution. When we put together programs to form a new program,
+it doesn't matter what order we put them together in, we get the same new program. If we were to <em>execute</em> those component programs
+<p>There is a neutral element for sequential composition. Indeed, <code>SKIP</code> is this neutral element.
+Then the programs <code><var>a</var>*SKIP</code> and <code>SKIP*<var>a</var></code> will also, fairly obviously, produce <var>T</var> when given <var>S</var>.</p>
+<p>Sequential composition leftdistributes over parallel composition. This is similar to the argument that parallel composition is
+that runs before two programs in parallel <code><var>b</var>+<var>c</var></code>, it doesn't matter if one copy of <var>a</var> runs
+sequentially before both <var>b</var> and <var>c</var>, or if two copies of <var>a</var> run in parallel, one sequentially before <var>b</var> and one sequentially before <var>c</var>.
+In both cases it's as if one copy of <var>a</var> ran, and in both cases both <var>b</var> and <var>c</var> get the same input set.</p>
+<p>Sequential composition right-distributes over parallel composition. Consider <code><var>a</var>+<var>b</var></code>. On some input <var>S</var>,
+<var>b</var> will take <var>y</var> cycles, and the output set <var>T</var> will come from whichever of the two finishes in fewer cycles.
+So if there was a subsequent program <var>c</var>, it would take <var>T</var> as its input set, and it itself would take <var>z</var> cycles.
+Because addition is monotonic — if x > y then x + z > y + z — whichever branch was the "winner" of <code><var>a</var>+<var>b</var></code>,
+the same branch will be the winner of <code>(<var>a</var>*<var>c</var>)+(<var>b</var>*<var>c</var>)</code>.
+Also, in this branch, the input set to <var>c</var> will still be <var>T</var>, the output set of the winning branch of <code><var>a</var>+<var>b</var></code>.
+<p>The neutral element for parallel composition is an annihilator for sequential composition. Indeed, if we
+run <code><var>a</var>*BOTTOM</code>, or <code>BOTTOM*<var>a</var></code>, neither of those terminate, so we get the same result as running just <code>BOTTOM</code>.</p>
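<p>The distributivity and annihilator laws can likewise be spot-checked against a toy model of the race semantics. This is a hypothetical encoding of my own: it charges one cycle per instruction and resolves race ties in favour of the left operand, in lieu of the full lexical-order rule.</p>

```python
# None models BOTTOM (divergence); otherwise results are (cycles, bits).
BOTTOM = None

def run(prog, bits):
    """Hypothetical encoding: ('skip',), ('bottom',), ('set', n),
    ('unset', n), ('seq', a, b), ('par', a, b)."""
    op = prog[0]
    if op == 'skip':
        return (0, bits)
    if op == 'bottom':
        return BOTTOM
    if op == 'set':
        return (1, bits | {prog[1]})
    if op == 'unset':
        return (1, bits - {prog[1]})
    if op == 'seq':
        r = run(prog[1], bits)
        if r is BOTTOM:
            return BOTTOM       # BOTTOM annihilates on either side
        return charge(r[0], run(prog[2], r[1]))
    if op == 'par':
        ra, rb = run(prog[1], bits), run(prog[2], bits)
        if ra is BOTTOM:
            return rb
        if rb is BOTTOM:
            return ra
        return ra if ra[0] <= rb[0] else rb  # fewest cycles wins

def charge(c, r):
    """Add c cycles to a result, preserving divergence."""
    return BOTTOM if r is BOTTOM else (c + r[0], r[1])
```

<p>Because sequencing adds a constant number of cycles to both branches, the winner of a race is unchanged by composing the same program before or after it, which is exactly the monotonicity argument above.</p>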
+<p>As I mentioned, the original intent was for Cabra programs to form a ring under sequential and parallel composition.
+Here, we don't need to devise something that "undoes" concatenation (sequential composition) of two programs, and
+specifically, we don't have to worry if either program fails to terminate; the concatenated program simply
+<p>For Cabra programs to form a full-fledged ring, every program would need to have a unique additive inverse.
+But there can be no such program, as Cabra has been defined: if <var>a</var> terminates, then there's nothing <var>b</var> can do to
+<p>So Cabra programs do not form a ring. Further, it's unclear what would have to change to allow this.
+A simple instruction <code>HANGOTHERS</code> could be defined as sort of throwing a monkey wrench into every other
+currently executing program: <code><var>a</var>+HANGOTHERS</code> ≡ <code>HANGOTHERS+<var>a</var></code> ≡ <code>BOTTOM</code>.
+But every element is supposed to have a <em>unique</em> additive inverse, and this doesn't do that, because <code>HANGOTHERS</code>
+<p>The semantics Cabra ended up having for parallel composition are not those which I originally envisioned.
+I was certainly hoping for something more interesting than a deterministic race condition. However, if one chooses
+parallel semantics that are perhaps more commonplace, definite problems arise, whether in a ring or a semiring.</p>
+<p>Take, for instance, a semantics where the output set of <code><var>a</var>+<var>b</var></code> is
+the union of the output sets of <var>a</var> and <var>b</var> individually. This coincides with a fairly intuitive
+notion of parallel processing where we fork different processes to compute different parts of a larger computation,
+while sequential composition happily left-distributes over parallel composition, it fails to right-distribute over it,
+<blockquote><p><code>(SET 1 + SET 2) * IFSET 1 THEN (IFSET 2 THEN SET 3 ELSE SKIP) ELSE SKIP</code></p></blockquote>
+<p>evaluates to {1, 2, 3} on a null input set, because <code>(SET 1 + SET 2)</code> evaluates to {1, 2}, and
+the remainder of the program tests for the presence of both 1 and 2 and, finding them, puts 3 in the output as well.
+<p>evaluates to {1, 2}, because the tests for both 1 and 2 in each of the parallel programs fail, because each of those
+the <em>intersection</em> of the output sets of <var>a</var> and <var>b</var> individually. While less
+useful-seeming than union, perhaps, this still suggests a known parallel processing technique, namely, running the
+<blockquote><p><code>(SET 4 + UNSET 4) * IFSET 4 THEN (UNSET 4 * SET 6) ELSE SET 5</code></p></blockquote>
+<p>we get a null output set, because the output set of the first parallel program is {6}, and the output set of the second is {5}.</p>
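<p>Both counterexamples above can be replayed mechanically. In the hypothetical sketch below, the merge operation for parallel composition is passed in as a parameter, so one interpreter covers both the union and the intersection semantics; cycle counting is irrelevant here and omitted.</p>

```python
def run(merge, prog, bits):
    """Interpreter parameterised on how two parallel results merge.

    Hypothetical encoding: ('skip',), ('set', n), ('unset', n),
    ('ifset', n, a, b), ('seq', a, b), ('par', a, b).
    """
    op = prog[0]
    if op == 'skip':
        return bits
    if op == 'set':
        return bits | {prog[1]}
    if op == 'unset':
        return bits - {prog[1]}
    if op == 'ifset':
        return run(merge, prog[2] if prog[1] in bits else prog[3], bits)
    if op == 'seq':
        return run(merge, prog[2], run(merge, prog[1], bits))
    if op == 'par':
        return merge(run(merge, prog[1], bits), run(merge, prog[2], bits))

union = frozenset.union
intersection = frozenset.intersection

# (SET 1 + SET 2) * (IFSET 1 THEN (IFSET 2 THEN SET 3 ELSE SKIP) ELSE SKIP),
# versus its right-distributed form.
TEST12 = ('ifset', 1, ('ifset', 2, ('set', 3), ('skip',)), ('skip',))
UNION_LHS = ('seq', ('par', ('set', 1), ('set', 2)), TEST12)
UNION_RHS = ('par', ('seq', ('set', 1), TEST12), ('seq', ('set', 2), TEST12))

# (SET 4 + UNSET 4) * (IFSET 4 THEN (UNSET 4 * SET 6) ELSE SET 5),
# versus its right-distributed form.
TEST4 = ('ifset', 4, ('seq', ('unset', 4), ('set', 6)), ('set', 5))
INTER_LHS = ('seq', ('par', ('set', 4), ('unset', 4)), TEST4)
INTER_RHS = ('par', ('seq', ('set', 4), TEST4), ('seq', ('unset', 4), TEST4))
```

<p>On a null input set, <code>UNION_LHS</code> yields {1, 2, 3} while <code>UNION_RHS</code> yields only {1, 2}; and <code>INTER_LHS</code> yields {5} while <code>INTER_RHS</code> yields the null set.</p>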
+<p>Also, both of these approaches would, naturally, require both of the parallel programs to terminate before
+their results could be merged to form the combined result (whether it be union or intersection.) This means that if either of them was <code>BOTTOM</code>,
+would be an annihilator for addition as well as for multiplication, and that (at least in the union case) <code>SKIP</code>
+This is less than swell, in terms of conforming to ring axioms, because one theorem of ring theory is that
+the multiplicative identity and the additive identity are equal iff the ring consists of <em>only one element</em>,
+<p>(I think, also, that if you do manage to have a "ring" where 1 ≡ 0, but where there are clearly elements that aren't
+either 0 or 1, what is actually going on is that the additive operation and multiplicative operation are really the <em>same</em> operation
+under the semantic equivalence relation. One of my very early designs for Cabra came up against this, and it's
+and we're really talking about something that's a monoid or group or something, <em>not</em> a ring.)</p>
+<p>Based on this, I will go so far as to conjecture that, in fact, <em>any</em> semantics for parallel composition
+The only reason Cabra manages to be right-distributive is that it has a semantics which does not
+<p>There are ways to address the problems of Cabra — or otherwise try to make it more interesting —
+A Kleene algebra is a dioid with an additional unary postfix operator *: <var>S</var> → <var>S</var>.
+<var>a</var>* ≡ 1 + <var>a</var> + (<var>a</var> * <var>a</var>) + (<var>a</var> * <var>a</var> * <var>a</var>)
++ (<var>a</var> * <var>a</var> * <var>a</var> * <var>a</var>) + ... Kleene algebras are used for such things
+as the theory of regular expressions, where the Kleene star indicates "zero or more occurrences."</p>
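<p>For a concrete feel for the star in its best-known home, here is a sketch over the Kleene algebra of languages: sets of strings, with union as the additive operation and concatenation as the multiplicative one. The star is approximated by its partial sums, truncated so the sets stay finite; the names are my own.</p>

```python
def times(a, b):
    """Concatenation of two languages (sets of strings)."""
    return {x + y for x in a for y in b}

def star_up_to(n, a):
    """Partial sum 1 + a + a*a + ... + a^n of the Kleene star,
    where 1 is the language containing only the empty string
    and + is union."""
    power, total = {''}, set()
    for _ in range(n + 1):
        total |= power
        power = times(a, power)
    return total
```

<p>So <code>star_up_to(2, {'ab'})</code> is the set of zero, one, or two occurrences of <code>ab</code>: the empty string, <code>ab</code>, and <code>abab</code>.</p>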
+<p>If we try to extend Cabra from a dioid to a Kleene algebra by adding a Kleene star, we appear to get a nonplussing
+result. Since <code><var>a</var></code> always takes fewer cycles than <code><var>a</var>*<var>a</var></code>
+(except when <var>a</var> is <code>SKIP</code>, and in that case <code><var>a</var>*<var>a</var></code> ≡ <code><var>a</var></code>,)
+<p>What does that get us? I'm not sure. I suspect nothing, unless there is some neat property of the
+ordering induced by the Kleene algebra that shows up. But it doesn't seem worth checking, to me.</p>
+<p>A perhaps more obvious "surgery" to make is to drop the requirement that semirings be right-distributive (while keeping left-distributivity.) This lets us have
+"intuitive" semantics for parallel composition. I suppose then you get a kind of biased semiring. But I don't know what
+can be said about biased semirings. I've not seen them mentioned elsewhere in the literature, and I suspect they are not very interesting
+if there aren't many other examples of them, and if there are no nontrivial theorems about them.</p>
+<p>Also, if it were not for <code>BOTTOM</code>, every program would have a multiplicative inverse: simply find which elements
+of the set the program changes, and undo those changes: do a <code>SET</code> everywhere it did an <code>UNSET</code>,
+and vice versa. Then, I suppose, you get an idempotent semiring with multiplicative inverses, for whatever that's worth; again,
+<p>There is an implementation of Cabra in Haskell, <code>cabra.hs</code>, but it's really more of a token offering than a reference implementation.
+There isn't even a parser: Cabra programs have to be given as Haskell terms, which syntactically only vaguely resemble
+<p>I came up with the idea to make a language whose programs formed a ring shortly after getting Burro basically done —
+fall or winter of 2005. I didn't develop the idea until around the spring of 2007, when it occurred to me that parallel and sequential execution
+could work for the operators. Developing the language itself (and compromising on a dioid) occurred in late summer and fall of 2007.</p>