
Anonymous committed 14b4a3c Draft

Import of Cabra version 1.0 revision 2011.1214 (just HTML fixes.)


Files changed (1)

 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+<!-- encoding: UTF-8 -->
 <html xmlns="http://www.w3.org/1999/xhtml" lang="en">
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 <title>The Cabra Programming Language</title>
+  <!-- begin html doc dynamic markup -->
+  <script type="text/javascript" src="/contrib/jquery-1.6.4.min.js"></script>
+  <script type="text/javascript" src="/scripts/documentation.js"></script>
+  <!-- end html doc dynamic markup -->
 </head>
 <body>
 
 relation</em> is introduced there in order to make sense of how program texts (syntax) can be manipulated in
 tandem with the programs (semantics) they represent.  In short, many different program texts are equivalent to
 the same (abstract) program, so the equality operator = used in the group axioms is simply replaced
-by the equivalence operator, &equiv;.  Obviously, this technique isn't restricted to groups, and the idea can be
+by the equivalence operator, ≡.  Obviously, this technique isn't restricted to groups, and the idea can be
 extended to that of any <em>algebra over an equivalence relation</em>, and it is this notion that
 is applied in Cabra to dioids.</p>
 
 problems any easier.</p>
 
 <p>Another reason is that Cabra, despite almost being, say, just a kind of algebraical structure, <em>looks</em> a lot
-like the beginnings of a programming language.  It's got a conditional form, and imperative update.  It almost looks like an overspecified language &mdash;
-heck, a <em>machine</em> &mdash; because of the definition of how parallel composition works in terms of cycles and everything.</p>
+like the beginnings of a programming language.  It's got a conditional form, and imperative update.  It almost looks like an overspecified language —
+heck, a <em>machine</em> — because of the definition of how parallel composition works in terms of cycles and everything.</p>
 
 <p>A third reason is that it's just a little... askew.  Note that if we had a <code>WHILE</code> instruction,
 we wouldn't need a <code>BOTTOM</code> instruction
 
 <h2>Dioid Theory</h2>
 
-<p>Now, recall &mdash; or go look up in an abstract algebra textbook &mdash; or just take my word for it &mdash; that
-an idempotent semiring is a triple &lang;<var>S</var>, +, *&rang; where:</p>
+<p>Now, recall — or go look up in an abstract algebra textbook — or just take my word for it — that
+an idempotent semiring is a triple 〈<var>S</var>, +, *〉 where:</p>
 
 <ul>
 <li><var>S</var> is a set of elements;</li>
-<li>  + : <var>S</var> &times; <var>S</var> &rarr; <var>S</var> is a binary operation on <var>S</var>; and</li>
-<li> * : <var>S</var> &times; <var>S</var> &rarr; <var>S</var> is another binary operation on <var>S</var>,</li>
+<li>+ : <var>S</var> × <var>S</var> → <var>S</var> is a binary operation on <var>S</var>; and</li>
+<li>* : <var>S</var> × <var>S</var> → <var>S</var> is another binary operation on <var>S</var>,</li>
 </ul>
 <p>where the following axioms of dioid theory (over an equivalence relation!) hold:</p>
 
 <ul>
-<li> (<var>a</var> + <var>b</var>) + <var>c</var> &equiv; <var>a</var> + (<var>b</var> + <var>c</var>)    (addition is associative)</li>
-<li>       <var>a</var> + <var>b</var> &equiv; <var>b</var> + <var>a</var>          (addition is commutative)</li>
- <li>      <var>a</var> + 0 &equiv; 0 + <var>a</var> &equiv; <var>a</var>              (existence of additive identity)</li>
- <li>      <var>a</var> + <var>a</var> &equiv; <var>a</var>              (addition is idempotent)</li>
- <li> (<var>a</var> * <var>b</var>) * <var>c</var> &equiv; <var>a</var> * (<var>b</var> * <var>c</var>)    (multiplication is associative)</li>
-<li><var>a</var> * 1 &equiv; 1 * <var>a</var> &equiv; <var>a</var>      (existence of multiplicative identity)</li>
-<li><var>a</var> * (<var>b</var> + <var>c</var>) &equiv; (<var>a</var> * <var>b</var>) + (<var>a</var> * <var>c</var>)  (multiplication left-distributes over addition)</li>
-<li>(<var>a</var> + <var>b</var>) * <var>c</var> &equiv; (<var>a</var> * <var>c</var>) + (<var>b</var> * <var>c</var>)  (multiplication right-distributes over addition)</li>
-<li><var>a</var> * 0 &equiv; 0 * <var>a</var> &equiv; 0           (additive identity is multiplicative annihiliator)</li>
+<li> (<var>a</var> + <var>b</var>) + <var>c</var> ≡ <var>a</var> + (<var>b</var> + <var>c</var>)    (addition is associative)</li>
+<li>       <var>a</var> + <var>b</var> ≡ <var>b</var> + <var>a</var>          (addition is commutative)</li>
+ <li>      <var>a</var> + 0 ≡ 0 + <var>a</var> ≡ <var>a</var>              (existence of additive identity)</li>
+ <li>      <var>a</var> + <var>a</var> ≡ <var>a</var>              (addition is idempotent)</li>
+ <li> (<var>a</var> * <var>b</var>) * <var>c</var> ≡ <var>a</var> * (<var>b</var> * <var>c</var>)    (multiplication is associative)</li>
+<li><var>a</var> * 1 ≡ 1 * <var>a</var> ≡ <var>a</var>      (existence of multiplicative identity)</li>
+<li><var>a</var> * (<var>b</var> + <var>c</var>) ≡ (<var>a</var> * <var>b</var>) + (<var>a</var> * <var>c</var>)  (multiplication left-distributes over addition)</li>
+<li>(<var>a</var> + <var>b</var>) * <var>c</var> ≡ (<var>a</var> * <var>c</var>) + (<var>b</var> * <var>c</var>)  (multiplication right-distributes over addition)</li>
+<li><var>a</var> * 0 ≡ 0 * <var>a</var> ≡ 0           (additive identity is multiplicative annihiliator)</li>
 </ul>
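<p>As a concrete sanity check of the axiom list above, here is a short Python sketch that verifies each axiom on the tropical (min, +) semiring, a standard example of an idempotent semiring.  The choice of semiring, the sample elements, and the function names are my own, purely for illustration:</p>

```python
# Check the dioid axioms on the tropical (min, +) semiring:
# "addition" is min (identity: infinity), "multiplication" is
# ordinary + (identity: 0), and infinity annihilates under +.
INF = float('inf')

def add(a, b): return min(a, b)      # semiring +
def mul(a, b): return a + b          # semiring *
ZERO, ONE = INF, 0

S = [0, 1, 2, 5, INF]                # a few sample elements
for a in S:
    assert add(a, ZERO) == add(ZERO, a) == a      # additive identity
    assert add(a, a) == a                         # idempotence
    assert mul(a, ONE) == mul(ONE, a) == a        # multiplicative identity
    assert mul(a, ZERO) == mul(ZERO, a) == ZERO   # annihilator
    for b in S:
        assert add(a, b) == add(b, a)             # + commutative
        for c in S:
            assert add(add(a, b), c) == add(a, add(b, c))          # + associative
            assert mul(mul(a, b), c) == mul(a, mul(b, c))          # * associative
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # left-distributive
            assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))  # right-distributive
print("all dioid axioms hold on the samples")
```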
 
 <p>Now I'll attempt to show that Cabra programs form an idempotent semiring, over the equivalence relation of "semantically identical", under the
 this is completely independent of the order in which they are considered to have been organized into a parallel arrangement.</p>
 
 <p>Parallel composition is commutative.  When running programs in parallel, one would naturally expect that the order of the programs
-doesn't matter &mdash; in fact, the concept doesn't really make sense.  In <code><var>a</var>+<var>b</var></code>, <var>a</var> and <var>b</var> aren't
+doesn't matter — in fact, the concept doesn't really make sense.  In <code><var>a</var>+<var>b</var></code>, <var>a</var> and <var>b</var> aren't
 really running in any order; they're running at the same time. The result of <code><var>a</var>+<var>b</var></code>
 is determined by which of <var>a</var> or <var>b</var> terminates first, and this is not affected by which order they appear on either
 side of the <code>+</code> operator.  (In fact, that was why I introduced the "lexical order" tie-breaker,
 
 <p>There is a neutral element for parallel composition.  Indeed, <code>BOTTOM</code> is this neutral element.
 Any program <var>a</var> that terminates will by definition terminate before <code>BOTTOM</code>,
-therefore <code><var>a</var>+BOTTOM</code> &equiv; <code>BOTTOM+<var>a</var></code> &equiv; <code><var>a</var></code>.</p>
+therefore <code><var>a</var>+BOTTOM</code> ≡ <code>BOTTOM+<var>a</var></code> ≡ <code><var>a</var></code>.</p>
 
 <p>Parallel composition is idempotent.  Intuitively, executing two copies of the same program in parallel will have the same result as
 executing only one copy would.  Since they both have the same input set and they both
 Now suppose we sequentially compose <var>c</var> onto each of these subprograms:
 <code><var>a</var>*<var>c</var></code> will take <var>x</var> + <var>z</var> cycles, and
 <code><var>b</var>*<var>c</var></code> will take <var>y</var> + <var>z</var> cycles.
-Because addition is monotonic &mdash; if x &gt; y then x + z &gt; y + z &mdash; whichever branch was the "winner" of <code><var>a</var>+<var>b</var></code>,
+Because addition is monotonic — if x &gt; y then x + z &gt; y + z — whichever branch was the "winner" of <code><var>a</var>+<var>b</var></code>,
 the same branch will be the winner of <code>(<var>a</var>*<var>c</var>)+(<var>b</var>*<var>c</var>)</code>.
 Also, in this branch, the input set to <var>c</var> will still be <var>T</var>, the output set of the winning branch of <code><var>a</var>+<var>b</var></code>.
 Thus the final result will be the same.  (See below for much more on this.)</p>
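<p>The cycle-counting argument can be sketched in Python.  This toy model is an assumption of mine, not Cabra's actual semantics: a program is abstracted to a (cycles, result) pair, parallel composition yields whichever operand finishes in fewer cycles (ties going to the left operand, standing in for the "lexical order" tie-breaker), and sequential composition runs its operands back to back, so cycle counts add:</p>

```python
# Toy model: a program is a (cycles, result) pair.
def par(a, b):                       # a + b : the faster operand wins
    return a if a[0] <= b[0] else b

def seq(a, b):                       # a * b : cycles add, b runs last
    return (a[0] + b[0], b[1])

a, b, c = (3, 'A'), (5, 'B'), (2, 'C')

# Right-distributivity: (a + b) * c  vs  (a * c) + (b * c).
# Appending c adds the same z cycles to both branches, and addition
# is monotonic (x > y implies x + z > y + z), so the winner of
# a + b is also the winner of (a * c) + (b * c).
assert seq(par(a, b), c) == par(seq(a, c), seq(b, c))
```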
 
 <p>For Cabra programs to form a full-fledged ring, every program would need to have a unique additive inverse.
 That is, for every program <var>a</var>, there would need to be another program <var>b</var>,
-where <code><var>a</var>+<var>b</var></code> &equiv; <code>BOTTOM</code>.
+where <code><var>a</var>+<var>b</var></code> ≡ <code>BOTTOM</code>.
 But there can be no such program, as Cabra has been defined: if <var>a</var> terminates, then there's nothing <var>b</var> can do to
 stop <var>a</var> from terminating.</p>
 
 <p>So Cabra programs do not form a ring.  Further, it's unclear what would have to change to allow this.
 A simple instruction <code>HANGOTHERS</code> could be defined as sort of throwing a monkey wrench into every other
-currently executing program: <code><var>a</var>+HANGOTHERS</code> &equiv; <code>HANGOTHERS+<var>a</var></code> &equiv; <code>BOTTOM</code>.
+currently executing program: <code><var>a</var>+HANGOTHERS</code> ≡ <code>HANGOTHERS+<var>a</var></code> ≡ <code>BOTTOM</code>.
 But every element is supposed to have a <em>unique</em> additive inverse, and this doesn't do that, because <code>HANGOTHERS</code>
 causes every other program to hang.</p>
 
 
 <p>Also, both of these approaches would, naturally, require both of the parallel programs to terminate before
 their results could be merged to form the combined result (whether it be union or intersection.)  This means that if either of them was <code>BOTTOM</code>,
-the result would be <code>BOTTOM</code> &mdash; which in turn suggests that <code>BOTTOM</code>
+the result would be <code>BOTTOM</code> — which in turn suggests that <code>BOTTOM</code>
 would be an annihilator for addition as well as for multiplication, and that (at least in the union case) <code>SKIP</code>
 also serves as a neutral element for both multiplication and addition.
 This is less than swell, in terms of conforming to ring axioms, because one theorem of ring theory is that
 the multiplicative identity and the additive identity are equal iff the ring consists of <em>only one element</em>,
 which is clearly not the case here.</p>
 
-<p>(I think, also, that if you do manage to have a "ring" where 1 &equiv; 0, but where there are clearly elements that aren't
+<p>(I think, also, that if you do manage to have a "ring" where 1 ≡ 0, but where there are clearly elements that aren't
 either 0 or 1, it's that the additive operation and multiplicative operation are really the <em>same</em> operation
 under the semantic equivalence relation.  One of my very early designs for Cabra came up against this, and it's
 somewhat intuitive once you think about
 it: if two programs which <em>don't depend on each other at all</em> have some given result
 when one is executed after the other, they'll have the <em>same</em> result when
-they are executed concurrently &mdash; because they don't depend on each other at all!  Thus
+they are executed concurrently — because they don't depend on each other at all!  Thus
 in this case <code>+</code> = <code>*</code>
 and we're really talking about something that's a monoid or group or something, <em>not</em> a ring.)</p>
 
 
 <h3>Other Algebras</h3>
 
-<p>There are ways to address the problems of Cabra &mdash; or otherwise try to make it more interesting &mdash; 
+<p>There are ways to address the problems of Cabra — or otherwise try to make it more interesting —
 by formulating other algebras using other sets of axioms.</p>
 
 <p>Take, for example, Kleene algebras.
-A Kleene algebra is a dioid with an additional unary postfix operator *: <var>S</var> &rarr; <var>S</var>.
+A Kleene algebra is a dioid with an additional unary postfix operator *: <var>S</var> → <var>S</var>.
 It denotes a kind of additive closure: for any element <var>a</var>,
-<var>a</var>* &equiv; 0 + <var>a</var> + (<var>a</var> * <var>a</var>) + (<var>a</var> * <var>a</var> * <var>a</var>)
+<var>a</var>* ≡ 0 + <var>a</var> + (<var>a</var> * <var>a</var>) + (<var>a</var> * <var>a</var> * <var>a</var>)
 + (<var>a</var> * <var>a</var> * <var>a</var> * <var>a</var>) + ...  Kleene algebras are used for such things
 as the theory of regular expressions, where the Kleene star indicates "zero or more occurrences."</p>
 
 <p>If we try to extend Cabra from a dioid to a Kleene algebra by adding a Kleene star, we appear to get a nonplussing
 result.  Since <code><var>a</var></code> always takes fewer cycles than <code><var>a</var>*<var>a</var></code>
-(except when <var>a</var> is <code>SKIP</code>, and in that case <code><var>a</var>*<var>a</var></code> &equiv; <code><var>a</var></code>,)
+(except when <var>a</var> is <code>SKIP</code>, and in that case <code><var>a</var>*<var>a</var></code> ≡ <code><var>a</var></code>,)
 and since any <var>a</var> takes fewer cycles than <code>BOTTOM</code>, it appears that
-<code><var>a</var>*</code> &equiv; <code><var>a</var></code> in Cabra.</p>
+<code><var>a</var>*</code> ≡ <code><var>a</var></code> in Cabra.</p>
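<p>The collapse of the star can be illustrated in a toy cycle-counting model, where a program is abstracted to a (cycles, result) pair.  This representation, and the truncation of the star to finitely many terms, are illustrative assumptions of mine, not Cabra's actual semantics:</p>

```python
# a* is the parallel sum BOTTOM + a + a*a + a*a*a + ...
# Since a*a always takes at least as many cycles as a, and BOTTOM
# (infinitely many cycles) never wins, the fastest branch is a itself.
INF = float('inf')
BOTTOM = (INF, None)                 # the additive identity, 0

def par(a, b):                       # a + b : the faster operand wins
    return a if a[0] <= b[0] else b

def seq(a, b):                       # a * b : cycles add, b runs last
    return (a[0] + b[0], b[1])

def star(a, terms=5):                # truncated: BOTTOM + a + a*a + ...
    acc, power = BOTTOM, a
    for _ in range(terms):
        acc = par(acc, power)
        power = seq(power, a)
    return acc

assert star((3, 'A')) == (3, 'A')    # a* collapses to a
assert star((0, 'S')) == (0, 'S')    # SKIP, the zero-cycle edge case
```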
 
 <p>What does that get us?  I'm not sure.  I suspect nothing, unless there is some neat property of the
 ordering induced by the Kleene algebra that shows up.  But it doesn't seem worth checking, to me.</p>
 
 <h2>History</h2>
 
-<p>I came up with the idea to make a language whose programs formed a ring shortly after getting Burro basically done &mdash;
+<p>I came up with the idea to make a language whose programs formed a ring shortly after getting Burro basically done —
 fall or winter of 2005.  I didn't develop the idea until around the spring of 2007, when it occurred to me that parallel and sequential execution
 could work for the operators.  Developing the language itself (and compromising on a dioid) occurred in late summer and fall of 2007.</p>
 