Anonymous committed 02ca84f

Initial release of Orange Textable v1.3 as a registered Orange Canvas Add-on

Files changed (187)

+                    GNU GENERAL PUBLIC LICENSE
+                       Version 3, 29 June 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU General Public License is a free, copyleft license for
+software and other kinds of works.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works.  By contrast,
+the GNU General Public License is intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.  We, the Free Software Foundation, use the
+GNU General Public License for most of our software; it applies also to
+any other work released this way by its authors.  You can apply it to
+your programs, too.
+
+  When we speak of free software, we are referring to freedom, not
+price.  Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  To protect your rights, we need to prevent others from denying you
+these rights or asking you to surrender the rights.  Therefore, you have
+certain responsibilities if you distribute copies of the software, or if
+you modify it: responsibilities to respect the freedom of others.
+
+  For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must pass on to the recipients the same
+freedoms that you received.  You must make sure that they, too, receive
+or can get the source code.  And you must show them these terms so they
+know their rights.
+
+  Developers that use the GNU GPL protect your rights with two steps:
+(1) assert copyright on the software, and (2) offer you this License
+giving you legal permission to copy, distribute and/or modify it.
+
+  For the developers' and authors' protection, the GPL clearly explains
+that there is no warranty for this free software.  For both users' and
+authors' sake, the GPL requires that modified versions be marked as
+changed, so that their problems will not be attributed erroneously to
+authors of previous versions.
+
+  Some devices are designed to deny users access to install or run
+modified versions of the software inside them, although the manufacturer
+can do so.  This is fundamentally incompatible with the aim of
+protecting users' freedom to change the software.  The systematic
+pattern of such abuse occurs in the area of products for individuals to
+use, which is precisely where it is most unacceptable.  Therefore, we
+have designed this version of the GPL to prohibit the practice for those
+products.  If such problems arise substantially in other domains, we
+stand ready to extend this provision to those domains in future versions
+of the GPL, as needed to protect the freedom of users.
+
+  Finally, every program is threatened constantly by software patents.
+States should not allow patents to restrict development and use of
+software on general-purpose computers, but in those that do, we wish to
+avoid the special danger that patents applied to a free program could
+make it effectively proprietary.  To prevent this, the GPL assures that
+patents cannot be used to render the program non-free.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.
+
+                       TERMS AND CONDITIONS
+
+  0. Definitions.
+
+  "This License" refers to version 3 of the GNU General Public License.
+
+  "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+  "The Program" refers to any copyrightable work licensed under this
+License.  Each licensee is addressed as "you".  "Licensees" and
+"recipients" may be individuals or organizations.
+
+  To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy.  The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+  A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+  To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy.  Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+  To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies.  Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+  An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License.  If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+  1. Source Code.
+
+  The "source code" for a work means the preferred form of the work
+for making modifications to it.  "Object code" means any non-source
+form of a work.
+
+  A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+  The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form.  A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+  The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities.  However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work.  For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+  The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+  The Corresponding Source for a work in source code form is that
+same work.
+
+  2. Basic Permissions.
+
+  All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met.  This License explicitly affirms your unlimited
+permission to run the unmodified Program.  The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work.  This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+  You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force.  You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright.  Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+  Conveying under any other circumstances is permitted solely under
+the conditions stated below.  Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+  3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+  No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+  When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+  4. Conveying Verbatim Copies.
+
+  You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+  You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+  5. Conveying Modified Source Versions.
+
+  You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+    a) The work must carry prominent notices stating that you modified
+    it, and giving a relevant date.
+
+    b) The work must carry prominent notices stating that it is
+    released under this License and any conditions added under section
+    7.  This requirement modifies the requirement in section 4 to
+    "keep intact all notices".
+
+    c) You must license the entire work, as a whole, under this
+    License to anyone who comes into possession of a copy.  This
+    License will therefore apply, along with any applicable section 7
+    additional terms, to the whole of the work, and all its parts,
+    regardless of how they are packaged.  This License gives no
+    permission to license the work in any other way, but it does not
+    invalidate such permission if you have separately received it.
+
+    d) If the work has interactive user interfaces, each must display
+    Appropriate Legal Notices; however, if the Program has interactive
+    interfaces that do not display Appropriate Legal Notices, your
+    work need not make them do so.
+
+  A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit.  Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+  6. Conveying Non-Source Forms.
+
+  You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+    a) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by the
+    Corresponding Source fixed on a durable physical medium
+    customarily used for software interchange.
+
+    b) Convey the object code in, or embodied in, a physical product
+    (including a physical distribution medium), accompanied by a
+    written offer, valid for at least three years and valid for as
+    long as you offer spare parts or customer support for that product
+    model, to give anyone who possesses the object code either (1) a
+    copy of the Corresponding Source for all the software in the
+    product that is covered by this License, on a durable physical
+    medium customarily used for software interchange, for a price no
+    more than your reasonable cost of physically performing this
+    conveying of source, or (2) access to copy the
+    Corresponding Source from a network server at no charge.
+
+    c) Convey individual copies of the object code with a copy of the
+    written offer to provide the Corresponding Source.  This
+    alternative is allowed only occasionally and noncommercially, and
+    only if you received the object code with such an offer, in accord
+    with subsection 6b.
+
+    d) Convey the object code by offering access from a designated
+    place (gratis or for a charge), and offer equivalent access to the
+    Corresponding Source in the same way through the same place at no
+    further charge.  You need not require recipients to copy the
+    Corresponding Source along with the object code.  If the place to
+    copy the object code is a network server, the Corresponding Source
+    may be on a different server (operated by you or a third party)
+    that supports equivalent copying facilities, provided you maintain
+    clear directions next to the object code saying where to find the
+    Corresponding Source.  Regardless of what server hosts the
+    Corresponding Source, you remain obligated to ensure that it is
+    available for as long as needed to satisfy these requirements.
+
+    e) Convey the object code using peer-to-peer transmission, provided
+    you inform other peers where the object code and Corresponding
+    Source of the work are being offered to the general public at no
+    charge under subsection 6d.
+
+  A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+  A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling.  In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage.  For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product.  A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+  "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source.  The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+  If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information.  But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+  The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed.  Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+  Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+  7. Additional Terms.
+
+  "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law.  If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+  When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it.  (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.)  You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+  Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+    a) Disclaiming warranty or limiting liability differently from the
+    terms of sections 15 and 16 of this License; or
+
+    b) Requiring preservation of specified reasonable legal notices or
+    author attributions in that material or in the Appropriate Legal
+    Notices displayed by works containing it; or
+
+    c) Prohibiting misrepresentation of the origin of that material, or
+    requiring that modified versions of such material be marked in
+    reasonable ways as different from the original version; or
+
+    d) Limiting the use for publicity purposes of names of licensors or
+    authors of the material; or
+
+    e) Declining to grant rights under trademark law for use of some
+    trade names, trademarks, or service marks; or
+
+    f) Requiring indemnification of licensors and authors of that
+    material by anyone who conveys the material (or modified versions of
+    it) with contractual assumptions of liability to the recipient, for
+    any liability that these contractual assumptions directly impose on
+    those licensors and authors.
+
+  All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10.  If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term.  If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+  If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+  Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+  8. Termination.
+
+  You may not propagate or modify a covered work except as expressly
+provided under this License.  Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+  However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+  Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+  Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License.  If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+  9. Acceptance Not Required for Having Copies.
+
+  You are not required to accept this License in order to receive or
+run a copy of the Program.  Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance.  However,
+nothing other than this License grants you permission to propagate or
+modify any covered work.  These actions infringe copyright if you do
+not accept this License.  Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+  10. Automatic Licensing of Downstream Recipients.
+
+  Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License.  You are not responsible
+for enforcing compliance by third parties with this License.
+
+  An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations.  If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+  You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License.  For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+  11. Patents.
+
+  A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based.  The
+work thus licensed is called the contributor's "contributor version".
+
+  A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version.  For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+  Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+  In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement).  To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+  If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients.  "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+  If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+  A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License.  You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+  Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+  12. No Surrender of Others' Freedom.
+
+  If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all.  For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+  13. Use with the GNU Affero General Public License.
+
+  Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU Affero General Public License into a single
+combined work, and to convey the resulting work.  The terms of this
+License will continue to apply to the part which is the covered work,
+but the special requirements of the GNU Affero General Public License,
+section 13, concerning interaction through a network will apply to the
+combination as such.
+
+  14. Revised Versions of this License.
+
+  The Free Software Foundation may publish revised and/or new versions of
+the GNU General Public License from time to time.  Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+  Each version is given a distinguishing version number.  If the
+Program specifies that a certain numbered version of the GNU General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation.  If the Program does not specify a version number of the
+GNU General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+  If the Program specifies that a proxy can decide which future
+versions of the GNU General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+  Later license versions may give you additional or different
+permissions.  However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+  15. Disclaimer of Warranty.
+
+  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. Limitation of Liability.
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program.  It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+
+    You should have received a copy of the GNU General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If the program does terminal interaction, make it output a short
+notice like this when it starts in an interactive mode:
+
+    <program>  Copyright (C) <year>  <name of author>
+    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+    This is free software, and you are welcome to redistribute it
+    under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License.  Of course, your program's commands
+might be different; for a GUI interface, you would use an "about box".
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU GPL, see
+<http://www.gnu.org/licenses/>.
+
+  The GNU General Public License does not permit incorporating your program
+into proprietary programs.  If your program is a subroutine library, you
+may consider it more useful to permit linking proprietary applications with
+the library.  If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.  But first, please read
+<http://www.gnu.org/philosophy/why-not-lgpl.html>.
+recursive-include _textable *.png
+recursive-include docs/rst *
+include docs/Makefile docs/make.bat
+include docs/user_guide/guide_utilisateur.pdf
+include LICENSE
+include *.py *.rst
+Orange Textable
+===============
+
+Textable is an add-on for the Orange_ data mining software package. It enables
+users to build data tables from text data, by means of a flexible and intuitive
+interface. In particular, it offers the following features:
+
+- import text data from various sources
+- apply systematic recoding operations
+- apply analytical processes such as segmentation and annotation
+- manually, automatically or randomly select unit subsets
+- build concordances and collocation lists
+- compute quantitative indices such as frequency and complexity measures
+
+.. _Orange: http://orange.biolab.si/
+
+Textable was designed and implemented by `LangTech Sarl <http://langtech.ch>`_ on behalf of the department of language and information sciences (SLI_) at the `University of Lausanne <http://www.unil.ch>`_.
+
+.. _SLI: http://www.unil.ch/sli
+

_textable/__init__.py

Empty file added.

_textable/widgets/LTTL/Address.py

+#=============================================================================
+# Class LTTL.Address, v0.05
+# Copyright 2012-2013 LangTech Sarl (info@langtech.ch)
+#=============================================================================
+# This file is part of the LTTL package v1.3
+#
+# LTTL v1.3 is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# LTTL v1.3 is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with LTTL v1.3. If not, see <http://www.gnu.org/licenses/>.
+#=============================================================================
+
+class Address():
+
+    """A class for storing the address of a segment in LTTL."""
+
+    def __init__(self, str_index, start=None, end=None):
+        """Initialize an Address instance"""
+        self.str_index  = str_index
+        self.start      = start
+        self.end        = end
+
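The Address class above is a plain data holder: str_index points into LTTL's shared string store (Segmentation.data), while start and end delimit the segment within that string; when both are None, the address covers the whole string. A minimal self-contained sketch of how such an address resolves to text (the class is reproduced for illustration, and `data` is a stand-in for the real store):

```python
# Stand-in for LTTL's shared string store (Segmentation.data).
data = ["a simple example text"]

class Address(object):
    """Address of a segment: a string index plus optional start/end offsets."""
    def __init__(self, str_index, start=None, end=None):
        self.str_index = str_index
        self.start = start
        self.end = end

def resolve(address):
    """Return the text an address points to (the whole string if no offsets)."""
    return data[address.str_index][address.start:address.end]

print(resolve(Address(0)))        # the whole string
print(resolve(Address(0, 2, 8)))  # "simple"
```

Slicing with `None` bounds returns the full string, which is why an Address built with only a str_index denotes the entire stored text.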

_textable/widgets/LTTL/Input.py

+#=============================================================================
+# Class LTTL.Input, v0.09
+# Copyright 2012-2013 LangTech Sarl (info@langtech.ch)
+#=============================================================================
+# This file is part of the LTTL package v1.3
+#
+# LTTL v1.3 is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# LTTL v1.3 is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with LTTL v1.3. If not, see <http://www.gnu.org/licenses/>.
+#=============================================================================
+
+from Segment        import Segment
+from Address        import Address
+from Segmentation   import Segmentation
+
+class Input(Segmentation):
+
+    """A class for representing a segmentation that stores its data in LTTL."""
+    
+    def __init__(self, text=None, label=u'my_input'):
+        """Initialize an Input instance"""
+        Segmentation.data.append(None)
+        str_index = len(Segmentation.data) - 1
+        Segmentation.__init__(self, [Segment(Address(str_index))])
+        self.update(text, label)
+
+    def update(self, text=None, label=None):
+        """Set text and label values associated with an Input"""
+        Segmentation.data[self.segments[0].address.str_index] = text
+        self.label = label
+    
+    def clear(self):
+        """Reset text and label values associated with an Input to None"""
+        self.update()
+
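Input's key trick is that it appends a fresh slot to the class-level Segmentation.data list at construction time and remembers that slot's index, so update() can later replace the stored text in place while every segment address keeps pointing at the same slot. The registration pattern can be sketched independently of LTTL (Store is a simplified stand-in for Segmentation):

```python
class Store(object):
    """Stand-in for Segmentation: a class-level list shared by all inputs."""
    data = []

class Input(Store):
    def __init__(self, text=None, label='my_input'):
        # Reserve a slot in the shared store and remember its index.
        Store.data.append(None)
        self.str_index = len(Store.data) - 1
        self.update(text, label)

    def update(self, text=None, label=None):
        # Replace the stored text in place; addresses into this slot stay valid.
        Store.data[self.str_index] = text
        self.label = label

    def clear(self):
        # Resetting is just an update with the default None values.
        self.update()

a = Input('first text', 'a')
b = Input('second text', 'b')
print(Store.data)   # ['first text', 'second text']
a.clear()
print(Store.data)   # [None, 'second text']
```

Because the store is class-level, clearing one Input leaves the slots of all others untouched; this mirrors how the real class keeps indices stable across inputs.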

_textable/widgets/LTTL/Processor.py

+#=============================================================================
+# Class LTTL.Processor, v0.14
+# Copyright 2012-2013 LangTech Sarl (info@langtech.ch)
+#=============================================================================
+# This file is part of the LTTL package v1.3
+#
+# LTTL v1.3 is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# LTTL v1.3 is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with LTTL v1.3. If not, see <http://www.gnu.org/licenses/>.
+#=============================================================================
+
+from __future__   import division
+import math
+
+from Segmentation import Segmentation
+from Table        import *
+from Utils        import (
+        get_average,
+        get_variety,
+        tuple_to_simple_dict,
+        sample_dict,
+        prepend_unit_with_category,
+        generate_random_annotation_key,
+        get_unused_char_in_segmentation,
+)
+
+class Processor(object):
+
+    """A class for building tables based on segmentations."""
+
+    def count_in_context(
+            self,
+            units               = None,
+            contexts            = None,
+            progress_callback   = None,
+    ):
+        """Count units in contexts.
+
+        Parameter units is a dict with the following keys and values:
+        KEY                     VALUE                           DEFAULT
+        segmentation            unit segmentation               None
+        annotation_key          annotation to be counted        None
+        seq_length              length of unit sequences        1
+        intra_seq_delimiter     string for joining sequences    u'#'
+
+        Parameter contexts is a dict with the following keys and values:
+        KEY                     VALUE                           DEFAULT
+        segmentation            context segmentation            None
+        annotation_key          annotation to be counted        None
+        merge                   merge contexts together?        False
+
+        Returns a PivotCrosstab table.
+        """
+
+        default_units = {
+                'segmentation':         None,
+                'annotation_key':       None,
+                'seq_length':           1,
+                'intra_seq_delimiter':  u'#',
+        }
+        default_contexts = {
+                'segmentation':         None,
+                'annotation_key':       None,
+                'merge':                False,
+        }
+        if units is not None:
+            default_units.update(units)
+        units = default_units
+        if contexts is not None:
+            default_contexts.update(contexts)
+        contexts = default_contexts
+
+        freq            = {}
+        context_types   = []
+        unit_types      = []
+
+        # CASE 1: context segmentation is specified...
+        if (
+                contexts['segmentation'] is not None
+            and units['segmentation']    is not None
+        ):
+
+            # Set default context type.
+            context_type = contexts['segmentation'].label or u'__data__'
+
+            # Optimization...
+            context_annotation_key      = contexts['annotation_key']
+            context_segmentation        = contexts['segmentation']
+            unit_segmentation           = units['segmentation']
+            unit_annotation_key         = units['annotation_key']
+            unit_seq_length             = units['seq_length']
+            seq_join                    = units['intra_seq_delimiter'].join
+
+            # CASE 1A: unit sequence length is greater than 1...
+            if unit_seq_length > 1:
+
+                # Get the list of contexts in final format...
+                if context_annotation_key is not None:
+                    context_list = [
+                            c.annotations.get(
+                                    context_annotation_key,
+                                    u'__none__',
+                            )
+                                    for c in context_segmentation
+                    ]
+                else:
+                    context_list = [
+                            c.get_content() for c in context_segmentation
+                    ]
+
+                # Get the list of units in final format...
+                if unit_annotation_key is not None:
+                    unit_list = [
+                            u.annotations.get(
+                                    unit_annotation_key,
+                                    u'__none__',
+                            )
+                                    for u in unit_segmentation
+                    ]
+                else:
+                    unit_list = [u.get_content() for u in unit_segmentation]
+
+                # Loop over context token indices...
+                for context_index in xrange(len(context_segmentation)):
+
+                    # Get context token...
+                    context_token = context_segmentation[context_index]
+
+                    # Get and store context type...
+                    if not contexts['merge']:
+                        context_type = context_list[context_index]
+                        if context_type not in context_types:
+                            context_types.append(context_type)
+
+                    # Loop over contained unit sequences
+                    for unit_seq_index in \
+                            context_token.get_contained_sequence_indices(
+                                    unit_segmentation,
+                                    unit_seq_length,
+                            ):
+
+                        # Get unit type...
+                        unit_type = seq_join(unit_list[
+                                unit_seq_index
+                                :
+                                unit_seq_index + unit_seq_length
+                        ])
+
+                        # Store unit type...
+                        if unit_type not in unit_types:
+                            unit_types.append(unit_type)
+
+                        # Increment count of context-unit pair...
+                        freq[(context_type, unit_type)] = freq.get(
+                                (context_type, unit_type),
+                                0
+                        ) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+                # Store default context type if needed...
+                if len(freq) > 0 and len(context_types) == 0:
+                    context_types.append(context_type)
+
+            # CASE 1B: unit sequence length is 1...
+            else:
+
+                # Loop over context tokens (=containing segments)
+                for context_token in contexts['segmentation']:
+
+                    # Determine if context types are surface or annotations.
+                    if not contexts['merge']:
+                        if context_annotation_key is not None:
+                            context_type = context_token.annotations.get(
+                                    context_annotation_key,
+                                    u'__none__',        # Default annotation
+                            )
+                        else:
+                            context_type = context_token.get_content()
+
+                    # Loop over unit tokens (=contained segments)
+                    for unit_token in \
+                            context_token.get_contained_segments(
+                                    unit_segmentation
+                            ):
+
+                        # Determine if unit types are surface or anno.
+                        if unit_annotation_key:
+                            unit_type = unit_token.annotations.get(
+                                    unit_annotation_key,
+                                    u'__none__',    # Default annotation
+                            )
+                        else:
+                            unit_type = unit_token.get_content()
+
+                        # Store context and unit type...
+                        if context_type not in context_types:
+                            context_types.append(context_type)
+                        if unit_type not in unit_types:
+                            unit_types.append(unit_type)
+
+                        # Increment count of context-unit pair...
+                        freq[(context_type, unit_type)] = freq.get(
+                                (context_type, unit_type),
+                                0
+                        ) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+        # CASE 2: no context segmentation is specified...
+        elif (units['segmentation'] is not None):
+
+            # Set default (unique) context type to '__data__'...
+            context_type = units['segmentation'].label or u'__data__'
+            context_types.append(context_type)
+
+            # Optimization...
+            unit_annotation_key         = units['annotation_key']
+            unit_segmentation           = units['segmentation']
+            unit_segmentation_length    = len(units['segmentation'])
+            unit_seq_length             = units['seq_length']
+            seq_join                    = units['intra_seq_delimiter'].join
+
+            # CASE 2A: unit sequence length is greater than 1...
+            if unit_seq_length > 1:
+
+                # Get the list of units in final format...
+                if unit_annotation_key is not None:
+                    unit_list = [
+                            u.annotations.get(
+                                    unit_annotation_key,
+                                    u'__none__',
+                            )
+                                    for u in unit_segmentation
+                    ]
+                else:
+                    unit_list = [u.get_content() for u in unit_segmentation]
+
+                # Loop over unit sequences (=contained segments)
+                for unit_index in xrange(
+                        unit_segmentation_length - (unit_seq_length - 1)
+                ):
+                
+                    # Get unit type...
+                    unit_type = seq_join(unit_list[
+                            unit_index : unit_index + unit_seq_length
+                    ])
+
+                    # Store unit type...
+                    if unit_type not in unit_types:
+                        unit_types.append(unit_type)
+
+                    # Increment count of context-unit pair...
+                    freq[(context_type, unit_type)] = freq.get(
+                            (context_type, unit_type),
+                            0
+                    ) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+            # CASE 2B: unit sequence length is 1...
+            else:
+
+                # Get the list of units in final format...
+                if unit_annotation_key is not None:
+                    unit_list = [
+                            u.annotations.get(
+                                    unit_annotation_key,
+                                    u'__none__',
+                            )
+                                    for u in unit_segmentation
+                    ]
+                else:
+                    unit_list = [u.get_content() for u in unit_segmentation]
+
+                # Get unit types...
+                unit_types = list(set(unit_list))
+
+                # Loop over unit tokens (=contained segments)
+                for unit_type in unit_list:
+
+                    # Increment count of context-unit pair...
+                    type_pair       = (context_type, unit_type)
+                    freq[type_pair] = freq.get(type_pair, 0) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+        # Create pivot crosstab...
+        return(PivotCrosstab(
+                context_types,
+                unit_types,
+                freq,
+                {
+                    'id':   u'__unit__',
+                    'type': u'string',
+                },
+                {
+                    'id':   u'__context__',
+                    'type': u'string',
+                },
+                dict([(u, u'continuous') for u in unit_types]),
+                None,
+                0,
+                None,
+        ))
+
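Stripped of the segmentation machinery, the core of count_in_context is a contingency count: each unit token is paired with the type of the context that contains it, and a dict keyed by (context_type, unit_type) accumulates the frequencies that end up in the PivotCrosstab. A deliberately flattened sketch (real contexts are found by segment containment, not by position; the sample lists are invented):

```python
# Hypothetical flat data: one context type per unit token.
units    = ['a', 'b', 'a', 'a', 'c', 'b']
contexts = ['ctx1', 'ctx1', 'ctx1', 'ctx2', 'ctx2', 'ctx2']

freq = {}
context_types, unit_types = [], []
for context_type, unit_type in zip(contexts, units):
    # Record types in order of first appearance, as the Processor does.
    if context_type not in context_types:
        context_types.append(context_type)
    if unit_type not in unit_types:
        unit_types.append(unit_type)
    # Increment the count of this context-unit pair.
    pair = (context_type, unit_type)
    freq[pair] = freq.get(pair, 0) + 1

print(freq[('ctx1', 'a')])   # 2
print(context_types)         # ['ctx1', 'ctx2']
print(unit_types)            # ['a', 'b', 'c']
```

The ordered type lists and the sparse pair dict are exactly the three ingredients handed to the PivotCrosstab constructor: row labels, column labels, and cell values.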
+
+    def count_in_window(
+            self,
+            units               = None,
+            window_size         = 1,
+            progress_callback   = None,
+    ):
+        """Count units in sliding window.
+
+        Parameter units is a dict with the following keys and values:
+        KEY                     VALUE                            DEFAULT
+        segmentation            unit segmentation                None
+        annotation_key          annotation to be counted         None
+        seq_length              length of unit sequences         1
+        intra_seq_delimiter     string for joining sequences     u'#'
+
+        Returns a PivotCrosstab table.
+        """
+
+        default_units = {
+                'segmentation':             None,
+                'annotation_key':           None,
+                'seq_length':               1,
+                'intra_seq_delimiter':      u'#',
+        }
+        if units is not None:
+            default_units.update(units)
+        units = default_units
+
+        freq        = {}
+        unit_types  = []
+        window_type = 1
+
+        if (
+                units['segmentation'] is not None
+            and window_size <= len(units['segmentation'])
+            and window_size >= units['seq_length']
+        ):
+
+            # Optimization...
+            unit_segmentation   = units['segmentation']
+            unit_annotation_key = units['annotation_key']
+
+            # Get the list of units in final format (content or annotation)...
+            if unit_annotation_key is not None:
+                unit_list = [
+                        unit_token.annotations.get(
+                                unit_annotation_key,
+                                u'__none__',            # Default annotation
+                        )
+                                for unit_token in unit_segmentation
+                ]
+            else:
+                unit_list = [u.get_content() for u in unit_segmentation]
+
+            # CASE 1: unit sequence length is greater than 1...
+            if units['seq_length'] > 1:
+
+                # Optimization...
+                seq_join        = units['intra_seq_delimiter'].join
+                unit_seq_length = units['seq_length']
+
+                # Get counts for first window...
+                first_window = unit_list[:window_size]
+                window_freq = {}
+                for unit_index in xrange(window_size - (unit_seq_length - 1)):
+                    unit_type = seq_join(first_window[
+                       unit_index : unit_index + unit_seq_length
+                    ])
+                    if unit_type not in unit_types:
+                        unit_types.append(unit_type)
+                    window_freq[unit_type] = window_freq.get(unit_type, 0) + 1
+
+                # Update main counts...
+                freq = dict(
+                    [((u'1', k), v) for (k, v) in window_freq.iteritems()]
+                )
+
+                if progress_callback:
+                    progress_callback()
+
+                # Loop over other window indices...
+                for window_index in xrange(
+                        1,
+                        len(units['segmentation']) - (window_size - 1)
+                ):
+
+                    # Decrement count of first unit in previous window...
+                    window_freq[seq_join(unit_list[
+                            window_index - 1
+                            :
+                            window_index + unit_seq_length - 1
+                    ])] -= 1
+
+                    # Increment count of last unit in current window...
+                    new_unit = seq_join(unit_list[
+                        window_index + window_size - unit_seq_length
+                        :
+                        window_index + window_size
+                    ])
+                    window_freq[new_unit] = window_freq.get(new_unit, 0) + 1
+                    if new_unit not in unit_types:
+                        unit_types.append(new_unit)
+
+                    # Get window type...
+                    window_type = window_index + 1
+                    window_str  = unicode(window_type)
+
+                    # Update main counts...
+                    freq.update(dict([
+                            ((window_str, k), v)
+                                    for (k, v) in window_freq.iteritems()
+                    ]))
+
+                    if progress_callback:
+                        progress_callback()
+
+            # CASE 2: unit sequence length is 1...
+            else:
+
+                # Get unit types...
+                unit_types = list(set(unit_list))
+
+                # Get counts for first window...
+                first_window = unit_list[:window_size]
+                window_freq = {}
+                for unit_token in first_window:
+                    window_freq[unit_token] = window_freq.get(unit_token, 0) + 1
+
+                # Update main counts...
+                freq = dict(
+                    [((u'1', k), v) for (k, v) in window_freq.iteritems()]
+                )
+
+                if progress_callback:
+                    progress_callback()
+
+                # Loop over other window indices...
+                for window_index in xrange(
+                        1,
+                        len(units['segmentation']) - (window_size - 1)
+                ):
+
+                    # Decrement count of first unit in previous window...
+                    window_freq[unit_list[window_index-1]] -= 1
+
+                    # Increment count of last unit in current window...
+                    new_unit = unit_list[window_index + window_size - 1]
+                    window_freq[new_unit] = window_freq.get(new_unit, 0) + 1
+
+                    # Get window type...
+                    window_type = window_index + 1
+                    window_str  = unicode(window_type)
+
+                    # Update main counts...
+                    freq.update(dict([
+                            ((window_str, k), v)
+                                    for (k, v) in window_freq.iteritems()
+                    ]))
+
+                    if progress_callback:
+                        progress_callback()
+
+        # Create pivot crosstab...
+        return(PivotCrosstab(
+                [unicode(i) for i in range(1, window_type+1)],
+                unit_types,
+                freq,
+                {
+                    'id':   u'__unit__',
+                    'type': u'string',
+                },
+                {
+                    'id':   u'__context__',
+                    'type': u'continuous',
+                },
+                dict([(u, u'continuous') for u in unit_types]),
+                None,
+                0,
+                None,
+        ))
+
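count_in_window avoids recounting every window from scratch: after a full count of the first window, each slide decrements the unit that drops off the left edge and increments the one that enters on the right, an O(1) update per step. The update rule in isolation (the token list and window size are illustrative, and modern dict syntax replaces iteritems):

```python
tokens = ['a', 'b', 'a', 'c', 'b', 'a']
window_size = 3

# Full count for the first window only...
window_freq = {}
for token in tokens[:window_size]:
    window_freq[token] = window_freq.get(token, 0) + 1
freq = {('1', k): v for k, v in window_freq.items()}

# ...then slide: drop the leftmost token, add the new rightmost one.
for i in range(1, len(tokens) - window_size + 1):
    window_freq[tokens[i - 1]] -= 1
    new = tokens[i + window_size - 1]
    window_freq[new] = window_freq.get(new, 0) + 1
    freq.update({(str(i + 1), k): v for k, v in window_freq.items()})

print(freq[('1', 'a')])   # 2  (window ['a', 'b', 'a'])
print(freq[('4', 'b')])   # 1  (window ['c', 'b', 'a'])
```

The sequence-of-length-n variant in CASE 1 applies the same rule, except that the dropped and added items are joined n-grams rather than single tokens.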
+
+
+    def count_in_chain(
+            self,
+            units               = None,
+            contexts            = None,
+            progress_callback   = None,
+    ):
+        """Count units given left and/or right context.
+
+        Parameter units is a dict with the following keys and values:
+        KEY                     VALUE                             DEFAULT
+        segmentation            unit segmentation                 None
+        annotation_key          annotation to be counted          None
+        seq_length              length of unit sequences          1
+        intra_seq_delimiter     string for joining sequences      u'#'
+
+        Parameter contexts is a dict with the following keys and values:
+        KEY                     VALUE                             DEFAULT
+        left_size               size of left context              1
+        right_size              size of right context             0
+        unit_pos_marker         string indicating unit position   u'_'
+
+        Returns a PivotCrosstab table.
+        """
+
+        default_units = {
+                'segmentation':         None,
+                'annotation_key':       None,
+                'seq_length':           1,
+                'intra_seq_delimiter':  u'#',
+        }
+        default_contexts = {
+                'left_size':            1,
+                'right_size':           0,
+                'unit_pos_marker':      u'_',
+        }
+        if units is not None:
+            default_units.update(units)
+        units = default_units
+        if contexts is not None:
+            default_contexts.update(contexts)
+        contexts = default_contexts
+
+        freq            = {}
+        context_types   = []
+        unit_types      = []
+
+        # Optimization...
+        unit_segmentation   = units['segmentation']
+        unit_annotation_key = units['annotation_key']
+        unit_seq_length     = units['seq_length']
+        context_left_size   = contexts['left_size']
+        context_right_size  = contexts['right_size']
+        unit_pos_marker     = contexts['unit_pos_marker']
+        seq_join            = units['intra_seq_delimiter'].join
+        window_size = context_left_size  \
+                    + context_right_size \
+                    + unit_seq_length
+
+        if (
+                unit_segmentation is not None
+            and window_size <= len(unit_segmentation)
+        ):
+
+            # Get the list of units in final format (content or annotation)...
+            if unit_annotation_key is not None:
+                unit_list = [
+                        unit_token.annotations.get(
+                                unit_annotation_key,
+                                u'__none__',            # Default annotation
+                        )
+                                for unit_token in unit_segmentation
+                ]
+            else:
+                unit_list = [u.get_content() for u in unit_segmentation]
+
+            # CASE 1: unit sequence length is greater than 1...
+            if unit_seq_length > 1:
+
+                # Loop over window indices...
+                for window_index in xrange(
+                        len(units['segmentation']) - (window_size - 1)
+                ):
+
+                    # Get context type...
+                    context_type = '%s%s%s' % (
+                            seq_join(unit_list[
+                                    window_index
+                                    :
+                                    window_index + context_left_size
+                            ]),
+                            unit_pos_marker,
+                            seq_join(unit_list[
+                                      window_index
+                                    + context_left_size
+                                    + unit_seq_length
+                                    :
+                                    window_index + window_size
+                            ])
+                    )
+
+                    # Get unit type...
+                    unit_type = seq_join(unit_list[
+                            window_index + context_left_size
+                            :
+                            window_index + context_left_size + unit_seq_length
+                    ])
+
+                    # Store context and unit type...
+                    if context_type not in context_types:
+                        context_types.append(context_type)
+                    if unit_type not in unit_types:
+                        unit_types.append(unit_type)
+
+                    # Increment count of context-unit pair...
+                    freq[(context_type, unit_type)] = freq.get(
+                            (context_type, unit_type),
+                            0
+                    ) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+            # CASE 2: unit sequence length is 1...
+            else:
+
+                # Get unit types...
+                unit_types = list(set(unit_list[
+                        context_left_size
+                        :
+                        len(unit_list) - context_right_size
+                ]))
+
+                # Loop over window indices...
+                for window_index in xrange(
+                        len(unit_segmentation) - (window_size - 1)
+                ):
+
+                    # Get context type...
+                    context_type = '%s%s%s' % (
+                            seq_join(unit_list[
+                                    window_index
+                                    :
+                                    window_index + context_left_size
+                            ]),
+                            unit_pos_marker,
+                            seq_join(unit_list[
+                                    window_index + context_left_size + 1
+                                    :
+                                    window_index + window_size
+                            ])
+                    )
+
+                    # Get unit type...
+                    unit_type = unit_list[window_index + context_left_size]
+
+                    # Store context and unit type...
+                    if context_type not in context_types:
+                        context_types.append(context_type)
+
+                    # Increment count of context-unit pair...
+                    freq[(context_type, unit_type)] = freq.get(
+                            (context_type, unit_type),
+                            0
+                    ) + 1
+
+                    if progress_callback:
+                        progress_callback()
+
+        # Create pivot crosstab...
+        return PivotCrosstab(
+                context_types,
+                unit_types,
+                freq,
+                {
+                    'id':   u'__unit__',
+                    'type': u'string',
+                },
+                {
+                    'id':   u'__context__',
+                    'type': u'string',
+                },
+                dict([(u, u'continuous') for u in unit_types]),
+                None,
+                0,
+                None,
+        )
+
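The sliding-window counting above (CASE 2, unit sequence length 1) can be sketched in isolation. This is a simplified stand-in, not part of the Textable API: `count_in_window`, its arguments, and the `marker` default are illustrative names, and the units are assumed to be already reduced to a plain list of type strings.

```python
# Minimal sketch of the sliding-window context/unit counting above,
# assuming unit_seq_length == 1 and units given as a list of type strings.
def count_in_window(unit_list, left_size=1, right_size=1, marker='_'):
    window_size = left_size + right_size + 1
    freq = {}
    for i in range(len(unit_list) - window_size + 1):
        # Left and right context halves, joined around the position marker...
        left = ''.join(unit_list[i:i + left_size])
        right = ''.join(unit_list[i + left_size + 1:i + window_size])
        context_type = '%s%s%s' % (left, marker, right)
        # The unit sits between the two context halves...
        unit_type = unit_list[i + left_size]
        # Increment count of context-unit pair...
        freq[(context_type, unit_type)] = freq.get(
            (context_type, unit_type), 0
        ) + 1
    return freq

freq = count_in_window(list('ababa'))
# 'b' occurs twice in context 'a_a', 'a' once in context 'b_b'.
```

The real method additionally handles unit sequences longer than one token (CASE 1) and packages the counts into a PivotCrosstab.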
+
+    def length_in_context(
+            self,
+            units               = None,
+            averaging           = None,
+            contexts            = None,
+            progress_callback   = None,
+    ):
+        """Compute segmentation length (or average unit length) in contexts.
+
+        Parameter averaging is a dict with the following keys and values:
+        KEY                     VALUE                           DEFAULT
+        segmentation            averaging unit segmentation     None
+        std_deviation           compute standard deviation      False
+
+        Parameter contexts is a dict with the following keys and values:
+        KEY                     VALUE                           DEFAULT
+        segmentation            context segmentation            None
+        annotation_key          annotation to be used           None
+        merge                   merge contexts together?        False
+
+        NB: When some form of averaging is performed on large segmentations,
+        execution can be *dramatically* slower if the standard deviation is
+        computed.
+
+        Returns a Table.
+        """
+
+        default_contexts = {
+                'segmentation':         None,
+                'annotation_key':       None,
+                'merge':                False,
+        }
+        default_averaging = {
+                'segmentation':         None,
+                'std_deviation':        False,
+        }
+        if contexts is not None:
+            default_contexts.update(contexts)
+        contexts = default_contexts
+        if averaging is not None:
+            default_averaging.update(averaging)
+        averaging = default_averaging
+
+        values          = {}
+        context_types   = []
+        col_ids         = []
+
+        # CASE 1: context segmentation is specified...
+        if contexts['segmentation'] is not None and units is not None:
+
+            # Optimization...
+            context_annotation_key  = contexts['annotation_key']
+            context_segmentation    = contexts['segmentation']
+
+            # If contexts should be merged...
+            if contexts['merge']:
+
+                # Set default context type.
+                context_type = contexts['segmentation'].label or u'__data__'
+
+            # Else get the list of contexts in final format...
+            else:
+                if context_annotation_key is not None:
+                    context_list = [
+                            c.annotations.get(
+                                    context_annotation_key,
+                                    u'__none__',
+                            )
+                                    for c in context_segmentation
+                    ]
+                else:
+                    context_list = [
+                            c.get_content() for c in context_segmentation
+                    ]
+
+            # CASE 1A: averaging units are specified...
+            if averaging['segmentation'] is not None:
+
+                # Optimization...
+                averaging_segmentation  = averaging['segmentation']
+                averaging_std_deviation = averaging['std_deviation']
+
+                # CASE 1A.i: Standard deviation should be computed...
+                if averaging_std_deviation:
+
+                    lengths = {}
+
+                    # Initialize lengths if contexts should be merged...
+                    if contexts['merge']:
+                        lengths[context_type] = []
+
+                    # Loop over context token indices...
+                    for context_index in xrange(
+                            len(context_segmentation)
+                    ):
+
+                        # Get context token...
+                        context_token = context_segmentation[context_index]
+
+                        # Get and store context type...
+                        if not contexts['merge']:
+                            context_type = context_list[context_index]
+                            if context_type not in context_types:
+                                context_types.append(context_type)
+                                lengths[context_type] = []
+
+                        # Optimization...
+                        lengths_append = lengths[context_type].append
+
+                        # Loop over contained averaging units...
+                        for averaging_unit in \
+                                context_token.get_contained_segments(
+                                        averaging_segmentation
+                                ):
+
+                            # Add length of this averaging unit...
+                            lengths_append(
+                                    len(averaging_unit.get_contained_segments(
+                                            units
+                                    ))
+                            )
+
+                        if progress_callback:
+                            progress_callback()
+
+                    # If contexts should be merged...
+                    if contexts['merge']:
+
+                        # Get average length and stdev...
+                        (average, stdev) = get_average(
+                            lengths[context_type]
+                        )
+                        values[
+                                (context_type, u'__length_average__')
+                        ] = average
+                        values[
+                                (context_type, u'__length_std_deviation__')
+                        ] = stdev
+                        values[
+                                (context_type, u'__length_count__')
+                        ] = len(lengths[context_type])
+
+                    # Otherwise loop over context types...
+                    else:
+
+                        for context_type in context_types:
+
+                            # Get average length and stdev for this type...
+                            (average, stdev) = get_average(
+                                lengths[context_type]
+                            )
+                            values[
+                                    (context_type, u'__length_average__')
+                            ] = average
+                            values[
+                                    (context_type, u'__length_std_deviation__')
+                            ] = stdev
+                            values[
+                                    (context_type, u'__length_count__')
+                            ] = len(lengths[context_type])
+
+                # CASE 1A.ii: Standard deviation need not be computed...
+                else:
+
+                    num_averaging_units = {}
+                    num_units           = {}
+
+                    # Initialize counts if contexts should be merged...
+                    if contexts['merge']:
+                        num_averaging_units[context_type] = 0
+                        num_units[context_type]           = 0
+
+                    # Loop over context token indices...
+                    for context_index in xrange(
+                            len(context_segmentation)
+                    ):
+
+                        # Get context token...
+                        context_token = context_segmentation[context_index]
+
+                        # Get and store context type...
+                        if not contexts['merge']:
+                            context_type = context_list[context_index]
+                            if context_type not in context_types:
+                                context_types.append(context_type)
+                                num_averaging_units[context_type] = 0
+                                num_units[context_type]           = 0
+
+                        # Get number of averaging units for this context...
+                        num_averaging_units[context_type] += len(
+                                context_token.get_contained_segments(
+                                        averaging_segmentation
+                                )
+                        )
+
+                        # Get number of units for this context...
+                        num_units[context_type] += len(
+                            context_token.get_contained_segments(units)
+                        )
+
+                        if progress_callback:
+                            progress_callback()
+
+                    # If contexts should be merged...
+                    if contexts['merge']:
+
+                        # Store average and count (float division avoids
+                        # Python 2 integer truncation)...
+                        average_length = (
+                                float(num_units[context_type])
+                              / num_averaging_units[context_type]
+                        )
+                        values[
+                                (context_type, u'__length_average__')
+                        ] = average_length
+                        values[
+                                (context_type, u'__length_count__')
+                        ] = num_averaging_units[context_type]
+
+                    # Otherwise loop over context types...
+                    else:
+
+                        for context_type in context_types:
+
+                            # Store average and count for this context
+                            # (float division avoids integer truncation)...
+                            average_length = (
+                                    float(num_units[context_type])
+                                  / num_averaging_units[context_type]
+                            )
+                            values[
+                                    (context_type, u'__length_average__')
+                            ] = average_length
+                            values[
+                                    (context_type, u'__length_count__')
+                            ] = num_averaging_units[context_type]
+
+                # Store col ids...
+                if len(values) > 0:
+                    col_ids.append(u'__length_average__')
+                    if averaging_std_deviation:
+                        col_ids.append(u'__length_std_deviation__')
+                    col_ids.append(u'__length_count__')
+
+                # Store default context type if needed...
+                if len(values) > 0 and len(context_types) == 0:
+                    context_types.append(context_type)
+
+            # CASE 1B: no averaging units are specified...
+            else:
+
+                # Loop over context token indices...
+                for context_index in xrange(len(context_segmentation)):
+
+                    # Get context token...
+                    context_token = context_segmentation[context_index]
+
+                    # Get and store context type...
+                    if not contexts['merge']:
+                        context_type = context_list[context_index]
+                        if context_type not in context_types:
+                            context_types.append(context_type)
+
+                    # Increment length for this context...
+                    values[(context_type, u'__length__')] = values.get(
+                            (context_type, u'__length__'), 0
+                    ) + len(
+                            context_token.get_contained_segments(units)
+                    )
+
+                    if progress_callback:
+                        progress_callback()
+
+                # Store col ids...
+                if len(values) > 0:
+                    col_ids.append(u'__length__')
+
+        # CASE 2: context segmentation is not specified...
+        elif units is not None:
+
+            # CASE 2A: averaging units are specified...
+            if averaging['segmentation'] is not None:
+
+                # Set default (unique) context type to '__data__'...
+                context_type = averaging['segmentation'].label or u'__data__'
+                context_types.append(context_type)
+
+                # CASE 2A.i: Standard deviation should be computed...
+                if averaging['std_deviation']:
+                    lengths = [
+                            len(a.get_contained_segments(units))
+                                    for a in averaging['segmentation']
+                    ]
+
+                    # Get average length and stdev for this context...
+                    (average, stdev) = get_average(lengths)
+                    values[
+                            (context_type, u'__length_average__')
+                    ] = average
+                    values[
+                            (context_type, u'__length_std_deviation__')
+                    ] = stdev
+                    values[
+                            (context_type, u'__length_count__')
+                    ] = len(lengths)
+
+                # CASE 2A.ii: Standard deviation need not be computed...
+                else:
+
+                    # Get number of averaging units...
+                    num_averaging_units = len(averaging['segmentation'])
+
+                    # Store average and count (float division avoids
+                    # Python 2 integer truncation)...
+                    average_length = float(len(units)) / num_averaging_units
+                    values[
+                            (context_type, u'__length_average__')
+                    ] = average_length
+                    values[
+                            (context_type, u'__length_count__')
+                    ] = num_averaging_units
+
+                # Store col ids...
+                if len(values) > 0:
+                    col_ids.append(u'__length_average__')
+                    if averaging['std_deviation']:
+                        col_ids.append(u'__length_std_deviation__')
+                    col_ids.append(u'__length_count__')
+
+            # CASE 2B: no averaging units are specified...
+            else:
+
+                # Set default (unique) context type to '__data__'...
+                context_type = units.label or u'__data__'
+                context_types.append(context_type)
+
+                # Get length...
+                values[(context_type, u'__length__')] = len(units)
+
+                # Store col ids...
+                if len(values) > 0:
+                    col_ids.append(u'__length__')
+
+        # Store default context type if needed...
+        if len(values) > 0 and len(context_types) == 0:
+            context_types.append(context_type)
+
+        # Create Table...
+        return Table(
+                context_types,
+                col_ids,
+                values,
+                {},
+                {
+                    'id':   u'__context__',
+                    'type': u'string',
+                },
+                dict([(c, u'continuous') for c in col_ids]),
+                None,
+                None,
+                None,
+        )
+
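The averaging branches above rely on a `get_average` helper defined elsewhere in the module. A minimal stand-in, assuming it returns a `(mean, standard deviation)` pair over a list of per-averaging-unit counts and uses the population (not sample) deviation, could look like:

```python
import math

# Hypothetical stand-in for the get_average helper used above, assuming
# it returns (mean, population standard deviation) of a list of lengths.
def get_average(lengths):
    mean = float(sum(lengths)) / len(lengths)
    # Population variance: average squared deviation from the mean...
    variance = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return mean, math.sqrt(variance)

average, stdev = get_average([2, 4, 4, 4, 5, 5, 7, 9])
# average is 5.0, stdev is 2.0.
```

This also illustrates why the non-deviation branches are so much faster: they only need running totals (`num_units`, `num_averaging_units`), whereas the deviation requires keeping every individual length.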
+
+    def length_in_window(
+            self,
+            units               = None,
+            averaging           = None,
+            window_size         = 1,
+            progress_callback   = None,
+    ):
+        """Compute average length of units in sliding window.
+
+        Parameter averaging is a dict with the following keys and values:
+        KEY                     VALUE                           DEFAULT
+        segmentation            averaging unit segmentation     None
+        std_deviation           compute standard deviation      False
+
+        NB: When some form of averaging is performed on large segmentations,
+        execution can be *dramatically* slower if the standard deviation is
+        computed.
+
+        Returns a Table.
+        """
+
+        default_averaging = {
+                'segmentation':         None,
+                'std_deviation':        False,
+        }
+        if averaging is not None: