Class structure at a glance

Issue #9 resolved
David Lucas (repo owner) created an issue

Because some of the design discussions are starting to grow, I tried to summarize our design. The resulting file can be found under the Source tab, in the preliminary_work folder, named Class_diagram. You'll find an editable .dia file (to be used with https://wiki.gnome.org/Apps/Dia) if you wish to modify the UML.

It's of course a work in progress, which will be modified as the design changes.

Comments (4)

  1. Johan Rosenkilde

    Some random thoughts:

    Apart from the things we discussed yesterday which should be fixed on the diagram, various obvious field methods are missing: Decoder.code, Encoder.code, Encoder.message_space, Code.minimum_distance. LinearBlockCode should not have a method message_space.
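
    A minimal sketch of where those accessors could live (the hierarchy and names are illustrative, not the settled design):

        # Hypothetical skeleton of the accessor methods listed above.
        class Code:
            def minimum_distance(self):
                """Return the true minimum distance of the code."""
                raise NotImplementedError

        class Encoder:
            def __init__(self, code):
                self._code = code

            def code(self):
                """Return the code this encoder maps messages into."""
                return self._code

            def message_space(self):
                """Return the space that messages live in."""
                raise NotImplementedError

        class Decoder:
            def __init__(self, code):
                self._code = code

            def code(self):
                """Return the code this decoder decodes to."""
                return self._code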

    Decoder should probably have methods that let us identify which type of semantics the decoder has: i.e. is it a unique decoder, a list decoder, an ML decoder, a bounded-distance ML decoder (such as my multi-trial algorithm), a probabilistic bounded-distance decoder (such as Power decoding), or an LDPC-type decoder that just probably lowers the bit-error rate but doesn't promise anything? Suggestions? In Codinglib, there is Decoder.is_list_decoder, since this determines the type of the return values from the decode_to_* functions. But there is no way to distinguish the other decoder types.
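
    One hedged possibility, as a starting point for discussion: tag each decoder with a set of semantic labels rather than one boolean per type. The label strings below are made up for illustration:

        # Sketch: describe decoder semantics by a set of labels instead of
        # one boolean per type.  Label names are illustrative only.
        class Decoder:
            def decoder_type(self):
                """Return a set of labels describing the decoder's semantics,
                e.g. {"unique"}, {"list"}, {"bounded-distance", "ML"}
                or {"probabilistic", "bounded-distance"}."""
                raise NotImplementedError

        class GuruswamiSudanDecoder(Decoder):
            def decoder_type(self):
                return {"list"}

        def is_list_decoder(decoder):
            # Codinglib's Decoder.is_list_decoder becomes a derived predicate.
            return "list" in decoder.decoder_type()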

    For most of the decoder types, we would also like to have a Decoder.decoding_radius, but for some decoders that might not make sense or might have slightly confusing semantics. For instance, for AG codes the Guruswami-Sudan algorithm has a promised decoding radius. However, it probably succeeds in decoding many more errors than this, and in my newest paper we have a conjecture on this number. Similarly, Power decoding of AG codes also has a "probably" decoding radius (which coincides with the "probably" radius of G-S when the parameters are the same), while its promised decoding radius is only d/2 - genus/2. Other strange examples include ML decoding (e.g. Irene's Gröbner-based algorithm), where the decoding radius is in some sense n; or LDPC-type decoding, which doesn't have a sensible decoding radius measure.
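
    One way this could be reconciled with a single Decoder.decoding_radius method is an optional keyword selecting the guarantee level; the keyword name and its values below are assumptions for illustration:

        # Hypothetical sketch: expose the different guarantee levels through
        # one method.  The "promise" keyword and its values are made up.
        class Decoder:
            def decoding_radius(self, promise="guaranteed"):
                """Return the number of correctable errors under the given
                guarantee level.

                promise="guaranteed": decoding always succeeds up to this
                    radius (e.g. d/2 - genus/2 for Power decoding of AG codes).
                promise="probable": decoding succeeds with high probability up
                    to this radius (e.g. the "probably" radius of G-S).

                Decoders without a sensible radius (e.g. LDPC-type decoding)
                could raise an error here.
                """
                raise NotImplementedError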

    Daniel suggested that we could therefore consider not having one Decoder class, but several, corresponding to the different semantics. But I'm against that, since I see the precise decoder semantics as secondary to the fact that they are all decoding algorithms.

    Topic switch: Code.syndrome should probably have the same optional argument as Code.parity_check_matrix. My thought is that syndrome always returns the vector resulting from multiplication with a parity check matrix. For Goppa codes, however, we might naturally expect a syndrome polynomial. Should the user do the conversion himself, or should this then be a syndrome function attached to the Decoder of the Goppa code (since it is in the "usual" decoding algorithm for Goppa codes that this syndrome polynomial is used)?
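
    For concreteness, a minimal sketch of the vector-syndrome convention, written in plain Python over GF(2) for illustration (in Sage this would just be matrix/vector arithmetic):

        # Sketch: the syndrome as H * r, reduced mod 2 for a binary code.
        def syndrome(parity_check_matrix, received):
            """Return the syndrome of `received` as a list of bits."""
            return [sum(h * r for h, r in zip(row, received)) % 2
                    for row in parity_check_matrix]

        # Example: the [3, 1] binary repetition code, H = [[1,1,0],[0,1,1]].
        H = [[1, 1, 0], [0, 1, 1]]
        assert syndrome(H, [1, 1, 1]) == [0, 0]  # a codeword: zero syndrome
        assert syndrome(H, [1, 0, 1]) == [1, 1]  # middle bit flipped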

    How should we support asking for "natural" or cheap bounds on the minimum distance? E.g. a Goppa code has a designed minimum distance. But for any linear code, perhaps one would want to run a polynomial-time algorithm for getting pretty good upper/lower bounds? At least, I think the clearest is to reserve Code.minimum_distance for the true minimum distance (which might take exponential time to calculate). What's Code_Family?
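
    One hedged way to keep Code.minimum_distance exact while still exposing cheap bounds (the bound-method names are suggestions, not settled API):

        # Sketch: separate the exact minimum distance from cheap bounds.
        class Code:
            def minimum_distance(self):
                """The true minimum distance (possibly exponential time)."""
                raise NotImplementedError

            def minimum_distance_lower_bound(self):
                """A cheap lower bound, e.g. the designed distance of a
                Goppa code."""
                raise NotImplementedError

            def minimum_distance_upper_bound(self):
                """A cheap upper bound, e.g. Singleton: d <= n - k + 1."""
                raise NotImplementedError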
