net.lisias.retro Web Services

This is the public repository for the net.lisias.retro Distributed Web Services (aka "Confederation").

Abbreviations and acronyms

Used in all documents on this project:

  • WiP : Work In Progress
  • RiP : Research In Progress ;-)

What is it?

A series of Micro Services, trimmed down to run on a myriad of small appliances (such as the Raspberry Pi Model B - 2011.12). Of course, the services can also run on more powerful hardware, VPS included (go cloud!).

Most of the services can be further trimmed down, reducing the range of services being provided. To bring some sanity to the resulting mess, a central service called the "Proxy/Router" is up, and the Providers register themselves with it. Clients then make their requests to the Proxy, which redirects (or proxies, when the Provider is on my intranet) each request to someone that can handle it. We call this "Confederated Micro Services"; more information follows.
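The register-then-route idea can be sketched as a tiny in-memory table. This is a hypothetical illustration only - the class name, method names and service strings below are invented, not the project's actual API:

```python
import random

class ProxyRouter:
    """Minimal sketch of the Confederation's Proxy/Router idea:
    Providers register the services they handle, and the Proxy
    picks one of them for each incoming request.
    All names here are hypothetical, not the real API."""

    def __init__(self):
        self.providers = {}  # service name -> set of Provider addresses

    def register(self, service, provider):
        self.providers.setdefault(service, set()).add(provider)

    def unregister(self, service, provider):
        self.providers.get(service, set()).discard(provider)

    def route(self, service):
        """Return the address of some Provider able to handle `service`,
        or None when nobody currently offers it."""
        candidates = self.providers.get(service)
        if not candidates:
            return None
        # any capable Provider will do - the more registered, the better
        return random.choice(sorted(candidates))
```

The real Proxy talks HTTP and redirects (or proxies) the request; the sketch only shows the registry-and-pick behaviour.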


A Provider is any service willing to... well... provide a service. :) The FTP Search Engines are examples of such services, but anything goes - there's a WebRadio with a publicly editable playlist, a database front-end for some related databases (still Research in Progress), and more to come. A full description is available, and a development kit will be available as soon as possible!


That "Confederation" thing is a kind of "cloud", where anyone willing to provide a service, from any device that can handle it, registers it so the Proxy can redirect requests to it. Of course, more than one Provider can handle a given request, so the Proxy chooses one to service it - the more Providers we have, the better. We aim to be a high availability pool of services for the Retro Computing scene.

Any Provider can register itself, later unregister itself, and then come back and register again - no questions asked. You have idle appliances at specific times and want to use the spare CPU cycles with us? Go for it. Two hours a day, two days a week, you name it: you are welcome to contribute the way you can (and want).
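That come-and-go lifecycle can be illustrated with a context manager that registers on entry and unregisters on exit. Everything here is a sketch under assumed names - `registry`, `contributing` and the service string are invented for illustration:

```python
import contextlib

registry = {}  # service name -> set of Provider addresses (stand-in for the Proxy's table)

def register(service, provider):
    registry.setdefault(service, set()).add(provider)

def unregister(service, provider):
    registry.get(service, set()).discard(provider)

@contextlib.contextmanager
def contributing(service, provider):
    """Register on entry, unregister on exit: run this for two hours a
    day or two days a week - the Confederation asks no questions."""
    register(service, provider)
    try:
        yield
    finally:
        unregister(service, provider)
```

A Provider's owner would wrap its serving loop in `with contributing(...)` for however long the appliance is available.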

Anyone willing to consume services can point his/her clients at the current central node's address: if the service is available, the request will be redirected to a capable Provider. Clients can probe the currently serviced Services; the full documentation is available here.

A full blown Confederation is Research in Progress. Having a single point of failure is far from acceptable; ideally, many Proxies scattered around the Globe should, somehow, keep in touch and support each other. I just don't know how to do it. Yet. :)

Available Providers

FTP Search Engines

Most retro computing resources are served by FTP, but it's not easy to find what you want from the directory structure alone. Some repositories provide a 00_INDEX or ls-lR file, but searching through them is cumbersome. So these Providers index FTP repositories, allowing you to search for what you want. Each repository has its own data adapter, trying to grab as much metadata as possible to make your life easier.
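A minimal sketch of what such an indexer does with an ls-lR dump follows. The listing fragment, file names and function names are invented for illustration; the real Data Miners also extract repository-specific metadata through their adapters:

```python
import re

# Hypothetical fragment of an "ls-lR" style FTP listing
LISTING = """\
./games:
-rw-r--r-- 1 ftp ftp 48239 Jan 12 1997 manic_miner.zip
-rw-r--r-- 1 ftp ftp 91655 Mar 3 1999 jet_set_willy.tap.gz

./demos:
-rw-r--r-- 1 ftp ftp 12003 Jul 21 2001 scroller.d64
"""

# perms, links, owner, group, size, date, name
LINE = re.compile(r'^-[\w-]{9}\s+\d+\s+\S+\s+\S+\s+(\d+)\s+(\w+\s+\d+\s+\d+)\s+(.+)$')

def index_listing(text):
    """Build {filename: (directory, size)} from an ls-lR dump,
    so the files become searchable by name."""
    index, cwd = {}, ""
    for line in text.splitlines():
        if line.endswith(':'):          # a "directory:" header line
            cwd = line[:-1]
        else:
            m = LINE.match(line)
            if m:
                size, _date, name = m.groups()
                index[name] = (cwd, int(size))
    return index
```

A real indexer would stream the file from the FTP server and store the result in a DataSet instead of a plain dict.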

A (crude) HTML front-end is available here. At least one Provider is being hosted by a Raspberry Pi. :-)

Please note that these front-ends aim to be guidelines for more user friendly (or even automated) services.

WebRadio by FTP

One of the Search Engines above indexes ModLand's FTP. An IceCast2 audio stream is available, allowing users to pick a mod file from ModLand (and some other mod repositories) and add it to the WebRadio's playlist. It's being served by my Raspberry Pi here (client for Desktops - for small appliances, use this).

Planned Providers

  • Database front-ends
    • ZXDB is in alpha phase at this moment
  • File converters
    • You have a wave from a tape and want the binary for your emulator?
    • Do you know an FTP with the tape image for that game, but want to load it from a Wave file?
    • This will solve all your problems =]
  • File repositories
    • A way to distribute the burden of serving files to an increasing audience.
  • Mirroring service
    • A way to pinpoint the known (official or confederated) mirrors of a known file.
  • DataSet repositories
    • Small appliances are having a hard time trying to build the DataSets (the data that feeds the Search Engines) by themselves.
    • This service will spare your appliances from downloading and parsing all the data themselves - instead, a single (powerful) one will do the job and the rest will just sync.
  • Suggestions?

How to instance a Provider?

Once you choose an appliance to host it (a VPS on AWS, a Docker container on Sloppy, or a Funtoo one on Funtoo - or that old Raspberry Pi on your desk :), all you have to do is git clone this repository and follow the INSTALL instructions.

Everything is IPv6 ready, but also almost untested on it. My appliances on AWS are all still on IPv4 (come on, AWS!! Fix this!!), and I only recently got IPv6 at home.

If you have a Raspberry Pi to spare, a pre-made SD Image is available, saving you a lot of trouble. Details in CONFIGURE.

What else is there?

Besides the WS Runners (explained in INSTALL and CONFIGURE), the following artefacts are available for use:


A shortcut to AppleCommander, a Java tool to handle Apple2 Disk Images. Running it without parameters will try to invoke the GUI (if you have X installed and running).

A tool to manually invoke the Data Miners to (re)build the datasets used by the Search Engines if needed.

Calling it without parameters will (re)build all the datasets configured in ~/configure's net_lisias_retro_file_search_SERVED_ARCHIVES. This file must be available in your home directory or the tool will not work.

You can specify the datasets to be (re)built by naming them on the command line by their Data Miner's Package Name. See net_lisias_retro_file_ALL_ARCHIVES in ~/configure for the available ones.

For example, ./ AmigaScne A2.Asimov Gaby will (re)build only the datasets for the amigascne, a2.asimov and gaby archives. The naming difference is due to the packaging scheme of the source code - nothing I can do about it for a while - sorry.

Requests to (re)build Archives not configured in net_lisias_retro_file_search_SERVED_ARCHIVES are silently ignored.
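Putting the selection rules together, the behaviour described above amounts to something like this sketch. The hard-coded set and the function name are assumptions for illustration (the real tool reads ~/configure), and the lower-casing mirrors the naming difference shown in the example above:

```python
# Hypothetical values standing in for ~/configure's
# net_lisias_retro_file_search_SERVED_ARCHIVES setting.
SERVED_ARCHIVES = {"amigascne", "a2.asimov", "gaby"}

def select_archives(requested, served=SERVED_ARCHIVES):
    """Map command-line Package Names (e.g. 'AmigaScne') to archive
    names (e.g. 'amigascne'), silently dropping anything that is not
    configured to be served."""
    wanted = [name.lower() for name in requested]
    return [name for name in wanted if name in served]
```

With no arguments, the real tool falls back to (re)building every configured archive instead.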

This tool should be used by appliances with memory constraints, as the Datasets will be (re)built sequentially - minimizing the memory footprint in exchange for the time needed to complete the jobs.

Exactly the same as the slow tool above, but all the tasks are executed in parallel. This demands a huge amount of memory, as potentially all the Archives will be (re)built at once - your mileage will vary with the number of CPUs available.

However, even with only one CPU this helps somewhat, due to I/O: with many tasks running in parallel, a blocking one will not prevent the others from being processed.
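The I/O-bound parallelism argument can be demonstrated with a thread pool. `rebuild` below is a stand-in for a real Data Miner run, not the project's code, and the worker count is an arbitrary example:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def rebuild(archive):
    """Stand-in for a Data Miner run; the real job is dominated by
    FTP/HTTP downloads, which is why threads help even on one CPU -
    while one task blocks on the network, the others keep going."""
    time.sleep(0.05)  # simulate a blocking download
    return f"{archive}: rebuilt"

def rebuild_all_parallel(archives, workers=4):
    """(Re)build every archive concurrently, results in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rebuild, archives))
```

The trade-off matches the text: peak memory grows with the number of Archives in flight, while wall-clock time shrinks.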

This tool preloads/updates the prebuilt datasets from the Confederation's FTP. It's the preferred way to get a running setup, unless you want to double check the work or to fork the project and compete with me. :-)

The command line syntax is exactly the same as for the previous tools.

A legacy tool, needed when the official WoS repository was offline and only a limited HTTP mirror was available - preventing the Data Miner from working correctly. This tool fetches the needed data "manually", easing the Miner's life.

Since that mirror is also dead nowadays, and the WoS files were uploaded to the Archive, the usefulness of this tool is uncertain; it's being kept just in case.


Services Status

  • net.lisias.retro.proxy
    • Fully Operational
  • net.lisias.retro.db
    • Operational with restrictions
      • The whole shebang is beta
      • The databases being served need to be updated
    • The service is Fully Operational.
    • Some Archives have restrictions. See below.
  • net.lisias.retro.file.mirror
    • In development. The thing is not even Alpha yet.

net.lisias.retro.file Archives

Archive      | Code Status       | Dataset Status             | Alive? | URL
a2.asimov    | Fully Operational | Fully Operational          | Yep    | main local
a2.doc.proj  | In Development    | Not usable yet             | Yep    | main
amigascne    | Fully Operational | Fully Operational          | Yep    | main
aminet       | Fully Operational | Fully Operational          | Yep    | main
c64.arnold   | Fully Operational | Fully Operational          | Yep    | main
c64.padua    | Fully Operational | Fully Operational          | Yep    | main
freedos      | Fully Operational | Fully Operational          | Yep    | main
funet.amiga  | Fully Operational | Fully Operational          | Nope   | main
funet.atari  | Fully Operational | Fully Operational          | Nope   | main
funet.msx    | Fully Operational | Fully Operational          | Nope   | main
gaby         | Fully Operational | Fully Operational          | Yep    | main
garbo        | Fully Operational | No Surviving Official Repo | Nope   | random mirror
hobbes       | Fully Operational | Fully Operational          | Yep    | main
hornet       | Fully Operational | Fully Operational          | Nope   | main
metalab.pdp  | Fully Operational | Fully Operational          | Yep    | main
modland      | Fully Operational | Fully Operational          | Yep    | main
msx.vitrola  | Fully Operational | Fully Operational          | Yep    | main
             | Fully Operational | Fully Operational          | Yep    | main
nvg.cpc      | Fully Operational | Fully Operational          | Yep    | main
nvg.hw       | Fully Operational | Fully Operational          | Maybe  | main
nvg.samcoupe | Fully Operational | Fully Operational          | Yep    | main
nvg.sinclair | Fully Operational | Fully Operational          | Maybe  | main
nvg.sounds   | Fully Operational | Fully Operational          | Maybe  | main
nvg.vms      | Fully Operational | Fully Operational          | Maybe  | main
pigwa_net    | Fully Operational | Fully Operational          | Yep    | main
scene_org    | Fully Operational | Fully Operational          | Yep    | main
simtel       | Fully Operational | No Surviving Official Repo | Nope   | random mirror
tv-dog       | Fully Operational | Fully Operational          | Yep    | main
whtech       | Fully Operational | Fully Operational          | Yep    | main
wos          | Fully Operational | Repo Unavailable :-(       | Nope   | dead main, dead mirror
x2ftp        | Fully Operational | No Surviving Official Repo | Nope   | random mirror


  • Alive?
    • Yep : Repository is being updated, and is sync'd (and rebuilt) by the Data Miners.
    • Nope : Repository is dead in the water. It's sync'd only when the service is bootstrapped for the first time, as there's no point in updating it afterwards.
    • Maybe : Repository may be alive, but it hasn't been updated for some time. It's handled the same way as dead repos until further notice.

See the Planning Datasheets for technical information.

Development Process

Until now, the project wasn't big enough to be worth the pain of a Development Process =P , but from 2018/01 on, I adopted a workflow for this project. Yeah, the baby is growing up. :-)

Gitflow is the aim, but for now it's overkill. It would be useful only for DEV anyway.

So this is how things work now:

  • PUB (yeah, this one)
    • two branches:
      • master - The one that should be pulled into production.
        • No rebases anymore :-)
        • No untested code
        • Documentation will usually be updated directly on it, as this doesn't break things and the information is usually needed ASAP.
      • stage - Short lived one aimed to testing and validation.
        • Rebases galore.
        • Testings for the sake of "what if"
        • Documentation will be updated only when it applies to the testing artifacts
        • Don't use it unless you are somewhat masochist. =D
        • It's deleted once merged into master (or just deleted if I find I messed up)
        • A new one with the same name is immediately forked from master to begin a new cycle.
  • SDK
    • Exactly the same workflow used on PUB.
  • DEV (accessible by invitation only)
    • A somewhat unholy merge from Centralized Workflow and Feature Workflow.
      • I'm currently the sole developer of the thing, no need to make a fuss on it.
    • Branches are created ad-hoc, and rarely deleted after merge to allow rollbacks
    • And that's it. :-)

EVERY RELEASE is tagged in all repositories. If you need, for some obscure reason, to roll back to a specific published version, check it out by the tag.

The tag naming follows the structure RELEASE.<date>.<version_or_name>, where:

  • <date> : the date of the release, Japanese style, formatted as YYYY-MMDD with leading zeros
  • <version_or_name> : the version number, with _ replacing the dots. Alternatively, some milestone releases are published under internal names, all caps with _ replacing any non-alphanumeric character.
  • examples:
    • RELEASE.2016-1118.FIRST_BLOOD
    • RELEASE.2017-0210.1_0_5
    • RELEASE.2017-0521.1_0_6c
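The tag structure above can be captured in a small formatter and validator. This is only a sketch - the helper names are invented, and the project builds these tags by hand:

```python
import re

# One pattern for both styles: an all-caps milestone name,
# or a version number with underscores (optionally 'c'-suffixed).
TAG = re.compile(r'^RELEASE\.(\d{4}-\d{4})\.([A-Z0-9_]+|\d+(?:_\d+)*[a-z]?)$')

def make_tag(year, month, day, version_or_name):
    """Format a tag as RELEASE.<date>.<version_or_name>, with the date
    written Japanese style as YYYY-MMDD (leading zeros included)."""
    if version_or_name[0].isdigit():
        # version number: dots become underscores
        part = version_or_name.replace('.', '_')
    else:
        # milestone name: all caps, non-alphanumerics become underscores
        part = re.sub(r'[^A-Za-z0-9]', '_', version_or_name).upper()
    return f"RELEASE.{year:04d}-{month:02d}{day:02d}.{part}"
```

Feeding it the examples above reproduces them exactly, which is a handy sanity check for anyone scripting against the tag scheme.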

The current release is not specifically tagged. The HEAD of the master branch is, always, the current published release for PUB and SDK.

DEV always carries the most recent code and documentation, published or not, and should be considered unstable. Currently, there's no stable branch on DEV (but the released, stable code is properly tagged).

It's unusual, but it has already happened, that a RELEASE is botched and must be reissued despite the codebase being OK (for example, when I messed up the configuration files and/or the support scripts). These releases have a different <date>, but the same <version_or_name>. If you really need to roll back to such a release, use the most recent one. These tags are echoed back to DEV despite no code being changed.


Need help?

Drop a mail to support at lisias dot net. I will be glad to help!

There's also a Forum on Google+ and a site here.

Facebook? I don't do anything serious there. :-D

-- 2018/01 Lisias