NOAA's National Centers for Environmental Information (NCEI), FTP site

Issue #10 closed
Jan Galkowski
created an issue

FTP data from

ftp://eclipse.ncdc.noaa.gov/

mostly NetCDF files, to

azi01:/home/jan/local_data/eclipse.ncdc.noaa.gov-pub-ftp

Comments (85)

  1. Jan Galkowski reporter

    Going to restart this on the server. It's probably not going to make it on my box.

    Using

    wget --dns-timeout=10 --connect-timeout=20 --read-timeout=120 --wait=12 --random-wait --prefer-family=IPv4 --tries=40 --timestamping=on --recursive --level=8 --no-remove-listing --follow-ftp -nv --output-file=ftp-oar-noaa-gov-ftp.log --no-check-certificate ftp://eclipse.ncdc.noaa.gov

  2. Jan Galkowski reporter

    When I restarted, I forgot to do a cd .., so I will need to do an mv when this is finished to organize the directories (a sketch follows the listing). The current layout is:

    total 20
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 .
    drwxr-xr-x 5 jan jan 4096 Dec 20 20:42 ..
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 eclipse.ncdc.noaa.gov
    -rw-rw-r-- 1 jan jan 1036 Dec 20 20:47 ftp-oar-noaa-gov-ftp.log
    -rw-rw-r-- 1 jan jan  234 Dec 20 20:45 ftp-oar-noaa-gov-pub-ftp.log
    
    ./eclipse.ncdc.noaa.gov:
    total 16
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 .
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 ..
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 cdr
    -rw-rw-r-- 1 jan jan  367 Dec 20 20:46 .listing
    
    ./eclipse.ncdc.noaa.gov/cdr:
    total 16
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 .
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 ..
    drwxrwxr-x 2 jan jan 4096 Dec 20 20:47 amsu-ch7
    -rw-rw-r-- 1 jan jan 2676 Dec 20 20:46 .listing
    
    ./eclipse.ncdc.noaa.gov/cdr/amsu-ch7:
    total 24
    drwxrwxr-x 2 jan jan 4096 Dec 20 20:47 .
    drwxrwxr-x 3 jan jan 4096 Dec 20 20:46 ..
    -rw-rw-r-- 1 jan jan  883 Jan 16  2015 AMSU-CH7-RO-CAL-BT-CDR_V01R00_ANOM_latest_Aggregation.ncml
    -rw-rw-r-- 1 jan jan  360 Jan 16  2015 AMSU-CH7-RO-CAL-BT-CDR_V01R00_CLIM_latest.ncml
    -rw-rw-r-- 1 jan jan  880 Jan 16  2015 AMSU-CH7-RO-CAL-BT-CDR_V01R00_MON_latest_Aggregation.ncml
    -rw-rw-r-- 1 jan jan  543 Dec 20 20:46 .listing
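
    A sketch of the kind of move this will need once the wget finishes. The destination name is taken from the issue description, and the paths are assumptions, not necessarily the exact ones on this server:

    # run from the directory shown in the listing above; destination name is an assumption
    mkdir -p /home/jan/local_data/eclipse.ncdc.noaa.gov-pub-ftp
    mv eclipse.ncdc.noaa.gov ftp-oar-noaa-gov-ftp.log /home/jan/local_data/eclipse.ncdc.noaa.gov-pub-ftp/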
    
  3. Sakari Maaranen

    @marsroverdriver, above you say you are also downloading this. Do you already know the size?

    If we have a potentially too-large data set here and we do not know whether we can download it within our resource limits, we must stop the work and decide whether to keep it or not. If we know someone else has a complete copy, then we should delete ours and not try again.

    If any of us has a complete copy, and it's within our resource limits, then plan publishing according to our process. That means the last step here: https://bitbucket.org/azimuth-backup/azimuth-inventory/wiki/Climate%20data%20sources

    Any opinions as to whether we should drop this or keep this: @Greg Kochanski @Jan Galkowski @marsroverdriver @John Baez ?

    I am not a climate scientist, so I cannot make an informed decision about the relative importance of this data set. Please assess the cost (size) versus the importance and the availability of copies elsewhere, and make a decision.

  4. Jan Galkowski reporter

    As noted, when I began on 20 December, I did not know. Ben was recently able to estimate it at 18 Tb. All that was known on the 20th was that the dataset was very important. It still is.

  5. Jan Galkowski reporter

    This dataset is important, and we have invested a considerable amount of time and bandwidth in downloading and storing it. I think it would be irresponsible to drop it now.

    Ben's estimate is that it is 18 Tb.

    I don't know how long it will take to finish it.

    I have kept the ticket updated with my methods, etc.

    • Jan
  6. Sakari Maaranen

    Making a partial copy may not be fruitful. The single database hogs a large chunk of our total capacity and is not necessarily very useful if it remains incomplete.

    I repeat: Are we aware of others having backed up this one? For example, is @Benjamin Rose doing it?

    Does Ben want to do it on Princeton resources?

    If Yes, does he want our work transferred to him, or does he rather take it again from the source?

    I could set up a larger server, but then we would be talking about a significant cost increase until all our current servers have been consolidated onto the large one. What does @John Baez think about this? Is this data set so important that we can temporarily spend 310 euros per month more? It should be over in one month, but could take two. The end result would be that, after the transition period (one or two months), all our data sets would have been moved to the single large server, in one big contiguous space, and our total server cost would stabilize at 310 € monthly.

    Note that I am personally not proposing a spending increase. I am merely listing our options -- and one option is to increase our spending to cover this, if it's so important.

    I am not a climate scientist at all. I should not be making prioritization decisions.

  7. Jan Galkowski reporter

    The plan always was, I think, to get off the paid-monthly servers onto a free resource. In ordinary circumstances, we could move what we have and wait until other resources came online. Unfortunately, we don't know when the curtain might fall. I would just prefer that we end up with an "incomplete" because the curtain fell, rather than because we ran out of resources.

    In short, from what I understand, we cannot go up to (say) a 20 Tb server with our current provider without the 310 Euro upgrade, for at least a month, true? I wonder if that's feasible. It has taken since 20 December to get just shy of 8 Tb, and we have 10 Tb more to go. If we could throw money at it and solve it, fine. But what about bandwidth? We need to speed this up.

    I would not want to throw away the data we have; perhaps tarball it. But we need some way of pulling the data down faster.

    • Jan
  8. Benjamin Rose

    @Sakari Maaranen @Jan Galkowski the total size is 18T. Above, @Jan Galkowski said he had 7.6T downloaded, and 11.5T still available. So this means if no one else is actively downloading to this target, it will fit with about 1T to spare.

    I am willing to mirror, but currently lack space until my new 500T server arrives, at which point it would honestly probably be quicker for me to grab the data over Internet2 than from your servers over commodity Internet.

  9. Sakari Maaranen

    The pub servers we are using now have 11 terabytes total each (4x4T RAID5), Gigabit guaranteed bandwidth, 30 terabytes traffic included monthly, powerful physical servers, 16 GB RAM, DDoS protection, ~65 euros monthly including taxes, so about €6/TB.

    A data set of 18T will not fit on a single 11T server.

    We can deploy a larger server that has 10x6TB (48T usable) or 15x6TB (78T usable) capacity, if @John Baez as finance manager authorizes that: https://www.hetzner.de/fi/hosting/produktmatrix/rootserver-produktmatrix-sx

    If we deploy a larger server, I will consolidate all our work there.

  10. Benjamin Rose

    @Sakari Maaranen, ah, I am guessing @Jan Galkowski meant total capacity then, not free capacity. Indeed, it won't fit. I'm willing to download it onto my new infrastructure when it arrives, but to do so, I'm more likely to pull it down via I2 from the source rather than from you. I think @Jan Galkowski is best placed to make this decision, though: whether to remove it wholesale or keep the partial copy until I can mirror it myself. Expanding your capacity is outside my wheelhouse, and up to you guys and @John Baez

  11. Jan Galkowski reporter

    Concur. And, since we can't tell where the download will leave off, Ben might as well start afresh.

    I do think, however, we should retain the existing download, perhaps as a tarball, in the event that the servers providing the data get shut down by 45 and his ilk.

  12. Sakari Maaranen

    A large incomplete data set will encumber us more than it will be useful. My vote is that we delete it. Using 20% of our total capacity for an incomplete database that someone else is also copying makes little sense. It prevents us from using the space for something more complete and actually usable.

    @Greg Kochanski @John Baez, please comment: do you really think we should keep this?

  13. Jan Galkowski reporter

    I have stopped the process on pub05.

    I am completely willing to reassign the task to someone else.

    So far, I consider saving this important dataset a collective failure. I guess that's okay. And platitudes that somehow scientists all save this stuff anyway don't satisfy me.

    If someone else wants to take charge, please feel free.

    I'll finish off all the tickets I have, but I won't be taking on anything else.

    Sorry, I've been working on this since 20th December.

  14. Sakari Maaranen

    Firstly, this is not a failure. Not collective, not individual. This is a normal resource-overrun situation: trying to download an overly large data set, the size of which was initially unknown.

    We only need to make a simple prioritization decision: Do we keep the incomplete data set or do we not? This decision can only be made by people who understand the data. Is it of any value when incomplete?

    This is a simple prioritization matter:

    • If we decide to not keep the data set, we delete it.
    • If we decide to keep it, we publish it according to the normal process.

    If we do not have people who understand the data, then I propose we delete this data set, and do not attempt to download it again. Ben will do it when he gets the larger server.

    @John Baez, please advise. We have all the options available, explained above.

    Cc: @Jan Galkowski @Benjamin Rose @marsroverdriver @Greg Kochanski

  15. Jan Galkowski reporter

    @Sakari Maaranen @Benjamin Rose @marsroverdriver @Greg Kochanski @John Baez These data consist of a large number of NetCDF files recording data from various sensors, divided up by years (one directory in the tree is assigned to a year), and then by spatial coverage. So, for instance, a lot of data is under the cdr directory. Let's take three subdirectories as examples.

    One is /cdr/ocean-atmos-props/files/ and consists of a set of years, 1988-1997, each in its own directory, and each year contains a NetCDF file which records ocean-atmosphere energy flux for each day of that year.

    Another is /cdr/hirs-olr, which is described here as "The new Climate Data Record (CDR) provides daily global climate data that are valuable as inputs into Radiation Budget Studies and verifying numerical models and can identify the variations in tropical clouds and rainfall that drive global weather patterns. The daily climate data of OLR can provide radiance observations at a 1.0x1.0 Degree resolution, creating a consistent long term climate record of observations since 1979."

    Another is /cdr/appx which is the Extended AVHRR Polar Pathfinder (APP-x) which measures daily albedo from the poles, both Northern Hemisphere and Southern Hemisphere. Again it consists of a large number of NetCDF files, described at https://www.ncdc.noaa.gov/cdr/atmospheric/extended-avhrr-polar-pathfinder-app-x.

    I knew this, but apparently I need to prove that these are worthwhile, citing references.

    These are key measurements from the A-train polar constellation which monitor radiation balance and warming.

  16. Sakari Maaranen

    @Jan Galkowski the experts are you and John. You don't need to prove anything to me. You only need to make a decision. You know best. You decide. Don't expect your sysadmin to decide for you. I only need you to say whether you are going to keep it or not.

    If yes, then I need you (John, as financier) to say whether you want the extra space to take it all in. The cost is as detailed above. Again, you are the experts. You decide.

    Please advise. Thank you for your cooperation.

  17. John Baez

    Okay, here is the discussion of ftp://eclipse.ncdc.noaa.gov/ on ClimateMirror. They say this ftp site is CLAIMED, but I suspect the person claiming it is Jan. Is that right, Jan? The original claimant wrote:

    Name: NOAA's National Centers for Environmental Information (NCEI)
    Organization: NOAA
    Description URL: https://www.ncdc.noaa.gov
    Download URL: ftp://eclipse.ncdc.noaa.gov/pub
    File Types: NetCDF
    Size:
    Status: CLAIMED 2016-12-14. So far, 40gb of data has been downloaded. I am not sure how big the entire data set is. I will provide regular updates in the meantime.

    I suspect the person who wrote that is Jan.

  18. Jan Galkowski reporter

    Yes. I claimed it. My first pokes at it estimated 11 Tb, so I thought we could handle it. I also did not appreciate that the ceiling on the storage of the servers was a hard ceiling. I don't have that much sysadmin experience, but at work they seem to be able to just add more and more storage.

    I also seem to remember there was someone else backing it up, but I have not dug into the tickets at ClimateMirror to check status on that recently.

    Anyway, I've thought about this a bit more, and an alternative is to see how many subdirectories I've gotten, by storage size, and then partition the dataset, putting some of it in one place and some elsewhere.

    Of course, we'll still end up with 18 Tb, based upon Ben's latest assessment.

    What do you all think?

    • Jan
  19. Sakari Maaranen

    If you guys have the energy, I would deploy the larger server; but you have done the heavy lifting on the actual data, so you know whether you still have the time and energy left to raise our goal. Especially if we are going to keep serving the data for less than several years -- for example, moving it to institutional providers earlier -- then it makes sense to spend more per month. If we stick with the plan to keep serving it ourselves for several years, then we should stick with the 40 TiB limit.

  20. John Baez

    Is @marsroverdriver still downloading this one, as he mentioned on 2016-12-21? If so, does he anticipate eventual success? If this is the case, I think Jan should give up. Soon we'll be able to transfer a lot of data to larger free storage spaces.

    Does @Benjamin Rose expect to be able to download this one fairly soon?

    If the answers to all these questions are no, then I'd be willing to spend more of our money on servers now, if it allows us to acquire this data.

  21. Jan Galkowski reporter

    Sakari,

    Let me see what I can do about consolidating and sectioning the dataset.

    Do you have a preference for where I might put the rest of it, if I can succeed in doing this?

    • Jan
  22. Benjamin Rose

    @John Baez Unsure - I just wrote Dell to ask for a delivery date estimate. The PO was dispatched on Thursday, Feb 2, and they're not usually slouches. My knee-jerk reaction: perhaps within February I'll be able to start the download. When Dell gets back to me, I'll let you know.

  23. Benjamin Rose

    @John Baez @Jan Galkowski @Sakari Maaranen : From Dell,

    "I just checked the order. The MD array is showing to ship out early this week. The R730 however is on hold right now because the NVMe drives are showing supply shortage They automatically push it out if we are out of a part for 99 days. It has been put on expedited shipping for when the part arrives. I will check tomorrow with the supply constraint team to see what a more accurate arrival time for the server will be. "

    I will be able to plug the MD box into another server though, temporarily, and start the move of data, then the new downloads. Then I will switch it over to the new server when it arrives. I'm really excited about the NVMe drive: it will act as an LVM cache layer atop the slower spinning drives, so I should be able to download even faster. On my current server I've been I/O bound, writing 200 MB/s when data could be transferring much faster over a 10-gigabit Internet2 link.
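
    For anyone curious how an NVMe cache layer like that gets attached with LVM, here is a minimal sketch; the device and volume names are hypothetical, not Ben's actual configuration:

    # add the NVMe device to the volume group that holds the slow data LV (names are hypothetical)
    pvcreate /dev/nvme0n1
    vgextend vg_data /dev/nvme0n1
    # carve a cache pool out of the NVMe and attach it in front of the spinning-disk LV
    lvcreate --type cache-pool -L 700G -n lv_cache vg_data /dev/nvme0n1
    lvconvert --type cache --cachepool vg_data/lv_cache --cachemode writeback vg_data/lv_data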

    So for Azimuth's part with this data, while @Jan Galkowski looks into what he can do with the partial download, I should be able to start a new complete download as early as next week.

  24. Jan Galkowski reporter

    There was an overlap between the current dataset and an earlier copy. I have verified (see attached) that the earlier copy can be dropped without impact. This frees up about 5% of the space on the store.

    I am partitioning the grab, sizing what remains, and noting it per the recommended procedure.

    Here are some sizes of directories:

    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./ocean-heat-fluxes
    288643830516    ./ocean-heat-fluxes
    288643830516    total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./ozone-zonal-mean-esrl/
    70228504        ./ozone-zonal-mean-esrl/
    70228504        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./patmosx
    467     ./patmosx
    467     total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./persiann
    13907480788     ./persiann
    13907480788     total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./rss-uat-msu-amsu/
    163826152       ./rss-uat-msu-amsu/
    163826152       total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./sea-surface-temp-whoi/
    141210711175    ./sea-surface-temp-whoi/
    141210711175    total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./snowcover/
    24706078        ./snowcover/
    24706078        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./solar-irradiance/
    840609579       ./solar-irradiance/
    840609579       total
    
  25. Jan Galkowski reporter

    I am also checking the directories I have supposedly downloaded:

    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./amsu-ch7
    14566643        ./amsu-ch7
    14566643        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./amsu-ch9
    14572379        ./amsu-ch9
    14572379        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./amsu-msu
    51173360        ./amsu-msu
    51173360        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./appx
    481894261412    ./appx
    481894261412    total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./avhrr-aot-v2/
    275754221969    ./avhrr-aot-v2/
    275754221969    total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./avhrr-land
    2383160771715   ./avhrr-land
    2383160771715   total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./gridsat
    3737776341433   ./gridsat
    3737776341433   total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./hirs-ch12/
    19598618        ./hirs-ch12/
    19598618        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./hirs-olr
    3635650140      ./hirs-olr
    3635650140      total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./msu-ch3-amsu-ch7/
    29809735        ./msu-ch3-amsu-ch7/
    29809735        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./msu-ch4-amsu-ch9/
    36612951        ./msu-ch4-amsu-ch9/
    36612951        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./msu-mlt-noaa/
    41165324        ./msu-mlt-noaa/
    41165324        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./nesdis-msu-amsu/
    41527093        ./nesdis-msu-amsu/
    41527093        total
    lftp eclipse.ncdc.noaa.gov:/cdr> du -s -c -b ./ocean-atmos-props/
    415510918704    ./ocean-atmos-props/
    415510918704    total
    

    Those are the files on the NOAA server.

    Here are the files in hand:

    [jan@pub05 cdr]$ du -s -c -b ./amsu-ch7
    14574835        ./amsu-ch7
    14574835        total
    [jan@pub05 cdr]$ du -s -c -b ./amsu-ch9
    14580571        ./amsu-ch9
    14580571        total
    [jan@pub05 cdr]$ du -s -c -b ./amsu-msu
    51177456        ./amsu-msu
    51177456        total
    [jan@pub05 cdr]$ du -s -c -b ./appx
    481899078308    ./appx
    481899078308    total
    [jan@pub05 cdr]$ du -s -c -b ./avhrr-aot-v2/
    275755278737    ./avhrr-aot-v2/
    275755278737    total
    [jan@pub05 cdr]$ du -s -c -b ./avhrr-land
    2383164986499   ./avhrr-land
    2383164986499   total
    [jan@pub05 cdr]$ du -s -c -b ./gridsat
    3737783300537   ./gridsat
    3737783300537   total
    [jan@pub05 cdr]$ du -s -c -b ./hirs-ch12
    19606810        ./hirs-ch12
    19606810        total
    [jan@pub05 cdr]$ du -s -c -b ./hirs-olr
    3635670620      ./hirs-olr
    3635670620      total
    [jan@pub05 cdr]$ du -s -c -b ./msu-ch3-amsu-ch7/
    29817927        ./msu-ch3-amsu-ch7/
    29817927        total
    [jan@pub05 cdr]$ du -s -c -b ./msu-ch4-amsu-ch9/
    36621143        ./msu-ch4-amsu-ch9/
    36621143        total
    [jan@pub05 cdr]$ du -s -c -b ./msu-mlt-noaa/
    41169420        ./msu-mlt-noaa/
    41169420        total
    [jan@pub05 cdr]$ du -s -c -b ./nesdis-msu-amsu/
    41531189        ./nesdis-msu-amsu/
    41531189        total
    [jan@pub05 cdr]$ du -s -c -b ./ocean-atmos-props/
    239629274460    ./ocean-atmos-props/
    239629274460    total (*)
    

    (*) The download of this directory is not yet complete:

    /dev/mapper/vg_pool-lv_pool_data 11543320848 10072483328 1470821136  88% /var/local
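
    As an aside, a sketch of how the two du transcripts above could be compared mechanically, assuming they have been saved to remote.txt and local.txt (hypothetical file names):

    # keep only the per-directory lines, normalize trailing slashes, and join on directory name
    grep -E '^[0-9]+[[:space:]]+\./' remote.txt | sed 's:/$::' | sort -k2 > remote.sizes
    grep -E '^[0-9]+[[:space:]]+\./' local.txt  | sed 's:/$::' | sort -k2 > local.sizes
    # print any directory whose byte counts differ between the NOAA server and the copy in hand
    join -j 2 remote.sizes local.sizes | awk '$2 != $3 {print $1, "remote:", $2, "local:", $3}'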
    
  26. Jan Galkowski reporter

    I don't see anything like the 18 Tb @Benjamin Rose estimated. Of course, I am limiting myself to the /cdr subdirectory, which is where all the good stuff is.

    I am going to stop the download of ./ocean-atmos-props on pub05, get rid of that subdirectory, and continue that download on pub01 after putting in a request for space for 870 Gb there.

  27. Jan Galkowski reporter
     amsu-ch7 [have]
     amsu-ch9 [have]
     amsu-msu [have]
     appx [have]
     avhrr-aot-v2 [have]
     avhrr-land [have]
     gridsat [have]
     hirs-ch12 [have]
     hirs-olr [have]
     msu-ch3-amsu-ch7 [have]
     msu-ch4-amsu-ch9 [have]
     msu-mlt-noaa [have]
     nesdis-msu-amsu [have]
     ocean-atmos-props [in progress]
     ocean-heat-fluxes [to be done]
     ozone-zonal-mean-esrl [to be done]
     patmosx [to be done]
     persiann [to be done]
     rss-uat-msu-amsu [to be done]
     sea-surface-temp-whoi [to be done]
     snowcover [to be done]
    
  28. Jan Galkowski reporter

    Outside of eclipse.ncdc.noaa.gov/cdr, here are the datasets that need to be grabbed, and their sizes. These are all part of eclipse.ncdc.noaa.gov/pub, and total 9.9 Tb (a quick tally sketch follows the listing).

      48492591547 misc
    2593464629147 surfa
                0 Data_In_Development
      95486044061 ibtracs
     868107579283 hursat
     124528278013 gridsat
     [irrelevant] cdr
    4227269238363 isccp
                0 grisat
     231687891497 oisst -> ../san2/oisst
    1319941013006 seawinds
     103499792963 ssmi
     188637721583 oisst.old
      [duplicate] OI-daily-v2 -> oisst/
      28225757524 vtpr
       1676646188 iosp
                0 gpcp
                0 gacp
          4847157 satfaq
         14564963 publications
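
    As a sanity check on that total, the numeric sizes above can be summed with something like the following, assuming the listing has been saved to sizes.txt (a hypothetical file; the bracketed entries are skipped because their first field is non-numeric):

    awk '$1 ~ /^[0-9]+$/ { sum += $1 } END { printf "%.2f TB\n", sum/1e12 }' sizes.txt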
    
  29. Jan Galkowski reporter

    Copying this third part on azi03, to /eclipse-part3of3.ncdc.noaa.gov-ftp/, preserving the upper subdirectory structure by putting it in ./pub there (a sketch of the mirror invocation follows the listing). ./cdr is at the same level on the remote server, that is:

    [jan@azi03 local_data]$ lftp eclipse.ncdc.noaa.gov
    lftp eclipse.ncdc.noaa.gov:~> ls
    drwxrwsr-x  38 root     1085        32768 Jan  4 16:35 cdr
    lrwxrwxrwx   1 root     root            4 Dec  7  2011 pub -> san1
    drwxr-sr-x  21 2829     rsad        32768 Jan  3 10:15 san1
    drwxrwsr-x  16 root     rsad        32768 Oct 31  2013 san2
    lftp eclipse.ncdc.noaa.gov:/>
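
    The exact command used isn't recorded in this ticket; a minimal lftp invocation for this part might look like the following (the parallelism and log-file name are assumptions):

    cd /eclipse-part3of3.ncdc.noaa.gov-ftp
    lftp -e "mirror --continue --parallel=2 --log=mirror-pub.log /pub pub; quit" eclipse.ncdc.noaa.gov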
    
  30. Jan Galkowski reporter

    To summarize, this is being done in three parts. The first, on pub05, is complete, and is about 6 Tb; it is logged in README.txt for planning purposes. To give some idea of the scale, the SHA256 sum is taking much of the day to calculate, and it will be followed by a SHA512 sum. Hmmmm. I wonder how long this will take to get to its eventual home, and how much of it will arrive intact. This part is roughly half of ftp://eclipse.ncdc.noaa.gov/cdr. The other half of that subdirectory is being done on pub01; it's about 870 Gb. The third part consists of the ftp://eclipse.ncdc.noaa.gov/pub directory. It is being done on azi03 and, as noted above, it's about 10 Tb. There is an open question about how we'll get it from azi03 to one of the pub servers, or whether we want to.
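
    For reference, a per-file digest list of the sort named later in this ticket (e.g. eclipse.ncdc.noaa.gov-part1-ftp.sha256.txt.gz) can be produced with a sketch like this; the exact commands used aren't recorded here:

    # run from /var/local/pub; the gzipped digest list lands next to the tree, not inside it
    cd /var/local/pub
    find eclipse.ncdc.noaa.gov-part1-ftp -type f -print0 \
      | xargs -0 -n 64 sha256sum \
      | gzip > eclipse.ncdc.noaa.gov-part1-ftp.sha256.txt.gz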

  31. marsroverdriver

    @Sakari Maaranen As it happens, the machine where I was downloading this (a box at home, not one of the download servers) rebooted and I never got much of the data. I forgot that I'd even commented here about downloading it.

    So, I do not have a copy of this data.

  32. Jan Galkowski reporter
    `set_read_only.sh` in `/usr/local/bin` continues to fail to work:

    [jan@pub05 pub]$ ls -lt /usr/local/bin
    total 4
    -rwxr-xr-x 1 root ftp 966 Feb  8 23:36 set_read_only.sh
    [jan@pub05 pub]$ /usr/local/bin/set_read_only.sh eclipse.ncdc.noaa.gov-part1-ftp
    Starting on /var/local/pub/eclipse.ncdc.noaa.gov-part1-ftp ...
    find: Failed to save working directory in order to run a command on ‘eclipse.ncdc.noaa.gov-part1-ftp.sha256.txt.gz’: Permission denied
    Failed to set files under /var/local/pub/eclipse.ncdc.noaa.gov-part1-ftp to 0444
    Done.
    [jan@pub05 pub]$
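
    The "Failed to save working directory" message is what GNU find emits when it cannot save its working directory before running a per-file command (typically with -execdir), which suggests a directory-permission problem rather than a problem with the files themselves. A hedged sketch of the intended effect (this is not the contents of set_read_only.sh, which aren't shown here): directories to 0555 so they stay traversable, files to 0444:

    sudo find /var/local/pub/eclipse.ncdc.noaa.gov-part1-ftp -type d -exec chmod 0555 {} +
    sudo find /var/local/pub/eclipse.ncdc.noaa.gov-part1-ftp -type f -exec chmod 0444 {} +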
    
  33. Jan Galkowski reporter

    The last portion, the /pub subdirectory, is in progress on azi03. I plan to move it to a pub0x server when it is completed, and will allocate space in README.txt accordingly.

  34. Sakari Maaranen

    @Jan Galkowski I have notified you that azi03 will be converted to pub03. You can keep the data there. Please stop your processes on azi03 though, so I can do the conversion. There has been a message of the day on azi03 since January 18, reading:

    Jan, no need to interrupt work on this server (azi03). Data is safe here.
    However, please don't start new long running processes
    reading or writing ~jan/local_data/.
    
    Let running ones finish.
    
    Make sure no screen has shell open currently in that directory.
    
    When your local_data is idle, I will re-mount it under /var/local/.
    You can then continue. Data will be kept.
    
    Thank you!
    

    Still, you have screens and processes running there. Why are you not complying? You have had weeks, with a reminder every time you log in. Would you comply now, thank you.

    [sam@azi03 ~]$ sudo ps -U jan -F --cols 4096
    UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
    jan       7157 30807  0 48372  6596   2 Feb09 pts/4    01:05:36 lftp eclipse.ncdc.noaa.gov
    jan       8475  8473  0 36275  2448   1 01:22 ?        00:00:00 sshd: jan@pts/0
    jan       8476  8475  0 28879  2112   7 01:22 pts/0    00:00:00 -bash
    jan       8495  8476  0 31894  1176   0 01:22 pts/0    00:00:00 screen -r
    jan      17400 30132  0 159519 83076  3 Feb02 pts/9    01:28:22 httrack https://pds.nasa.gov -O . -i --mirror --depth=8 --ext-depth=3 --max-rate=100000000 %c500 --sockets=30 --retries=30 --host-control=0 TN 60 --near --robots=0 %s
    jan      26440     1  0 32589  3712   0 Jan06 ?        00:03:58 SCREEN -L
    jan      30132 26440  0 28880  2020   2 Jan31 pts/9    00:00:00 /bin/bash
    jan      30178 26440  0 28879  2120   1 Feb06 pts/1    00:00:00 /bin/bash
    jan      30807 26440  0 28879  2008   0 Feb03 pts/4    00:00:00 /bin/bash
    

    Keep the data on azi03. Do not move it to another server. azi03 itself will become the pub03 server, in place. You can allocate the space on azi03.

  35. Jan Galkowski reporter

    Sakari,

    The answer is simple: I am trying to bring Issue #10 to a close as quickly as possible. This has been estimated and checked at about 18 Tb of data overall. It is clear, therefore, that it cannot all remain on one server given our 11 Tb limits, part of which is already used by the existing /var/local/pub.

    Since more than one server is required, since azi03 has the most free space, and since aftp.cmdl... needed to be done, I put azi03 to work, along with pub01 and pub05. The assignment of work was based upon space availability and requirements, and the final disposition of these datasets is reflected in /var/local/pub/README.txt, as you have requested.

    azi03 will be available when these transfers are completed. How long? I don't know. Sooner using three servers than otherwise.

    At least in my personal experience on this project, these large-scale reconfigurations, including the datarefuge fiasco, have been the source of most of the damage to the ongoing effort. I am reasonably diligent about these tasks, but when the rules and placements change midstream, it is easy to forget and get confused. This is how the podaac data was lost, and how, on two separate tickets which I did not point out loudly until now, I had to restart transfers from the beginning because I was not sure what I had.

    Wait.

  36. Sakari Maaranen

    @Jan Galkowski There has been exactly one significant reconfiguration. It was caused by the Storage Boxes not working reliably when installed according to instructions. We dealt with that successfully. There was no way to know in advance that the CIFS mounts of those volumes would not work reliably. As soon as we learned of it, it was fixed.

    Instructions for how to work have not changed much. The changes we have had have built on the previous instructions, complementing them. Usually it has been about moving a directory from a working space to a public space, which you would need to do anyway. We have added instructions for how to document your work.

    It is not anyone else's fault if you don't know what you download, or if you download data sets that do not fit in your available space. Many of the changes were made at your request, to accommodate your needs. Other project members have somehow managed to use the same resources just fine without creating conflicts. They have also asked questions, we have discussed and found answers, and they have published their results according to the instructions given.

    In this case, we will simply move your workspace as is to make it public when you are ready. Remounting a directory is a trivial operation that will have to wait until your processes have finished. It is not a "large scale reconfiguration". During the last three weeks, you would only need to have paused your work for minutes to let me reconfigure. Then continue. You also have four other servers you could use. You can download directly to the same server where you publish, as long as you keep ongoing work in a separate working directory.

  37. Jan Galkowski reporter

    (1) So all I need to do is CTRL-z?

    (2) Having to move datasets around to pub servers was also disruptive.

    (3) Apart from Ben, who is using his own facilities, the difficulty I have had is related to the size of the datasets I'm taking, both from ncdc.noaa and ARGO. These are large. I tried once to size a dataset for Ben and it was so large the sizing never finished. So Ben took it anyway.

    In any case, the set of unfinished work is finite. I'm not doing much now except monitoring and waiting for things to finish transferring. ARGO isn't as large as ncdc.noaa, but it sure is slow.

  38. Sakari Maaranen

    @Jan Galkowski what you needed to do has been documented in daily reminders to you on the server for a long period of time, as quoted above. At any point during that time, when your processes had finished, I could have done the remount. It does not say ctrl+z. For the future, the instruction is still the same. Please read it as quoted above; it is identical to the message you continue to see on the server when you log in. Please notify me when that happens within the next few weeks - that is, when your processes have finished and the screens have exited as instructed.

    There have been no disruptions if you have followed the instructions and planned your use of space. There will be, naturally, if you can't or won't do that. In the case of data sets of unknown size, anyone with basic arithmetic skills can do the math on what happens if the data set is too large. Then you just handle the conflict and make a decision. I was not expecting it to be such a great challenge for someone saying he is a data expert.

  39. Jan Galkowski reporter

    Progress report: Subdirectories of /pub are being copied by pub01, pub05, and azi03. The /pub/surfa subdirectory that azi03 is working on is 2.6 Tb; about 1.6 Tb has been done. I would therefore expect at least a couple more days. I'll try to get a rate.

    It is necessary to spread the work out over several servers, since both the ClimateMirror people and, I believe, Ben or Greg have found that too many processes or connections going to eclipse.ncdc.noaa.gov get throttled or disallowed.

  40. Jan Galkowski reporter

    Status report.

    I have 295 Gb on pub05, currently downloading ./pub/oisst.old.

    There are 961 Gb on pub01, currently downloading ./pub/isccp.

    There are 2.47 Tb on azi03, currently downloading ./pub/ibtracs.

  41. Jan Galkowski reporter

    Status report.

    596 Gb on pub05, currently downloading ./pub/gridsat.

    1201 Gb on pub01, currently (still!) downloading ./pub/isccp.

    2490 Gb on azi03, currently (still!) downloading ./pub/ibtracs.

    There is also:

    On pub05:

    [jan@pub05 2017-02-06T0356]$ find . -maxdepth 1 -type d -exec nice ionice du -s -b -c --apparent-size -BG {} \;
    6410G   .
    6410G   total
    6410G   ./cdr
    6410G   total
    [jan@pub05 2017-02-06T0356]$ cd cdr
    [jan@pub05 cdr]$ ls -lt
    total 52
    dr-xr-xr-x 2 jan jan 4096 Feb  5 03:32 nesdis-msu-amsu
    dr-xr-xr-x 2 jan jan 4096 Feb  5 03:32 msu-mlt-noaa
    dr-xr-xr-x 3 jan jan 4096 Feb  5 03:31 msu-ch4-amsu-ch9
    dr-xr-xr-x 3 jan jan 4096 Feb  5 03:31 msu-ch3-amsu-ch7
    dr-xr-xr-x 5 jan jan 4096 Feb  5 03:30 hirs-olr
    dr-xr-xr-x 3 jan jan 4096 Feb  5 03:17 hirs-ch12
    dr-xr-xr-x 4 jan jan 4096 Feb  5 03:16 gridsat
    dr-xr-xr-x 5 jan jan 4096 Jan 18 11:32 avhrr-land
    dr-xr-xr-x 4 jan jan 4096 Jan 15 03:31 avhrr-aot-v2
    dr-xr-xr-x 4 jan jan 4096 Jan 11 16:42 appx
    dr-xr-xr-x 2 jan jan 4096 Jan  9 02:42 amsu-msu
    dr-xr-xr-x 3 jan jan 4096 Jan  9 02:39 amsu-ch9
    dr-xr-xr-x 3 jan jan 4096 Jan  9 02:38 amsu-ch7
    [jan@pub05 cdr]$
    

    within its /var/local/pub space.

    And on pub01, there's 802 Gb, per:

    [jan@pub01 2017-02-13T2304]$ find . -maxdepth 1 -type d -exec nice ionice du -s -b -c --apparent-size -BG {} \;
    802G    .
    802G    total
    802G    ./cdr
    802G    total
    [jan@pub01 2017-02-13T2304]$ cd cdr
    [jan@pub01 cdr]$ ls -lt
    total 36
    dr-xr-xr-x 4 jan jan 4096 Feb 13 23:04 solar-irradiance
    dr-xr-xr-x 2 jan jan 4096 Feb 13 18:15 snowcover
    dr-xr-xr-x 3 jan jan 4096 Feb 11 21:22 sea-surface-temp-whoi
    dr-xr-xr-x 2 jan jan 4096 Feb 11 17:06 rss-uat-msu-amsu
    dr-xr-xr-x 3 jan jan 4096 Feb 10 20:15 persiann
    dr-xr-xr-x 2 jan jan 4096 Feb 10 20:14 patmosx
    dr-xr-xr-x 2 jan jan 4096 Feb 10 19:40 ozone-zonal-mean-esrl
    dr-xr-xr-x 3 jan jan 4096 Feb  8 06:09 ocean-heat-fluxes
    dr-xr-xr-x 3 jan jan 4096 Feb  6 04:02 ocean-atmos-props
    [jan@pub01 cdr]$
    

    within its /var/local/pub space.

  42. Jan Galkowski reporter

    The pub05 piece of these, informally dubbed part 3c, is completed. Doing SHA sums for it. Roadmap now looks like the following, with azi03 and pub01 still busy and having two more directories each to do.

    drwxrwsr-x  28 2829     rsad        32768 Feb  1 12:45 misc   azi03
    drwxrwsr-x  23 2829     rsad        32768 Dec 21 18:07 surfa  azi03
    drwxrws--x  42 2829     rsad        32768 Nov 30 09:36 Data_In_Development azi03
    drwxrwsr-x  24 2829     rsad        32768 Sep 26 11:48 ibtracs azi03
    drwxr-sr-x   9 2829     rsad        32768 Jul 21  2015 hursat azi03 (yet to do)
    -rw-r--r--   1 59990    root            0 Jul  6  2015 #rw-check
    drwxrwsr-x   5 2829     rsad          512 May 22  2015 gridsat pub05
    drwxrwsr-x   2 2829     rsad        32768 Dec 16  2014 cdr [[[skip]] ***check***]
    drwxrwxr-x  10 2829     rsad          512 Sep  4  2014 isccp  pub01
    drwxrws--x   3 2829     rsad          512 Jan 30  2014 grisat pub05 [empty]
    lrwxrwxrwx   1 root     rsad           13 Oct 12  2013 oisst -> ../san2/oisst (pub05)
    drwxrwsr-x   7 2829     rsad        32768 Mar 28  2013 seawinds pub01 (yet to do)
    drwxrwsr-x  10 2829     rsad          512 Mar 28  2013 ssmi pub05
    drwxrwsr-x  10 2829     rsad          512 Dec 31  2012 oisst.old pub05
    -rw-r--r--   1 root     rsad            0 Aug 29  2012 pt.txt
    lrwxrwxrwx   1 2829     rsad            6 Dec  8  2011 OI-daily-v2 -> oisst/ (see above)
    drwxrwsr-x  78 2829     rsad        32768 Apr  8  2011 vtpr pub05
    drwx------   2 root     root          512 Jan 28  2011 lost+found
    drwxrwsr-x   5 2829     rsad          512 Nov 30  2010 iosp pub05
    drwxrws--x   7 2829     rsad          512 Mar 10  2010 gpcp pub05
    drwxrws--x   2 2829     rsad        32768 Dec 19  2008 gacp pub05
    drwxr-sr-x   5 2829     rsad          512 Dec 16  2008 satfaq pub05
    drwxr-sr-x   3 2829     rsad          512 Sep 12  2008 publications  pub05
    
  43. Jan Galkowski reporter

    From time to time, more so recently, I get the kinds of errors documented in the screenshot below: Issue10AccessFailures_2017-03-26_224243.png. What I have done, and intend to continue to do, is just push on: this is part of a multi-month collection, and I cannot go back and debug these kinds of anomalies; it's just too big. Presumably, down the road, the developers of these sites might take a "lessons learned" from this experience, allowing reliable public transport of data in bulk.

  44. Jan Galkowski reporter

    ./pub/hursat completed on azi03. Got a fair number of errors of the kind described just above:

    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009271N09146: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009271N15257: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009271N46317: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009272N07164: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009279N15311: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009281N15144: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009282N16252: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009287N10154: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009288N07267: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009289N12230: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009291N16111: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009299N12153: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009305N14134: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009308N11279: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009311N20157: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009313N12071: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009324N06129: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009325N06148: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009328N06108: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009337N14143: No such file or directory
    mirror: Access failed: 550 /pub/hursat/b1/imagery/2009345N07085: No such file or directory
    Total: 828 directories, 126705 files, 0 symlinks
    New: 126705 files, 0 symlinks
    265992978394 bytes transferred in 279300 seconds (930.0K/s)
    5235 errors detected
    lftp eclipse.ncdc.noaa.gov:/pub>
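
    If it ever becomes worth retrying these, the failed paths can be pulled out of a saved transcript with something like this (the transcript file name is hypothetical):

    grep -oE '550 /pub/hursat[^:]*' hursat-mirror-transcript.txt | awk '{print $2}' | sort -u > hursat-missing.txt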
    
  45. Jan Galkowski reporter

    azi03's piece of the effort on this ticket is completed. The assignment chart now looks like:

    drwxrwsr-x  28 2829     rsad        32768 Feb  1 12:45 misc   azi03
    drwxrwsr-x  23 2829     rsad        32768 Dec 21 18:07 surfa  azi03
    drwxrws--x  42 2829     rsad        32768 Nov 30 09:36 Data_In_Development azi03
    drwxrwsr-x  24 2829     rsad        32768 Sep 26 11:48 ibtracs azi03
    drwxr-sr-x   9 2829     rsad        32768 Jul 21  2015 hursat azi03 
    -rw-r--r--   1 59990    root            0 Jul  6  2015 #rw-check
    drwxrwsr-x   5 2829     rsad          512 May 22  2015 gridsat pub05
    drwxrwsr-x   2 2829     rsad        32768 Dec 16  2014 cdr [[[skip]] ***check***]
    drwxrwxr-x  10 2829     rsad          512 Sep  4  2014 isccp  pub01
    drwxrws--x   3 2829     rsad          512 Jan 30  2014 grisat pub05 [empty]
    lrwxrwxrwx   1 root     rsad           13 Oct 12  2013 oisst -> ../san2/oisst (pub05)
    drwxrwsr-x   7 2829     rsad        32768 Mar 28  2013 seawinds pub01 [[**in progress**]]
    drwxrwsr-x  10 2829     rsad          512 Mar 28  2013 ssmi pub05
    drwxrwsr-x  10 2829     rsad          512 Dec 31  2012 oisst.old pub05
    -rw-r--r--   1 root     rsad            0 Aug 29  2012 pt.txt
    lrwxrwxrwx   1 2829     rsad            6 Dec  8  2011 OI-daily-v2 -> oisst/ (see above)
    drwxrwsr-x  78 2829     rsad        32768 Apr  8  2011 vtpr pub05
    drwx------   2 root     root          512 Jan 28  2011 lost+found
    drwxrwsr-x   5 2829     rsad          512 Nov 30  2010 iosp pub05
    drwxrws--x   7 2829     rsad          512 Mar 10  2010 gpcp pub05
    drwxrws--x   2 2829     rsad        32768 Dec 19  2008 gacp pub05
    drwxr-sr-x   5 2829     rsad          512 Dec 16  2008 satfaq pub05
    drwxr-sr-x   3 2829     rsad          512 Sep 12  2008 publications  pub05
    

    My understanding is that I am supposed to leave these files on azi03 intact. Its size on azi03 is 2802 Gb.

    The companion download of pds.nasa.gov for Issue #23 on azi03 continues and presently has 1154 Gb.

  46. Jan Galkowski reporter

    pub01 is doing ./pub/seawinds. That will be the last piece. I verified that this ./pub/cdr is a replica of the ./cdr which was completed by 2017-02-13T2304 and lives in pub01's /var/local/pub/eclipse-part2of3.ncdc.noaa.gov-ftp (a sanity-check sketch follows the table). So the update of the assignment now is:

    drwxrwsr-x  28 2829     rsad        32768 Feb  1 12:45 misc   azi03
    drwxrwsr-x  23 2829     rsad        32768 Dec 21 18:07 surfa  azi03
    drwxrws--x  42 2829     rsad        32768 Nov 30 09:36 Data_In_Development azi03
    drwxrwsr-x  24 2829     rsad        32768 Sep 26 11:48 ibtracs azi03
    drwxr-sr-x   9 2829     rsad        32768 Jul 21  2015 hursat azi03 
    -rw-r--r--   1 59990    root            0 Jul  6  2015 #rw-check
    drwxrwsr-x   5 2829     rsad          512 May 22  2015 gridsat pub05
    drwxrwsr-x   2 2829     rsad        32768 Dec 16  2014 cdr [replica of ./cdr done by pub01 by 2017-02-13]
    drwxrwxr-x  10 2829     rsad          512 Sep  4  2014 isccp  pub01
    drwxrws--x   3 2829     rsad          512 Jan 30  2014 grisat pub05 [empty]
    lrwxrwxrwx   1 root     rsad           13 Oct 12  2013 oisst -> ../san2/oisst (pub05)
    drwxrwsr-x   7 2829     rsad        32768 Mar 28  2013 seawinds pub01 [[**in progress**]]
    drwxrwsr-x  10 2829     rsad          512 Mar 28  2013 ssmi pub05
    drwxrwsr-x  10 2829     rsad          512 Dec 31  2012 oisst.old pub05
    -rw-r--r--   1 root     rsad            0 Aug 29  2012 pt.txt
    lrwxrwxrwx   1 2829     rsad            6 Dec  8  2011 OI-daily-v2 -> oisst/ (see above)
    drwxrwsr-x  78 2829     rsad        32768 Apr  8  2011 vtpr pub05
    drwx------   2 root     root          512 Jan 28  2011 lost+found
    drwxrwsr-x   5 2829     rsad          512 Nov 30  2010 iosp pub05
    drwxrws--x   7 2829     rsad          512 Mar 10  2010 gpcp pub05
    drwxrws--x   2 2829     rsad        32768 Dec 19  2008 gacp pub05
    drwxr-sr-x   5 2829     rsad          512 Dec 16  2008 satfaq pub05
    drwxr-sr-x   3 2829     rsad          512 Sep 12  2008 publications  pub05
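
    The ./pub/cdr equivalence mentioned above can be spot-checked on the remote side with a size comparison along these lines (a sketch only; the actual verification method isn't recorded in this ticket):

    lftp -e "du -s -b /cdr; du -s -b /pub/cdr; quit" eclipse.ncdc.noaa.gov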
    
  47. Jan Galkowski reporter
    6882 GiB pub05                                                  
    6784 GiB pub04                                                  
    2400 GiB pub01 (incomplete ... still doing seawinds)            
     860 GiB pub01
    2802 GiB azi03                                                  
    ---------------------------------------------------------       
    19728 GiB TOTAL (as of 1 Apr 2017, still in progress)           
    
  48. Jan Galkowski reporter

    Checksums for SHA256 and SHA512 completed on azi03:

    -rw-rw-r-- 1 jan jan 259609867 Apr  2 19:47 eclipse-part3of3.ncdc.noaa.gov-ftp-20170401.sha512.txt.gz
    -rw-rw-r-- 1 jan jan 142662623 Apr  2 00:12 eclipse-part3of3.ncdc.noaa.gov-ftp-20170401.sha256.txt.gz
    drwxrwxr-x 3 jan jan      4096 Apr  1 17:29 2017-04-01T1728
    

    It is telling that these took an entire day to conclude.
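
    When the copies eventually land at their final home, the gzipped lists can be fed back to sha256sum and sha512sum for verification, along these lines (run from the directory the digests were computed relative to):

    zcat eclipse-part3of3.ncdc.noaa.gov-ftp-20170401.sha256.txt.gz | sha256sum -c --quiet
    zcat eclipse-part3of3.ncdc.noaa.gov-ftp-20170401.sha512.txt.gz | sha512sum -c --quiet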

  49. Jan Galkowski reporter

    pub01 completed `./pub/seawinds` tonight. SHA256 and SHA512 hashes are being run.

    drwxrwsr-x  28 2829     rsad        32768 Feb  1 12:45 misc   azi03
    drwxrwsr-x  23 2829     rsad        32768 Dec 21 18:07 surfa  azi03
    drwxrws--x  42 2829     rsad        32768 Nov 30 09:36 Data_In_Development azi03
    drwxrwsr-x  24 2829     rsad        32768 Sep 26 11:48 ibtracs azi03
    drwxr-sr-x   9 2829     rsad        32768 Jul 21  2015 hursat azi03 
    -rw-r--r--   1 59990    root            0 Jul  6  2015 #rw-check
    drwxrwsr-x   5 2829     rsad          512 May 22  2015 gridsat pub05
    drwxrwsr-x   2 2829     rsad        32768 Dec 16  2014 cdr [replica of ./cdr done by pub01 by 2017-02-13]
    drwxrwxr-x  10 2829     rsad          512 Sep  4  2014 isccp  pub01
    drwxrws--x   3 2829     rsad          512 Jan 30  2014 grisat pub05 [empty]
    lrwxrwxrwx   1 root     rsad           13 Oct 12  2013 oisst -> ../san2/oisst (pub05)
    drwxrwsr-x   7 2829     rsad        32768 Mar 28  2013 seawinds pub01 
    drwxrwsr-x  10 2829     rsad          512 Mar 28  2013 ssmi pub05
    drwxrwsr-x  10 2829     rsad          512 Dec 31  2012 oisst.old pub05
    -rw-r--r--   1 root     rsad            0 Aug 29  2012 pt.txt
    lrwxrwxrwx   1 2829     rsad            6 Dec  8  2011 OI-daily-v2 -> oisst/ (see above)
    drwxrwsr-x  78 2829     rsad        32768 Apr  8  2011 vtpr pub05
    drwx------   2 root     root          512 Jan 28  2011 lost+found
    drwxrwsr-x   5 2829     rsad          512 Nov 30  2010 iosp pub05
    drwxrws--x   7 2829     rsad          512 Mar 10  2010 gpcp pub05
    drwxrws--x   2 2829     rsad        32768 Dec 19  2008 gacp pub05
    drwxr-sr-x   5 2829     rsad          512 Dec 16  2008 satfaq pub05
    drwxr-sr-x   3 2829     rsad          512 Sep 12  2008 publications  pub05
    
    7042 GiB pub05                                                  
    3559 GiB pub01
    2803 GiB pub03                                                  
    ---------------------------------------------------------       
    13404 GiB TOTAL (as of 10 Apr 2017, completed)
    