/dev/mapper/vg_pool-lv_data_jan on azi01 is running out of space and no extents

Issue #68 closed
Jan Galkowski
created an issue

Similar problem as on azi03, although there the situation was postponed by juggling files around. That cannot be done so readily here, as there are wgets and an lftp still running and contributing data. Hence the priority.

[jan@azi01 local_data]$ sudo df -k
Filesystem                            1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_sys-lv_sys_root          8125880    1962088    5727980  26% /
devtmpfs                                7962000          0    7962000   0% /dev
tmpfs                                   7984924          0    7984924   0% /dev/shm
tmpfs                                   7984924      41616    7943308   1% /run
tmpfs                                   7984924          0    7984924   0% /sys/fs/cgroup
/dev/md0                                1031064     192404     786284  20% /boot
/dev/mapper/vg_sys-lv_sys_var          10190136     897408    8752056  10% /var
/dev/mapper/vg_sys-lv_sys_home         20511356     314624   19131772   2% /home
/dev/mapper/vg_sys-lv_sys_vartmp        5029504     224108    4526868   5% /var/tmp
/dev/mapper/vg_sys-lv_sys_tmp           5029504      10264    4740712   1% /tmp
tmpfs                                   1596988          0    1596988   0% /run/user/1003
tmpfs                                   1596988          0    1596988   0% /run/user/1006
/dev/mapper/vg_pool-lv_data_jan      4227420072 3662525752  371606540  91% /home/jan/local_data
/dev/mapper/vg_pool-lv_data_maxwell  2849543800      82216 2712166376   1% /home/maxwell/local_data
/dev/mapper/vg_pool-lv_data_borislav  528313784   23964592  477489264   5% /home/borislav/local_data
tmpfs                                   1596988          0    1596988   0% /run/user/1005
[jan@azi01 local_data]$ ps -ef | grep -E "wget|lftp|rsync|tar"
jan        814 16615  0 Jan09 pts/1    00:20:37 lftp ftp://eclipse.ncdc.noaa.gov
jan       7196 28760  0 Jan05 pts/8    00:41:51 wget --dns-timeout=10 --connect-timeout=20 --read-timeout=120 --wait=5 --random-wait -e robots=off --prefer-family=IPv4 --tries=40 --timestamping=on --recursive --level=8 --no-remove-listing --follow-ftp -nv --output-file=rawdata-oceanobservatories-org-files.log --no-check-certificate https://rawdata.oceanobservatories.org/files/
jan       7268  7252  0 Jan05 pts/15   00:10:06 wget --wait=0 --mirror --no-verbose -4 --output-file=DOE-EERE.log --no-check-certificate --tries=40 --page-requisites https://energy.gov/eere/office-energy-efficiency-renewable-energy
borislav 10752     1  0 Jan09 ?        00:19:06 wget --wait=0 --mirror --no-verbose -4 --output-file=noaa-severe-weather.log --no-check-certificate --tries=40 --page-requisites https://www.ncdc.noaa.gov/data-access/severe-weather
jan      27558 26826  0 07:20 pts/5    00:00:00 grep --color=auto -E wget|lftp|rsync|tar
[jan@azi01 local_data]$
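As a quick check, the `df -k` output above can be filtered for filesystems past a usage threshold (a small awk sketch; the 90% cutoff is an assumption, chosen because lv_data_jan sits at 91%):

```shell
# Flag filesystems at or above 90% use in `df -k` output (skips the header row).
df -k | awk 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= 90) print $6, $5 "%" }'
```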

Comments (12)

  1. Sakari Maaranen

Do not allocate all capacity immediately, because logical volumes are easy to extend but difficult to shrink. That is what happens if you give all capacity to one person who is not using it. So leave the capacity unallocated and take more only when you need it.
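If vg_pool still has free extents, the busy volume can be grown online along those lines (a sketch, not a tested procedure for this host; the `+500G` figure is illustrative, and `--resizefs` assumes a filesystem that supports online growth, such as ext4 or XFS):

```shell
# Check free extents in the pool volume group, then grow the LV online.
sudo vgs vg_pool                                   # VFree column shows remaining capacity
sudo lvextend --size +500G --resizefs /dev/vg_pool/lv_data_jan
```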

  2. Jan Galkowski reporter

So, in future, if I don't have a budgeted request for a specific amount of storage, I will increase the size by only 30% at a time. And if I cannot, I'll flag it by writing up an issue.
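That 30% step, sketched in shell arithmetic (the starting size is the current 1K-block count of lv_data_jan from the df output above):

```shell
# Grow-by-30% rule of thumb: new size = current size * 1.3, in integer KB.
current_kb=4227420072
new_kb=$(( current_kb * 13 / 10 ))
echo "$new_kb"    # 5495646093 KB, i.e. roughly 5.1 TiB
```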

  3. Jan Galkowski reporter

    On second thought, there are a couple of directories that are finished. I'll move those and note them in their tickets.

    I have created /home/maxwell/local_data/jan_spillover on azi01 to accommodate.
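Moving a finished directory across volumes needs a copy-then-delete fallback, since `mv` cannot always move a directory tree between filesystems in one step (a sketch; the source path in the usage comment is hypothetical, the spillover path is the one created above):

```shell
# Move a finished data set into the spillover area; fall back to
# copy + delete when source and destination are on different volumes.
move_done() {
  src=$1 dest=$2
  mkdir -p "$dest"
  mv "$src" "$dest"/ 2>/dev/null || { cp -a "$src" "$dest"/ && rm -rf "$src"; }
}
# Usage (first path is a made-up example of a finished directory):
# move_done /home/jan/local_data/some_finished_set /home/maxwell/local_data/jan_spillover
```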

  4. Sakari Maaranen

In future we could create dedicated servers for serving completed data sets, with contiguous local storage across their entire capacity. We would keep download servers with personal quotas to avoid conflicts, and mirror servers with single large volumes for data sets of known size.
