Ubuntu server minimal 20.04 error: filesystem statistic error: cannot read /sys/class/block/root/stat -- No such file or directory
I am using monit version 5.26 (but I tested this with 5.27 as well). I am on a VM running Ubuntu server minimal 20.04.
This is what I have in my config:
check filesystem rootfs path /
if space usage > 80% then alert
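For reference, Monit's "check filesystem" statement accepts a block-device path as well as a mountpoint. Assuming the root filesystem lives on /dev/vda1 (a device name taken from this VM's mount output below; adjust to your system), a variant of the check pinned to the device rather than the mountpoint would look like:

```
check filesystem rootfs with path /dev/vda1
    if space usage > 80% then alert
```

This is only a sketch of an alternative form of the check, not a confirmed workaround for the error below.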
This is the error that appears in the log file:
filesystem statistic error: cannot read /sys/class/block/root/stat -- No such file or directory
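The shape of the failing path suggests how it is built: the device name from the mount table, with its /dev/ prefix stripped, is appended under /sys/class/block. A minimal sketch of that construction, assuming the mount table reports the root device as /dev/root (as it does on these cloud images):

```shell
# Sketch: how the failing sysfs path could be constructed from the
# mount-table device name. "/dev/root" is the device /etc/mtab reports here.
device="/dev/root"
name="${device#/dev/}"                      # strip "/dev/" prefix -> "root"
stat_path="/sys/class/block/${name}/stat"   # this sysfs entry does not exist
echo "$stat_path"
```

This prints /sys/class/block/root/stat, the exact path in the error, even though the real sysfs entry lives under the vda/xvda name.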
This is what I have in /sys/class/block
Thank you very much for all your help!
-
repo owner It seems that your system has paravirtualized disks (vda). Please can you provide following data? :
- output of “mount”
- output of “ls -l /sys/class/block”
-
reporter Thank you so much for looking into this @Tildeslash !
output of mount:
/dev/vda1 on / type ext4 (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=505664k,nr_inodes=126416,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=101500k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
/dev/vda15 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=101496k,mode=700,uid=1000,gid=1000)
output of ls -l /sys/class/block:
total 0
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop0 -> ../../devices/virtual/block/loop0
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop1 -> ../../devices/virtual/block/loop1
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop2 -> ../../devices/virtual/block/loop2
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop3 -> ../../devices/virtual/block/loop3
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop4 -> ../../devices/virtual/block/loop4
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop5 -> ../../devices/virtual/block/loop5
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop6 -> ../../devices/virtual/block/loop6
lrwxrwxrwx 1 root root 0 Oct 1 08:09 loop7 -> ../../devices/virtual/block/loop7
lrwxrwxrwx 1 root root 0 Oct 1 08:09 vda -> ../../devices/pci0000:00/0000:00:04.0/virtio1/block/vda
lrwxrwxrwx 1 root root 0 Oct 1 08:09 vda1 -> ../../devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda1
lrwxrwxrwx 1 root root 0 Oct 1 08:09 vda14 -> ../../devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda14
lrwxrwxrwx 1 root root 0 Oct 1 08:09 vda15 -> ../../devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda15
-
repo owner Thank you for the data. It seems that, for some reason, the mount entry was mapped to “root” instead of “vda1”. The “root” name can come from a filesystem label / device mapping.
It would be best if you could give us access to a virtual machine where the problem occurs (you can send details to support@mmonit.com).
If no remote access is possible, could you please send the content of /etc/mtab?
-
Same problem here. FWIW, this is on AWS with the official Ubuntu 20.04 AMI.
-
repo owner Thanks for the update, I have reproduced the problem on Ubuntu AWS.
The filesystem is mounted using a label:
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
Which points to e.g. /dev/xvda1:
$ ls -l /dev/disk/by-label/
lrwxrwxrwx 1 root root 11 Nov 1 14:43 cloudimg-rootfs -> ../../xvda1
But the getmntent() API, which Monit uses and which corresponds to the content of /etc/mtab and /proc/mounts (both links to /proc/self/mounts), shows /dev/root instead:
$ cat /etc/mtab
/dev/root / ext4 rw,relatime,discard 0 0
…
ditto /proc/self/mountinfo:
$ cat /proc/self/mountinfo
24 1 202:1 / / rw,relatime shared:1 - ext4 /dev/root rw,discard
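The third field of a mountinfo line is the major:minor device number, which identifies the filesystem regardless of whether procfs calls the device /dev/root or /dev/xvda1. A small sketch extracting it, with the sample line inlined so it runs anywhere:

```shell
# Field 3 of a /proc/self/mountinfo line is the mount's major:minor number.
# The line is inlined from the sample above rather than read from procfs.
line='24 1 202:1 / / rw,relatime shared:1 - ext4 /dev/root rw,discard'
echo "$line" | awk '{ print $3 }'   # -> 202:1
```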
The /dev/root is not a symlink to /dev/xvda1, but an independent device entry for the same filesystem (with the same major/minor numbers):
$ ls -l /dev/root /dev/xvda1
brw------- 1 root root 202, 1 Nov 1 14:38 /dev/root
brw-rw---- 1 root disk 202, 1 Nov 1 14:43 /dev/xvda1
However, statistics are available only via xvda1:
$ ls -l /sys/class/block
…
lrwxrwxrwx 1 root root 0 Nov 1 15:09 xvda1 -> ../../devices/vbd-768/block/xvda/xvda1
=> it’s quite tricky … procfs uses /dev/root, but sysfs uses xvda1 for the same device.
It seems /proc/partitions could be the key to mapping the device to its sysfs name:
$ cat /proc/partitions
major minor  #blocks  name
202 1 8387567 xvda1
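Given the major:minor pair from mountinfo, /proc/partitions can map it back to the kernel device name that sysfs uses. A sketch of that lookup, with the sample /proc/partitions content inlined so it is self-contained:

```shell
# Map a major:minor pair to a device name using /proc/partitions content.
# The table is inlined from the sample above; on a live system you would
# read /proc/partitions directly.
partitions='major minor  #blocks  name
202 1 8387567 xvda1'
echo "$partitions" | awk -v maj=202 -v min=1 '$1 == maj && $2 == min { print $4 }'
```

This prints xvda1, i.e. the name under which /sys/class/block/xvda1/stat is actually available.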
Will fix.
-
repo owner Issue #725 was marked as a duplicate of this issue.
-
repo owner - changed status to resolved
1.) Fixed: Issue #937: if the filesystem check uses the mountpoint instead of the device, and multiple devices are defined for the same filesystem with a mismatch between /etc/mtab and the device name in the disk statistics path, Monit reported an error and disk activity was not reported.
2.) Refactored the VxFS disk activity support: it uses the common interface now, with no need for a special callback (reverts most of the code added for issue #877 - it had the same root cause for VxFS as issue #937).
→ <<cset 7469b8c86d66>>
-
Hi!
I am still experiencing this error on Ubuntu 20.04 on AWS EC2.
[UTC Jan 21 07:52:31] error : filesystem statistic error: cannot read /sys/class/block/root/stat -- No such file or directory
System info:
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal
# monit --version
This is Monit version 5.26.0
Built with ssl, with ipv6, with compression, with pam and with large files
Copyright (C) 2001-2019 Tildeslash Ltd. All Rights Reserved.
-
repo owner @Dev Dua please upgrade to Monit >= 5.27.2
-
How come this fix is not backported to Ubuntu 20.04 LTS?
-
repo owner @Klemenn the Ubuntu package is maintained by the Ubuntu package maintainers, not by us