Looks good, thank you. Probably not related to your change, but I noticed that the Ceph RBD page fails to load if a pool has reached its quota and the health status has changed to HEALTH_WARN - the oA backend returns:
500 - INTERNAL SERVER ERROR
Command 'rbd get image parent' terminated because of timeout (30 sec).
This error disappears if I increase or remove the quota and the health goes back to HEALTH_OK.
Thanks. I did some tests; it turns out that if an RBD image is opened in RW mode (the default), RBD image methods block when the image’s pool has reached its quota or is full.
I need some time to figure out the correct read-write modes for all RBD Image instances in the code. There should probably also be some checks or warnings shown to users when they try to manipulate resources in a full pool.
I’m not sure whether RGW, iSCSI, and NFS resources are also affected; that needs further testing.
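For illustration, the librbd Python bindings expose a read_only flag on rbd.Image, so read-only handles could be used for pure metadata queries (like fetching an image’s parent) to avoid the blocking RW open. A minimal sketch, not the actual oA code; the helper name is hypothetical:

```python
def open_image(ioctx, name, read_only=True):
    """Return an rbd.Image handle for the given I/O context and image name.

    Defaults to read-only, on the assumption (from the tests above) that
    only RW opens block while the image's pool is full. Callers that need
    to modify the image must pass read_only=False explicitly.
    """
    import rbd  # Ceph Python bindings; assumed installed alongside librbd
    return rbd.Image(ioctx, name, read_only=read_only)
```

Whether read-only handles are sufficient for every call site is exactly the audit described above.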
Also, it’s somewhat inconvenient having to enter the “Maximum number of storage” value in plain bytes without being able to specify prefixes for multiples of bytes - would it be possible to accept units like “10 GiB” here? See the RBD creation dialogue for an example.
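Parsing such suffixes on the backend would be cheap; a minimal sketch of a binary-prefix parser (hypothetical helper, not existing oA code):

```python
import re

# Binary (IEC) prefixes, matching the "GiB"-style units in the RBD dialogue.
_UNITS = {"B": 1, "KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

def parse_size(text):
    """Convert strings like '10 GiB', '512 MiB', or a plain byte count
    such as '1024' into an integer number of bytes."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([KMGT]iB|B)?\s*", text)
    if not m:
        raise ValueError("unrecognised size: %r" % text)
    value, unit = m.groups()
    # A bare number is treated as bytes.
    return int(float(value) * _UNITS[unit or "B"])
```

For example, parse_size("10 GiB") yields 10737418240, which the quota field could then accept directly.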