If a process returns more than Run.limits.sendExpectBuffer bytes, this assertion is triggered every time.
I believe there is an incorrect bounds check in the assert statement. At the time of failure, n is 256 and Run.limits.sendExpectBuffer is also 256.
buf is allocated with size Run.limits.sendExpectBuffer + 1.
Socket_read reads up to (Run.limits.sendExpectBuffer - 1) bytes, and 1 is later added to n to account for the first byte read earlier, so n can legitimately reach Run.limits.sendExpectBuffer.
I'm not sure whether the fix is to relax the assertion to <=, since buf already has the extra byte allocated, or to shrink the Socket_read length to sendExpectBuffer - 2 so that n + 1 stays strictly less than sendExpectBuffer. That depends on how you want _escapeZeroInExpectBuffer to behave.
Monit configuration to reproduce:

check process test_process matching "test_process"
    if failed port 1234
        send "test\r\n"
        expect ".*test.*"
        with timeout 60 seconds
    then restart
Server script listening on the tested port:

#!/bin/bash
while true; do
    yes | nc -l -p 1234
done
Output:

'test_process' process is running with pid 19478
'test_process' zombie check succeeded
GENERIC: successfully sent: 'test
'
monit: src/protocols/generic.c:42: _escapeZeroInExpectBuffer: Assertion `n < Run.limits.sendExpectBuffer' failed.
Aborted (core dumped)