Otherwise, the benchmark shows the server can sustain a respectable load. At 7500 users, it can still deliver a few dozen thousand messages per second. The bottleneck will probably be the web server, which means the code is //fast enough//, especially given real users aren't likely to spam that much. Most of the memory consumption (not shown here) appears to come from the messages sent and the history kept by each user (10 messages).
The second benchmark is the [[http://bitbucket.org/ferd/chut/src/tip/src/benchmark_real.erl|realistic benchmark]].
The realistic benchmark operates in a manner similar to the stress benchmark, but with a 5-second delay between each message sent by any user. The idea is to see whether the number of messages sent/received stays constant on average as we increase the number of users in the system. This should be a decent test of latency and of how many users can be supported in a more realistic scenario.
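As a back-of-the-envelope check (a Python sketch of my own, not part of the benchmark code), when every simulated user sends one message every 5 seconds, the steady-state send rate grows linearly with the user count:

```python
# Hypothetical sanity check: expected steady-state send rate when each
# simulated user sends one message every `delay` seconds.
def expected_send_rate(users, delay=5):
    """Messages per second the whole system should generate."""
    return users / delay

for users in (1000, 7500, 15000):
    # 15000 users with a 5-second delay -> 3000 messages/second overall
    print(users, expected_send_rate(users))
```

If the per-client averages stay near this rate as users are added, latency isn't degrading.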
My guess for why I count more messages received than sent is that every sent message also generates one 'sent' event, which is counted as a received message. Because the 'sent' event shows up faster than the 'receive' event (no lookup needed), the sent and received counts are always slightly out of sync by the time I send the termination signal.
Supporting this, the Sent/Received ratio varies little with how long the test runs:
The ratios are 0.947, 0.946 and 0.965, respectively. The difference between what's sent and received doesn't seem to depend on time, groups or clients, so the termination cutoff remains the most plausible cause of the error. Now let's carry on with the benchmarks...
Still no change in the stats. Let's push it to 15 000 users. This requires raising the default process limit: with 3 processes per user plus roughly 750 for supervisors, groups and so on, we need 15000*3 + 750 = 45 750 processes, and more. Given the default limit is 32 768, we've got to go higher. Restart the VM with 'erl +P 75000' and then it can be run:
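The process-count arithmetic above can be sanity-checked with a tiny script (a sketch of mine; the helper name and the 750-process overhead figure are taken from the estimate in the text):

```python
# Rough process budget for the 15 000-user run: each user accounts for
# 3 processes, plus roughly 750 supervisor/group/etc. processes.
def process_budget(users, per_user=3, overhead=750):
    return users * per_user + overhead

DEFAULT_LIMIT = 32768  # Erlang VM default max processes (the +P flag raises it)

needed = process_budget(15000)
print(needed)                   # 45750
print(needed > DEFAULT_LIMIT)   # True: hence restarting with 'erl +P 75000'
```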
The sent/received ratios remain similar, and so does the average per client. The core of the chat server can thus theoretically handle over 15 000 users sending each other messages every 5 seconds without much degradation in response time per user. Note that at that point, the shutdown of processes became a bit hard on my laptop and user timeouts started appearing (after the results were done and clients disconnected). Real-world trials would be needed to further demonstrate the reliability of Chut's core, but so far I'm pretty satisfied with the results.