For our purposes, this is a useful (if not "good") simulation, as it ensures that the bottleneck is locking (i.e., not CPU usage, data I/O, etc.), therefore producing cleaner results. One big problem is that transactions in real systems do not take a constant amount of time (if I were to guess, I'd suspect a Poisson distribution would be more likely). Furthermore....
Explanation: Each "subsequent" transaction has probability (1 - P_c) of not contending with each of the previous transactions. The throughput × duration product at a given point is therefore the sum over n of the probability that the nth transaction is running. Throughput is just that value divided by duration.
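The model above can be sketched in a few lines. This is my reading of the explanation, not the original simulation code; the function name and parameters are mine. The nth transaction (0-indexed) runs only if it avoids contending with each of the n transactions before it, which under independence happens with probability (1 - P_c)^n.

```python
def expected_throughput(num_txns: int, p_contend: float, duration: float) -> float:
    """Expected transactions completed per unit time under the model.

    Hypothetical sketch: the throughput*duration product is the sum,
    over the first num_txns transactions, of the probability that the
    nth transaction is running, i.e. (1 - p_contend) ** n.
    """
    expected_running = sum((1 - p_contend) ** n for n in range(num_txns))
    return expected_running / duration
```

With p_contend = 0 this degenerates to num_txns / duration, matching the intuition that without contention every transaction runs.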
Locking B performed better, as it allowed transactions whose requests overlapped only in their read sets to run concurrently. This is most apparent in low-contention situations with long transaction durations.
Purely in terms of lock management, Locking B will always perform at least as well (there is no situation in which a transaction would gain ownership of a lock under A but not under B). However, on a heavily CPU-bottlenecked system, it's possible (though I suspect unlikely) that the simpler locking scheme would be beneficial.
B has the advantage of shared locks over standard two-phase locking. However, since it waits for all locks to be acquired before starting, and releases all locks at once regardless of the nature of the transaction, it loses the ability to start a transaction before acquiring the locks needed only for a later stage, and it blocks other transactions from taking locks that it has already finished with.
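The shared-lock advantage can be illustrated with a minimal lock table. This is a hypothetical sketch (class and method names are mine, not from the simulation): shared (read) locks coexist with each other, while an exclusive (write) lock conflicts with everything, which is exactly what lets read-only overlapping transactions proceed under Locking B.

```python
from collections import defaultdict

class LockTable:
    """Minimal shared/exclusive lock table (illustrative only)."""

    def __init__(self):
        self.readers = defaultdict(int)  # resource -> number of shared holders
        self.writer = {}                 # resource -> txn holding it exclusively

    def try_shared(self, txn, resource):
        # A shared lock is blocked only by an exclusive holder.
        if resource in self.writer:
            return False
        self.readers[resource] += 1
        return True

    def try_exclusive(self, txn, resource):
        # An exclusive lock conflicts with any other holder, shared or exclusive.
        if resource in self.writer or self.readers[resource] > 0:
            return False
        self.writer[resource] = txn
        return True
```

Under a scheme like Locking A, the two `try_shared` calls below would conflict; here they both succeed, and only the writer is turned away.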
OCC did better for low contention and long transaction lengths. The former lines up with expectations; the latter doesn't (to the degree that I suspect it indicates an error on our part somewhere), as OCC performs well in situations where contention, and therefore the number of transactions that need to be restarted, is low, and where the cost of restarting, should it be necessary, is also low.
I would expect that in a real system, where due to the pieces mentioned the cost of each transaction, and therefore of a restart, is higher, the low-contention advantage would remain, and the effect of long/resource-intensive transactions would be even greater.
I suspect it would perform worse in simulation than our current system, as more transactions would have to be restarted, and the cost of validation in this simulation is minimal.
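For reference, the validation step whose cost is being discussed can be sketched as a backward-validation check (names are mine, assumed for illustration): a transaction commits only if nothing it read was written by a transaction that committed during its execution.

```python
def validate(read_set, committed_write_sets):
    """OCC backward validation sketch: True means the transaction may commit.

    read_set: set of items this transaction read.
    committed_write_sets: write sets of transactions that committed
    while this one was running; any overlap forces a restart.
    """
    return all(read_set.isdisjoint(ws) for ws in committed_write_sets)
```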
MVCC's great advantage is that it effectively negates the effects of contention on reads (no transaction ever has to wait for another to finish using a resource before it can read it), and it therefore shines in situations with a mixture of reads and writes under high contention (MVCC is the runaway winner for the mixed test).
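The "readers never wait" property comes from keeping multiple versions of each item. A minimal sketch (my own naming, not the simulation's): writers append timestamped versions, and a reader at snapshot time t simply takes the newest version with a commit timestamp at or before t, never blocking on in-progress writers.

```python
class VersionedCell:
    """One MVCC-managed item: a chain of (commit_ts, value) versions."""

    def __init__(self):
        self.versions = []  # appended in commit-timestamp order

    def write(self, commit_ts, value):
        # A committing writer appends a new version; old versions remain
        # visible to readers with earlier snapshots.
        self.versions.append((commit_ts, value))

    def read(self, snapshot_ts):
        # Return the newest value committed at or before snapshot_ts.
        best = None
        for ts, value in self.versions:
            if ts <= snapshot_ts:
                best = value
        return best
```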
You could very well combine MVCC with OCC (and then you wouldn't need to check your read set on validation); however, MVCC + Locking A or Locking B would basically just be MVCC.