Results of the Test
Assume for a moment that you want to create the newest social web application, and that you want to use an established distributed version tracking tool as the data backend, so all data can easily be migrated and merged between separate parts of the system, and even be edited offline and merged into the mainline at a later time.
*([shut up and take me to the results](#results))*
If you want free tools (and since I believe in free software as the only ethical choice for any application, I won’t assume anything else), there are two main contenders: [Git] and [Mercurial].
[Git]: http://git-scm.com/ "Git the fast version control system"
[Mercurial]: http://mercurial.selenic.com/ "Mercurial is a free, distributed source control management tool. It efficiently handles projects of any size and offers an easy and intuitive interface."
From [knittl2010] we know that git is the clear winner in terms of commit speed, if we omit the time needed for adding the data and the time for the frequent garbage collection runs that git needs to keep the repository size sane. If we take these into account, the question becomes harder to answer.
[knittl2010]: http://thehappy.de/~neo/dvcs.pdf "Analysis and Comparison of Distributed Version Control Systems"
In a typical human workflow, for which [most current performance tests](http://draketo.de/deutsch/freie-software/licht/dvcs-vergleiche-mercurial-git-bazaar-links) are designed, the raw speed of a command doesn’t really matter as long as it doesn’t take too long, because the time the command takes to complete is far below the time needed to type it, even if you just type “C-r com ENTER”. The [highest typing speed ever measured] is 216 [words per minute], which equals 1080 characters per minute, or 56 ms per keystroke.
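The keystroke figure can be checked with a quick calculation, using the standard words-per-minute convention of five characters per word:

```python
# Convert the record typing speed into time per keystroke.
# The words-per-minute convention counts 5 characters per word.
WPM_RECORD = 216
CHARS_PER_WORD = 5

chars_per_minute = WPM_RECORD * CHARS_PER_WORD   # 1080 characters
ms_per_keystroke = 60000.0 / chars_per_minute    # ~55.6 ms

print(chars_per_minute)        # 1080
print(round(ms_per_keystroke)) # 56
```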
[highest typing speed ever measured]: http://www.owled.com/typing.html
[words per minute]: http://en.wikipedia.org/wiki/Words_per_minute
On a server, though, the raw speed of the commands defines how many requests you can answer per second, or more generally, how many servers you need for supporting a given userbase.
As you will start out with limited resources (if you aren’t extremely lucky, you won’t be rich from the start), let’s make two assumptions:
* You have limited disk space.
* You have a limited number of servers.
Also, you want to provide your users with *as much storage space per server as possible*, and you want to guarantee a certain *maximum response time*, so no user sits in front of an unresponsive website (even 1 second of load time is frustrating, and you need at least 300 ms just to get the data from your server to your users).
The following additional assumptions make it possible to design the simple test:
* There is no quiet time on the server with no activity in which you could perform maintenance work without impact on the response time of your system.¹
* The time for an action is the time needed to find the state, save it, and keep the repo in a working state despite limited space. For git that means that add is part of the commit and that regular gc runs are necessary.
* Your backend system has to accommodate arbitrary file changes (modification, deletion or addition). The state consists of arbitrary files with arbitrary content.
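The second assumption can be made concrete with a small timing wrapper: an “action” is the total wall-clock time of every step the backend needs, not only the commit itself. This is a sketch with stub callables in place of the real git/hg invocations; the function and variable names are mine, not from the test scripts:

```python
import time

def timed_action(steps):
    """Measure one backend action as the total time of all steps
    needed to record the new state and keep the repository healthy.
    For git, `steps` would be [run_add, run_commit, maybe_run_gc];
    for Mercurial, a single commit-with-addremove call."""
    start = time.time()
    for step in steps:
        step()
    return time.time() - start

# Stub steps instead of real version-control calls:
duration = timed_action([lambda: None, lambda: None, lambda: None])
print(duration >= 0.0)  # True
```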
¹: To verify that assumption, you can check the public hourly access statistics of the [1w6] project (with almost entirely German content; timezone GMT+1):
![hourly access statistics of the 1w6 project for december 2010](hourly_usage_201012.png)
[1w6]: http://1w6.org "1w6: Ein Würfel System: Einfach saubere, freie Rollenspiel-Regeln"
As a final assumption, say that you’ll implement your system in Python, calling git via subprocesses and calling Mercurial directly via its API to avoid the start time of the Python interpreter (about 30 ms). That is the best way to design a system which uses Mercurial, and it does not have a negative effect on git (see “Limitations” for details).
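The interpreter-startup cost can be measured directly. The sketch below times spawning a fresh Python process that does nothing, compared to a plain in-process call; the ~30 ms figure from the text is hardware dependent, so your numbers will differ:

```python
import subprocess
import sys
import timeit

def spawn_interpreter():
    """Spawn a fresh Python interpreter that does nothing and exits:
    the per-call overhead a subprocess-based backend pays."""
    subprocess.call([sys.executable, "-c", "pass"])

def in_process():
    """A direct in-process call, as when using a library API."""
    pass

RUNS = 5
spawn_time = timeit.timeit(spawn_interpreter, number=RUNS) / RUNS
call_time = timeit.timeit(in_process, number=RUNS) / RUNS

print("interpreter startup: %.1f ms" % (spawn_time * 1000))
print("in-process call: %.6f ms" % (call_time * 1000))
```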
For testing, I read the changed files for each commit and the mean size of the change from different repositories (see the [Readme](Readme.html) for details). They serve as the usage pattern. Then I took about 8 MiB of arbitrary lines of code from the repositories as example data.
For each commit I then add or change the files which were really changed in that commit, but with random lines from the example data. Up to half the lines are used to replace existing lines; the other half is appended.
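The change-generation step can be sketched as follows (a simplified reconstruction; the function and names are mine, not taken from the actual test scripts):

```python
import random

def apply_change(existing_lines, sample_pool, change_size, rng=random):
    """Simulate one file change from the usage pattern:
    pick `change_size` random lines from the sample pool,
    use up to half of them to replace existing lines,
    and append the rest at the end of the file."""
    new_lines = [rng.choice(sample_pool) for _ in range(change_size)]
    n_replace = min(len(new_lines) // 2, len(existing_lines))
    lines = list(existing_lines)
    for fresh in new_lines[:n_replace]:
        lines[rng.randrange(len(lines))] = fresh
    lines.extend(new_lines[n_replace:])
    return lines

# Example: change a 4-line file with 6 random lines from the pool.
pool = ["alpha", "beta", "gamma", "delta"]
result = apply_change(["a", "b", "c", "d"], pool, 6)
print(len(result))  # 4 original lines + 3 appended = 7
```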
All these tests were run on a quad-core amd64 (605e, 2.4 GHz) Gentoo GNU/Linux system with git version 18.104.22.168, Mercurial version 1.7.3 and Python 2.7.1+ (release27-maint:87872, Jan 11 2011, 19:13:26) built with GCC 4.4.4.
The disk on which the repositories were created is a SAMSUNG SpinPoint F1 DT series: SAMSUNG HD103UJ; Serial Number: S13PJDWS513146.
On small tests (only the first 155 commits), git performed best on the change data of the Mercurial repository. Since the full test took quite long, I stopped it after processing that change data, which partly favors git.
The other main limitation of this test is that the history is linear, while real history will be nonlinear with many merges.
Also, git is called from Python via subprocess.call(), which imposes a cost of two to three times 3 ms (0.006 s for normal commits, 0.009 s for commits with garbage collection and for automatic garbage collection). The minimal time for git in this test is about 50 ms; it could be reduced to about 44 ms by calling git in other ways.
12980 commits, hg usage pattern.
<!--1d6 uses only the first 255 commits because git gc became unbearably slow.-->
All data parsed together, except for Mercurial via dispatch() (because it offers no new information) and the aggressive garbage collection in git (because that is much too slow):
![All data except git gc --aggressive](hg-vs-git-all-data-no-aggressive.png)
The legend shows the minimum size, which the repository has directly after a manual garbage collection, and a size close to the maximum size. Since the data is not perfectly identical between runs, the minimum size can deviate. I did not calculate the standard deviation (for time reasons), so it can only be estimated; if the sizes differ by 3 MiB or less, treat them as equal.
The aggressive garbage collection in git (git gc --aggressive):
![Mercurial vs. aggressive garbage collection in git](hg-com-vs-git-gc-aggressive.png)
Note: The garbage collection steps created a mean load of 100% for 3 of my 4 cores.
The total time of the commits as well as the repository sizes:
![Total time for commits](total-time-spent-committing.png)
![Repository sizes with and without manual repack at the end](repository-size.png)
The actions with less than 0.4 seconds runtime (only the range from 0 to 0.4 seconds is shown, though the garbage collection in git takes significantly longer; thus the garbage collection is not visible here, and only the fast operations are shown):
![Mercurial vs. Git when ignoring garbage collection (just the region below 0.4s)](hg-vs-git-ignore-gc.png)
As contrast, Mercurial vs. git with no garbage collection on the full range:
![Mercurial vs. Git with no garbage collection (full range)](hg-vs-git-no-gc.png)
Mercurial vs. git with automatic garbage collection (git gc --auto called after each commit, which makes the actions of Mercurial and git feature-equivalent):
![Mercurial vs. Git with gc --auto](hg-com-vs-git-auto.png)
Mercurial and git with garbage collection activated every 10, 100 or 1000 commits:
![Mercurial vs. Git with gc every 10, 100, 1000 commits](hg-com-vs-git-gc-10-100-1000.png)
*From here on, the Mercurial code via commit() uses a stronger locking mechanism, which makes it faster and gets the commit time closer to a constant time.*
And only the first 300 commits of Mercurial and git:
![Mercurial vs Git, first 300 commits](hg-vs-git-first-300.png)
To get the bigger picture, here is the cumulative time for all commits for Mercurial, git with gc every 10 steps, and git with gc every 100 steps, including fit functions.
![Mercurial vs. Git, cumulative + fit](hg-vs-git-g100-cumulative-fit.png)
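The shape of those fits follows from a simple model: if every commit has a roughly constant base cost and the gc cost grows linearly with the number of commits, the cumulative time is quadratic in the number of commits. A toy model (all constants are illustrative, not measured values):

```python
def cumulative_time(n, base=0.05, gc_slope=0.001, gc_interval=100):
    """Toy model: each commit costs `base` seconds; every
    `gc_interval`-th commit additionally runs a gc whose cost grows
    linearly with the number of commits so far (gc_slope * i)."""
    total = 0.0
    for i in range(1, n + 1):
        total += base
        if i % gc_interval == 0:
            total += gc_slope * i
    return total

# With a linearly growing gc cost, cumulative time grows
# faster than linearly:
t1 = cumulative_time(1000)
t2 = cumulative_time(2000)
print(t2 > 2 * t1)  # True: superlinear growth
```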
Finally, a comparison of Mercurial called via mercurial.commands.commit(ui, repo, message=str(msg), addremove=True, quiet=True, debug=False) and via mercurial.dispatch.dispatch(["commit", "-q", "-A", "-m", message]):
![Mercurial commit() vs. Mercurial dispatch(['commit', …])](hg-com-vs-hg-dis.png)
Images marked with ¹ come from a second run, using the data from the first.
If you need the fastest system and don’t mind occasional performance glitches and volatile repository sizes, git is the way to go. Note, though, that its speed drops to half the speed of Mercurial (and lower still for big repositories, since the time for the garbage collection rises linearly) if git is forced to avoid frequently growing and shrinking repositories by running the garbage collection every ten commits. Also note that git gc --auto allowed the repository to grow to more than 5 times the minimum size, and that garbage collection created 100% load on 3 of my 4 cores, which would slow down all other processes on a server, too (otherwise gc would likely have been slower by a factor of 3).
If you need reliable performance and predictable space requirements, Mercurial is the better choice, especially when called directly via its API. Also, for small repositories with up to about 200 commits, it is faster than git even without garbage collection.