hg-website / Workflows


Possible Workflows to include on the site or the Mercurial wiki



  • Huge repo with shallow clone extension
  • Clone only feature branches

General ideas:

  • Workflows sorted by conditions and requirements (environment, number of devs, amount of history, ...) -> ask on the #mercurial list.

Workflow reports

Pbranch development

The pbranch development workflow is based on the superb patch branch extension (pbranch). Two local repositories sit side by side: a "stable" repo and a cloned "devel" repo. In "devel" I create patches that I then export and import into the "stable" repo. The advantage is that I can do all the commits and experiments I need in "devel" without them becoming part of the permanent record of the project.

Once a patch is in "stable", I pull "stable" into "devel" and merge the patch branch with the changeset, recording which "stable" changeset relates to which patch branch in devel, and closing the patch branch.

The stable repo receives only complete, tested, and documented changesets. Each changeset should reference an entry in the issue-tracking system. The goal is that anybody can grab "stable" and always have a stable release, with version numbers describing only sets of features, not code quality.

- Marijn Vriens <>

Sysadmin workflow

The sysadmin workflow is the simplest one possible, but for me at least it is of real value.

Whenever I have configuration files for a service (Apache, Postfix, OpenLDAP, Squid, etc.) I just run "hg init" and add all the files. From then on I just maintain the files in their own little local repo. I can see what changed over time. But the best thing is: it's a no-brainer. No need for network access (locked-down servers), no thinking about where to put it in an SVN repo, no worrying about system passwords escaping...
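A sketch of the setup, with a stand-in directory for the real config tree (demo-etc here plays the role of e.g. /etc/apache2):

```shell
# put a service's config directory under local version control
mkdir demo-etc && cd demo-etc
echo 'ServerName www.example.com' > httpd.conf
hg init                      # the repo lives right next to the files
hg add                       # track every file in the directory
hg commit -u sysadmin -m 'initial import of service configuration'
hg status                    # silent: nothing modified since the commit
cd ..
```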

It's so simple that sysadmins without SCM experience learn it in 15 minutes and see the benefit of keeping "hg st" silent on production machines.

- Marijn Vriens <>


That use case is a bit similar to what is described on - perhaps something from that page can be used too.

- Mads Kiilerich <>

Single developer

1. Home Projects

One of the simplest workflows possible. A single developer working on small projects for learning or "scratching an itch". Most of my repositories live in my home directory under a projects/ directory. These repos are my "master" repositories.

When I want to make some project public, I push to the mercurial instance on my webhost. If I want to work on something on the road, I clone or pull to my laptop and don't need network access to my main system.

  • What is special about your workflow?

It's easy to get started and simple to push and pull over ssh.

  • Do you use extensions for your workflow? (Are they essential, or do they just make it easier?)

None are needed, but sometimes I use MQ to "fix" a changeset I have not pushed anywhere.

  • How do you work locally?

Working in the "master" repository on my system. Commits are done there.

  • How do you share changes? (between developers and for releases)

Push to the mercurial install on my personal webpage.

  • How many people are in your team?

One. :-)

  • Why did you choose this workflow?

It's the simplest way to work as a single developer.

- John Mulligan <>

Dealing with CVS

We still use CVS at my office, and we all feel the pain. So I try to reduce the pain by doing most of my "trunk" work outside of CVS. I use the convert extension to create a Mercurial repo of the CVS history. This repo is available on a server in our lab network. I clone this converted repo and work directly in it.

I often use MQ here, and create patches that I will apply to CVS to check in. When I am done with a patch, I email others the link to the hg web view for that revision. If they are happy with it, I patch a CVS instance and commit those changes.

  • What is special about your workflow?

I use CVS without having to use CVS very much.

  • Do you use extensions for your workflow? (Are they essential, or do they just make it easier?)

Yes. I use MQ heavily here, but it is not strictly needed. I usually end up having to adjust a patch more than once before applying to CVS.

I have scripted the 'hg export' to cvs also, but I used to do it all manually.

  • How do you work locally?

With MQ, usually creating one or two patches per bug I am working on.

  • How do you share changes? (between developers and for releases)

I email coworkers links to the web view, so they can see what I plan on committing.

  • How many people are in your team?

About six people.

  • Why did you choose this workflow?

Mercurial is fast and comfortable to work with. MQ lets me manage patches to a remote VCS without doing everything manually.

- John Mulligan <>


Why don't you convert changes to one repo, and then clone off feature repos and just "hg export" the changes from there?

repos/
    cvs/
    converted/
    feature1/
    feature2/

Then you can just do "hg export" in feature1/ and get a patch you can apply directly to cvs.

(the pbranch extension should allow using this workflow with a single repo for several patches instead of one per feature)

- Arne Babenhauserheide <>

Patch Queue Workflow

- Giorgos Keramidas <>

I think that not only would pbranch support the workflow that you are trying to do, but it would probably make it easier in cases where you are working on a patchset with someone else (sharing queues with the intent of collaborating kinda sucks).

- Bill Barry <>

Shared repository with named branches and bugzilla integration

Briefly: we have mercurial-server installed on a central server which we use to share our changes. For many projects, we create a named branch for each bug, and merge these branches together when the time comes to create releases. This makes code review easier. We also have Bugzilla integration which adds comments to bugs based on the branch name.

- Paul Crowley <>

tiered web development

For those of you that do tiered web development: dev -> qa -> staging -> production, do you all prefer to keep a separate repository or a separate named branch for each environment?

- bradford <>

In our environment they all come from the main repo.

We put our releases into a separate repository and we tag the releases that get to production, but that is all:

Our tiers are: dev -> qa -> demo -> test -> production.

  • At any time, the main repo tip may be built and placed into the {project}-published repo; at this time, qa gets updated from the latest revision in {project}-published.
  • Overnight, or when no demos are scheduled and something major has happened, demo gets updated from the latest revision in {project}-published.
  • When it comes time to do releases, we tag the main repo and build into {project}-published.
  • If all goes well in the qa environment, we replace the current contents of the {project}-stable repo with the tip contents of {project}-published; at this time, test gets updated from the latest revision in {project}-stable.
  • During the next scheduled release, production gets updated to the latest revision in {project}-stable.

Most of this happens automatically or via a single click in CruiseControl.NET

So, I suppose we keep separate repositories for environments (dev==main, qa/demo==published, test/prod==stable), but dev is the only one with source code.

- Bill Barry <>

Centralized with attic

Which workflow(s) do you use?

For work we mostly use a centralized style workflow. There is one big repository with 14 projects in it and we all can push to it. We tag releases for each product in this repo and use named branches for major/minor versions (ex: 1.0.0 is tagged on the default branch, 1.0.1 is released off of a 1.0 branch with a base of the 1.0.0 tag whereas 1.1 would come off of the default branch). Essentially, every machine only has one copy of this repository.

Additionally, each developer has their .hg/attic folder wired up to a central attic repository, and we each have .hg/patches wired up to an individual MQ repo on the central server.

We do have several rules:

  1. The main repo must never have more than one head (across all branches). This head must be from our default branch (which happens to be named trunk because I named it at revision 0; looking back, it might not have been my greatest idea, but it really hasn't caused any problems other than minor confusion when reading docs and wondering what the 'default' branch is).
  2. Developers are strongly encouraged never to have uncommitted work in their repositories when they are not at their machines working on something. All work should be committed/pushed to the main repo, or shelved/pushed in the attic or in the developer's central queue.
  3. If somebody commits something that causes the main repo to start burning, that dev is expected to fix it immediately. We have CruiseControl.NET watching the repository; it compiles/tests each commit and sends out an email when it finishes.

Some important things about a workflow are

  • What is special about your workflow?

Nothing really... It is pretty easy to understand (feels quite similar to our svn usage before we switched, except we have the added benefits of mq and attic).

  • Do you use extensions for your workflow? (Are they essential, or do they just make it easier?)

We heavily use Attic and the log viewer in tortoisehg (we all use windows; the rest of tortoise we don't really find useful though). I also have purge and rebase enabled, simply because I find them useful (none of the others use these, we don't particularly care about seeing a branchy log). We all occasionally use MQ (much less now that I wrote Attic as we rarely built queues more than one patch deep).

In my attic, I also have boundmode enabled (another extension I wrote) because I tend to commit there a lot, and it makes it easy not to have to remember to commit and push every time (boundmode makes "hg ci" automatically attempt an "hg push" afterwards). It is completely unnecessary, but saves me typing. I'd use it on my main repo too, but I often commit 3 or 4 times before I decide to push there (generally after I push, I go read my email and comment on some bugs in Bugzilla).

  • How do you work locally?

Each developer has a single copy of the main repo. As a developer: Generally at the start of the day you would run hg pull -u (I have a scheduled task that does this for me at 5am before I wake up). Throughout the day you will commit and push (and merge/rebase where necessary). Occasionally you might pull and update or shelve and unshelve (or qpush/qpop). When committing, you should put a bug number into your commit message, the bugzilla hook will link to the changeset in the bug and ccnet will know what bug to link to with the build. After committing you would pay attention to the CI email (generally comes in about 5 mins). At the end of the day you would make either one last commit and wait for the CI email, or you would shelve your work and push to the attic repo (merging where necessary) or if you are working on something big you might push your queue.

  • How do you share changes? (between developers and for releases)

Finished work (or other changes that can be made public in the main repo without causing a burning tree) generally goes directly in the main repo where anybody can pull it at any time (last I checked we were averaging something like 20 commits a day). Unfinished work usually goes into the central attic; if I wanted to share something I would shelve a patch and commit/push the attic. My coworkers would pull the new changesets in the attic and can unshelve my patch.

  • How many people are in your team?

Right now we are down to 3 constant developers. There is one person working on skins who also commits (about twice a week or so) and two other developers who will occasionally commit (we have a second team who are working in a language called Powerbuilder against a Visual Source Safe repository...). There used to be 5 of us full time on these 14 projects.

  • Why did you choose this workflow?

It works... We didn't exactly choose this workflow; it sort of evolved out of our needs. We started with the idea that a centralized repository would work much like our Subversion setup had, just replacing svn ci with hg ci && hg push, and svn up with hg pull && hg up. We already had Subversion wired up to Bugzilla, and using Mercurial the same way just seemed natural.

Originally we worked just as we had with Subversion, always committing everything at the end of every day, no matter what. (Some hyper-intelligent person thought it would be a great idea to promise in our contracts with our clients that we would never leave unfinished work on our machines and that their code escrow service should always have all of our code, finished or not. Using Subversion we had three choices: 1. breach contract by not committing at the end of the day, 2. let our main tree simply not be stable, or 3. deal with patches/user branches in svn. Generally we went with option 2, and nobody was really happy because our releases suffered for it.)

We knew there had to be a better way, so we started using MQ, and almost immediately our releases and build server were much improved. Sometime around then, I joined this list and started looking around for a tool that better modeled the way we wanted to work (sharing queues kinda sucks when you only care about one patch in them at any time). After a while I realized that there really wasn't a tool that quite matched the way we wanted to work, so I wrote Attic.

If you also have an idea for a nice name for your workflow(s), please post it!

I would call this workflow Centralized. When it comes down to it, that is really all it is:

We have: 1. 2. 3. 4. 5.

Each developer has:

  • c:\work\Trunk (with default path set to 1)
  • c:\work\Trunk\.hg\attic (default path set to 2)
  • c:\work\Trunk\.hg\patches (default path set to a local \work\mq repo; when looking at someone else's queue you delete this dir and pull another)
  • c:\work\Bill-mq (default path set to 3)
  • c:\work\Mike-mq (default path set to 4)
  • c:\work\Andrew-mq (default path set to 5)

And if there's anything you'd like to add beside the questions, please do so!

Everybody who works here has had experience with Subversion; this workflow was really easy to get used to. For the small size of our team, I feel that this workflow is pretty good but it certainly wouldn't scale up. I doubt it would be useful for more than about 10-15 developers.

A script which simulates your workflow(s) would be a nice bonus :-) (but that's not necessary for a useful description)

How about some instructions to set it up?

Central server:

  1. Set up hgwebdir on a folder.
  2. In this folder, create the following repositories: Trunk, Attic, and one MQ repo per developer.

Each developer machine:

  0. Install Mercurial and Attic globally.
  1. Create a work folder.
  2. Clone Trunk to work\Trunk.
  3. Clone Attic to work\Trunk\.hg\attic.
  4. Clone everybody's queues to work\name-mq.
  5. Set up path aliases for everybody's queues locally and for push locations (saves typing later when you want to change queues; though most of the time the queues will go unused because the attic is just so much better).

- Bill Barry <>

Simple shared repository

  • This is most useful when your developers already know and use SVN

I envision us both working the main trunk for many small day-to-day changes, and our own isolated repo for larger additions that we will each be working on.

I don't know about a HOWTO, but I can give you a short description about basic usage and the workflow I'd use:

Basic usage:

  • Just commit as you'd have done in SVN via "hg commit".
  • To get changes from others, do "hg pull -u". The "-u" means 'update my files'.
  • If you already committed and then pull changes from someone else, you merge the changes with yours via "hg merge". Merging is quite painless in Mercurial, so you can easily do it often.
  • Once you want to share your changes, do "hg push". Should that complain about "adding heads", pull and merge, then do the push again. If you really want to create new remote heads, you can use "hg push -f".


First off: create a main repository you both can push changes to. If you have ssh access to a shared machine, that's as simple as creating a repository on that machine via "hg init project".

Now both of you clone from that repository via hg clone ssh://USER@ADDRESS/path/to/project project

(ADDRESS can be either a host or an IP).

That's your repository for the small day to day changes.

If you want to do bigger changes, you create a feature clone via hg clone project feature1

In that clone you simply work, pull, and commit as usual, but you only push after you have finished the feature. Once the feature is finished, you push the changes from the feature clone via "hg push" in feature1 (which gets them into your main working clone) and then push them onward into the shared repository.

That's it - or rather that's what I'd do. It might be right for you, too, and if it isn't, don't be shy of experimenting. As long as you have a backup clone lying around (for example cloned to a USB stick via "hg clone project path/to/stick/project"), you can't do too much damage :)

- Arne Babenhauserheide <>

Offsite working on dynamic websites

I have a couple of websites--CGI Python sites. I'd like to get them into version control, and Mercurial sounds pretty good. This would be a single-developer situation--just me piddling with the sites on occasion. Currently I usually just edit the live site files in-place (this is not a high-traffic critical site), but I'd rather get things under version control.

So here is my basic question: what's the best way to set up my "workflow" for this type of situation (I hope I'm using that term correctly)? My assumption is to copy the existing website contents off to a repository directory and set up the repository there. Then I'd have to link that directory to a web-enabled directory so that I can make updates and test them, I assume. Then, when I'm ready to publish the changes to the live site...would I just create another repository in the live site and push (or pull) into the live directory? Or is there some other "publish paradigm"?

- "Chris Stromberger" <>


You should be able to do "hg init; hg add; hg commit" exactly where the web site is now, then use "hg clone" to make copies elsewhere.

I keep clones of my web sites on my laptop, then use "hg push" through ssh to upload changes to the server. They go directly into repositories where the site is located, but I still have to do "hg update" on the server to take my change live. I don't want the update to be automatic, because I would lose some control over configuration management of the site.

For some repositories I push all changes to a test server first, then to the operational server once I am happy with them.

- Michael Smith <>


I used these instructions that were intended for Git, but they worked well with Mercurial:

Converting the command names from Git to Hg was not hard; most are the same or very similar. Overall, I thought that article was a great how-to that helped me figure out how to fit DVCS into my workflow, and the result has been a huge time and effort saver. It was especially useful for me in figuring out how to manage Drupal, which has the "Drupal core" software as well as contributed modules and finally, my customizations.

- "Jeremy Reichman" <>

Dependency tracking

Daed Lee <> wrote:

I'd like my application to have the following directory structure:

application/
application/dependencies/library_a/
application/dependencies/library_b/
application/dependencies/library_b/dependencies/library_c/


Essentially, I want to treat the directory tree as a single repository for operations like clone/pull/push/etc., but I want to also be able to upgrade each library individually while retaining my local changes.

Here at work I manage a very similar situation using branches/repositories: each library, application, or common code tree is in a repository, and changes to the common code are propagated as if the common code were a simple branch of the same project.

It's probably easier to show than to describe:


  # create the library (common code) repository
  hg init r1
  mkdir r1/library1
  echo 'a change' > r1/library1/1.txt
  hg -R r1 ci -Am 'lib change 1'
  echo 'another change' >> r1/library1/1.txt
  hg -R r1 ci -m 'lib change 2'

  # create the app repository
  hg init r2
  mkdir r2/application1
  echo 'a change' > r2/application1/1.txt
  hg -R r2 ci -Am 'app change 1'
  echo 'another change' >> r2/application1/1.txt
  hg -R r2 ci -m 'app change 2'

  # now we'll add library1 as a dependency, so pull it
  # from its repository (this first pull needs --force,
  # because the repositories are (yet) unrelated)
  hg -R r2 pull --force r1

  # the right place for the lib on this repo is the
  # directory 'dependencies', so move it there...
  hg -R r2 up -C tip
  hg -R r2 mv r2/library1 r2/dependencies/library1
  hg -R r2 ci -m 'move library1 to the right place'

  # ... and merge it into the application:
  # check where the other head is, go there, and merge
  hg -R r2 heads
  hg -R r2 up -C 1
  hg -R r2 merge
  hg -R r2 ci -m 'add library1 to the project'

  # from now on, changes to library1...
  echo 'a change' > r1/library1/2.txt
  hg -R r1 ci -Am 'lib change 3'
  echo 'yet another change' >> r1/library1/1.txt
  hg -R r1 ci -m 'lib change 4'

  # ...can be simply merged into its copy
  # on the application repo
  hg -R r2 pull r1
  hg -R r2 merge
  hg -R r2 ci -m 'merge from library1'


So, an application repository clone gets all needed libraries automatically, since they're just additional files inside the same repository. And changes to a library can be simply merged to the applications that use it.

The library code can also be developed and tested directly in the app repository. To propagate it to the main library repository afterwards, you can export it, import it into the lib repository¹, and merge it back into the application as usual (the change will appear twice, but the merge will work fine)².

You can also migrate code easily between application and library, or even diverge the library code a bit. This setup works very well for my projects, but it may be a bit cumbersome if you don't need that much isolation between a project and its dependencies (it may create lots of merge revisions). Also, repository size may be an issue, if the dependencies are too big or numerous.

Hope that helps, Wagner

1: in the example, the directory rename makes the import a bit tricky, but take a look at the -p option to hg import

2: there are some extensions that may help with this workflow (notably mq), but they're not essential; I'd suggest experimenting without them at first.

- Wagner Bruna <>

Tracking dependencies by snapshots

I'd like my application to have the following directory structure:

application/
application/dependencies/library_a/
application/dependencies/library_b/
application/dependencies/library_b/dependencies/library_c/

However, I'm not sure how I should set up my repository(s) to fit this structure.

You can always import a clean version of vendor code in its project specific subdirectory. One helpful bit for tracking thirdparty code in this way is that Mercurial can go back and commit over the clean vendor code later.

So if you have an initial repository that has a history of its own, with changes 1..3:

[1] --- [2] --- [3]

You can import a snapshot of library_a as change 4 (marked with one star '*' in the history graphs below to show the original vendor/thirdparty import; two stars '**' mark the second import of clean code, and so on):

[1] --- [2] --- [3] --- [4]

Then you can commit your own project specific changes on top of the vendor code, i.e. to adapt the build-glue of the library, or to attach it to the rest of the project build. This will create another local changeset, shown as [5] here:

[1] --- [2] --- [3] --- [4] --- [5]

After a few more local changes, your history may include other, unrelated stuff, i.e.:

[1] --- [2] --- [3] --- [4] --- [5] --- [6] --- [7]

With Mercurial you can go back to changeset 4 and import a new, clean snapshot of the library_a code:

  % hg update --clean 4
  % cd application/dependencies
  % rm -fr library_a
  % tar xzvf /var/tmp/library-a-1.2-release.tar.gz
  % mv library-a-1.2-release library_a
  % hg addremove library_a
  % hg commit -m 'Import library_a, release 1.2'

Now your history will look like this:

  [1] --- [2] --- [3] --- [4]* --- [5] --- [6] --- [7]
                             \
                              `-- [8]**

The second 'vendor import' of library_a has been committed as change 8, and you can pull the latest vendor code into your mainline by merging:

  % hg update --clean 7
  % hg merge 8

When the merge is finished, you can commit the new, merged code as another change:

  [1] --- [2] --- [3] --- [4]* --- [5] --- [6] --- [7] --- [9]
                             \                            /
                              `-- [8]** -----------------'

A few local changesets later, you can import a third snapshot of the vendor code on top of change 8, and merge again:

  [1] --- [2] --- [3] --- [4]* --- [5] --- [6] --- [7] --- [9] --- [10] --- [11] --- [13]
                             \                            /                          /
                              +-- [8]** -----------------+---- [12]*** -------------'

... and repeat the import & merge dance every time the library_a vendor releases a new snapshot that you consider worthy of import.

Note how the changesets 4, 8, and 12 form a second 'parallel line of development' that tracks the various imports of library_a. This is essentially a 'vendor branch' for the library_a code.

[ This example shows how you can import three snapshots of vendor code for a single library, but I hope it's easy to imagine how this can extend to arbitrarily large numbers of vendor code drops. ]

- Giorgos Keramidas <>

concept: automatic trusted group of committers (untested)

Goal: A workflow where the repository gets updated only from repositories whose heads got signed by at least a certain percentage of trusted committers.

Requirements: Mercurial, two hooks for checking and three special files in the repo.

The hooks do all the work - apart from them, the repo is just a normal Mercurial repository. After cloning it, you only need to set up the hooks to activate the workflow.

Hooks: prechangegroup and pretxnchangegroup

Files: .hgtrustedkeys , .hgbackuprepos , .hgtrustminimum

Concept:

  • prechangegroup: copy the local versions of the files for access in the pretxnchangegroup hook (this might be unnecessary if the pretxnchangegroup hook can use the rollback info instead).

  • pretxnchangegroup: per head, check whether the tipmost non-signature changeset has been GnuPG-signed by enough trusted keys. If not all heads have enough signatures, roll back, discard the current default repo, and replace it with the backup repo that has the most of the changesets we lack. Continue discarding bad repos until you find one with enough signatures.

.hgtrustedkeys contains a list of public GnuPG keys.

.hgbackuprepos contains a list of (pull) links to backup repositories.

.hgtrustminimum contains the percentage of keys from which a signature is needed for a head to be accepted.

With this workflow you can even do automatic update from the repository. It should be ideal for release repositories of distributed projects.