Source control plays an essential role in software engineering. I’ve been using it ever since my first job, and it transformed how I code. But like every tool, it can be your best friend or your worst enemy. Most painfully, CVS, SVN and P4 are all terrible at merging a branch a second time: they lose track of what was already merged and start reporting false conflicts.
At Adobe, on some complex projects, you’d have to coordinate with someone before each checkin during lockdown. He’d bracket batches of commits with tags, then carefully merge one batch of deltas at a time. Not a fun job: everyone is waiting on you while you juggle lots of code you did not write, at a critical juncture of the project.
The other time source control let me down in a big way was on my trip to India a couple of years ago. Access to the source control system back in San Jose was so poor that it changed how I worked, in a bad way: I stopped verifying the diffs and checkin comments for affected code before making changes, and I batched up all syncs and checkins during breaks (and yes, took more breaks).
The reasons Git is superior:
Local history, local branches
I started a new project by creating a git repository on my local machine (git init, git add). A few months later, I wanted to share the code with a friend, so I cloned my local repository into a bare repository on a hosted Linux VPS and gave out that URL (git clone ssh://myserver.com/var/git/myapp.git). Now I can “git push” and “git pull” changes to/from that remote server as needed to share or back up my work. Each repository maintains the entire history of shared branches, so even if there is a central repository, you use it less often. When you hit conflicts trying to push or pull, there’s one straightforward process to merge and resolve them.
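A minimal sketch of that workflow, using a local bare repository as a stand-in for the hosted server (the file names and identity settings are invented for illustration):

```shell
cd "$(mktemp -d)"                  # scratch area for the sketch

# Start a project with nothing but a local repository.
mkdir myapp && cd myapp
git init -q
git config user.email me@example.com && git config user.name Me
echo "hello" > README
git add README
git commit -q -m "first commit"

# Later, share it: clone into a bare repository "on the server"
# (a local path stands in for ssh://myserver.com/var/git/myapp.git).
git clone -q --bare . ../myapp.git
git remote add origin ../myapp.git

# From now on, push and pull as needed to share or back up work.
echo "more" >> README
git commit -q -am "more work"
git push -q origin HEAD
```

The full history travels with every clone, so the “server” here is just another repository.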
Occasionally you need to put work on hold to fix some other, more important bug. Git lets you stash away your changes in a temporary holding area (“git stash”), do the fix, then bring your changes back with “git stash apply”, all without touching a server.
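For example (a toy repository; the file name is made up):

```shell
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email me@example.com && git config user.name Me
git commit -q --allow-empty -m "base"

echo "half-done" > feature.txt     # work in progress...
git add feature.txt

git stash           # urgent bug arrives: shelve the work; tree is clean
# ...fix the bug, commit it, maybe switch branches and back...
git stash apply     # the half-done change returns, untouched
```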
Because you can check in changes to your own repository without affecting others and without having to run the complete test suite, your checkins tend to be smaller, which improves the quality of your version history. At Adobe I was known for massive checkins, sometimes with as many as 10 bug fixes in one, because the test suites took an hour or more to run; I could run them at most twice a day without interrupting my work. Later this cost me time when trying to identify or merge a particular fix. With Git you commit to your local repository at natural intervals for history, and push/pull at natural intervals for synchronization.
On all but the smallest projects, you need test environments that are isolated from active development prior to release. Usually you tell coders to stop checking in changes during lockdown, or you create a branch and start merging. Either way slows you down at the most critical phase of the project.
With Git you define a separate server repository for each level of isolation required. You might have a development repository that developers sync to, a staging repository for testing used primarily by QA, and a live one that mirrors what is released or about to be released. During normal development, staging might automatically pull from development so QA stays on the latest. After lockdown, you turn that off: QA moves changes from the development repository into staging as needed, and syncs staging to live. Any developer can change their default repository and sync to staging or live directly when problems arise.
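A sketch of such a setup, with local bare repositories standing in for the three servers (all names and paths here are invented; in practice these would be ssh:// URLs on separate machines):

```shell
cd "$(mktemp -d)"

# Three levels of isolation, each its own repository.
git init -q --bare development.git
git init -q --bare staging.git
git init -q --bare live.git

# A developer's repository normally syncs with "development" only.
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev
git remote add development ../development.git
git remote add staging     ../staging.git
git remote add live        ../live.git

echo "feature" > file.txt
git add file.txt && git commit -q -m "feature work"
git push -q development HEAD

# After lockdown, QA promotes a vetted state into staging, then live.
git push -q staging HEAD
git push -q live HEAD
```

Because every repository is a peer, “promotion” is just a push from one to the next; nothing about the development repository changes.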
So far, I like the performance characteristics of Git. Given the architecture, some things are faster and some are slower, but I suspect that since Linus wrote the core, most day-to-day operations are faster even on large projects. Version information is maintained per-repository, not per-file, so getting the changes that affect an individual file can be slower: see the “git blame” command (similar to cvs annotate). But commit, push and pull commands have so far been very fast for me. Although Git does not store changes as diffs, but instead stores whole files as compressed blobs, space has not been an issue.
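To see blame at work, here is a toy repository (names and contents invented); Git derives the per-line annotations by walking whole-repository history rather than reading a per-file log:

```shell
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email alice@example.com && git config user.name Alice
printf 'line one\n' > notes.txt
git add notes.txt && git commit -q -m "first line"
printf 'line one\nline two\n' > notes.txt
git commit -q -am "second line"

# Like "cvs annotate": each line labeled with the commit that last touched it.
git blame notes.txt
```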
Smarter Than You’d Expect
Renaming a file? Git figures that out automatically by comparing SHA1 hashes. Git can even figure out when you refactor a big chunk of one file into another one. It even draws ASCII art during each push/pull to show you added/removed chunks.
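A quick way to see the rename detection (file names are made up); no special rename command is needed, because Git recognizes the unchanged content:

```shell
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email me@example.com && git config user.name Me
seq 1 100 > old_name.txt
git add old_name.txt && git commit -q -m "original file"

mv old_name.txt new_name.txt   # a plain filesystem move
git add -A

# Git notices the identical content hash and reports a rename:
git status --short             # prints "R  old_name.txt -> new_name.txt"
```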
Verifies All Files
Kernel programmers tend to be paranoid (a good thing). Git verifies the integrity of all files using SHA1 hashes. If any bit is out of place, it will barf with a cryptic error that may require a Google search to fix. But this has already paid off for me: one problem I had with Git on Windows was running it in Cygwin without newlines getting destroyed (it only works in one of Cygwin’s binary modes). Git complained, which prevented me from checking in any corrupted files.
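You can also ask for that integrity check explicitly with git fsck, which re-hashes the whole object database (toy repository below):

```shell
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email me@example.com && git config user.name Me
echo "data" > file.txt
git add file.txt && git commit -q -m "one commit"

# Walks every object and verifies its SHA1 name against its contents;
# reports nothing alarming when the store is intact.
git fsck --full
```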
My favorite app server, Resin, is now using Git behind the scenes to sync files across a cluster of servers. I like that use because a) it is pretty fast, b) it makes it easy to make an isolated change on a live server while tracking that change robustly, c) you can check the history even on production, and d) the verification comes in super handy here: any local changes can be detected and traced.
As with all new technology there are caveats. Git is still fairly low-level, has numerous options, and does not always follow industry-standard conventions (e.g. to revert a file: “git checkout file”). It takes more thought to set up repositories and workflows, and the two-phase commit/push process requires some mental re-wiring. Because it is so flexible, people are still figuring out how best to use it for different purposes. Since no one is making money off of Git (except maybe GitHub?), it is evolving fairly slowly in the “polish” area. But from now on, for me it’s gotta be Git.