Subversion’s Future?
According to Google Analytics, one of the most heavily trafficked posts on my blog is the one I wrote years ago, the Risks of Distributed Version Control. It’s full of a lot of semi-angry comments about how wrong I am. I thought I would follow up to that post with some newer thoughts and news.
I have to say, after using Mercurial for a bit, I think distributed version control is pretty neat stuff. As Subversion tests a final release candidate for 1.5 (which features limited merge-tracking abilities), there’s a bit of angst going on in the Subversion developer community about what exactly the future of Subversion is. Mercurial and Git are everywhere, getting more popular all the time (certainly among the 20% trailblazers). What role does Subversion — a “best of breed” centralized version control system — have in a world where everyone is slowly moving to decentralized systems? Subversion has clearly accomplished the mission we established back in 2000 (“to replace CVS”). But you can’t hold still. If Subversion doesn’t have a clear mission going into the future, it will be replaced by something shinier. It might be Mercurial or Git, or maybe something else. Ideally, Subversion would replace itself. 🙂 If we were to design Subversion 2.0, how would we do it?
Last week one of our developers wrote an elegant email that summarizes a potential new mission statement very well. You should really read the whole thing here. Here’s a nice excerpt:
I'm pretty confident that, for a new open source project of non-huge size, I would not choose Subversion to host it [...] So does that mean Subversion is dead? That we should all jump ship and just write a new front-end for git and make sure it runs on windows? Nah. Centralized version control is still good for some things:
* Working on huge projects where putting all of the *current* source code on everyone's machine is infeasible, let alone complete history (but where atomic commits across arbitrary pieces of the project are required).
* Read authorization! A client/server model is pretty key if you just plain aren't allowed to give everyone all the data. (Sure, there are theoretical ways to do read authorization in distributed systems, but they aren't that easy.)
My opinion? The Subversion project shouldn't spend any more time trying to make Subversion a better version control tool for non-huge open source projects. Subversion is already decent for that task, and other tools have greater potential than it. We need to focus on making Subversion the best tool for organizations whose users need to interact with repositories in complex ways[...]
I’ve chatted with other developers, and we’ve all come to some similar private conclusions about Subversion’s future. First, we think that this will probably be the “final” centralized system that gets written in the open source world — it represents the end-of-the-line for this model of code collaboration. It will continue to be used for many years, but specifically it will gain huge mindshare in the corporate world, while (eventually) losing mindshare to distributed systems in the open-source arena. Those of us living in the open source universe really have a skewed view of reality. From where we stand, it may seem like “everyone’s switching to git”, but then when you look at a graph like the one below (which shows all public (not private!) Apache Subversion servers discoverable on the internet), you can see that Subversion isn’t anywhere near “fading away”. Quite the opposite: its adoption is still growing quadratically in the corporate world, with no sign of slowing down. This is happening independently of open source trailblazers losing interest in it. It may end up becoming a mainly “corporate” open source project (that is, all development funded by corporations that depend on it), but that’s a fine way for a piece of mature software to settle down. 🙂
I’ve noticed the same kind of trend, and I think that we’re moving (at least for corporate uses) towards a hybrid model, where a system like Subversion is the core, and the 20% use some DVCS as a more powerful interface to that centralized system.
I’m glad to see I’m not alone in thinking that the corporate world (especially someplace like Google or Apple with loads of IP to protect) will stay on a fundamentally central model. I think that Mercurial and Git will eventually both become sort of super-clients, that can be used as either a DVCS or as an interface to Subversion (or CVS/whatever) itself.
The one fear I have about this is that right now at work I do my work in feature branches, which means that people can see and comment on my progress if desired, and even if I’m hit by a bus my work can still be finished in a hurry. With a DVCS tool working as a super-client, you (probably) end up losing this safety net. Got any thoughts on that potential loss?
Well said. And note that the centralized version-control space is a big place, and not all of it has been explored — that is, Subversion is still growing new features, and will continue to do so. “Maturity” does not necessarily mean “settling down”. The part you quote from David Glasser is important: “We need to focus on making Subversion the best tool for organizations whose users need to interact with repositories in complex ways…” For example, sparse checkouts, complex authorization needs, preservation of auditable review trails, etc. These things are probably most useful to the corporate world, and to governmental and military organizations. Subversion has an interesting future… 🙂
@Augie With DVCS you can still opt to push your local development branches over to the “main” repository. The only commits that are local-only will be those that happen before you’re able to reach the Internet and push upward. This is wholly a choice of the user, and is not mandated by the DVCS paradigm. I tend to push branches people may be interested in tracking, and rely on regular backups for those that are truly experimental.
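Publishing a local feature branch so teammates can follow it (and finish it if the bus ever arrives) is a single command; the branch names and URLs below are just examples:

    # publish a local branch to the shared repository (names are illustrative)
    git push origin my-feature
    # or, in Mercurial, push local changesets (including named branches) upstream
    hg push ssh://hg.example.com/proj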
@author It’s possible with Git to create a “shallow clone”, although it isn’t a standard practice (see http://www.gelato.unsw.edu.au/archives/git/0511/11390.html). This is an area where Git could be improved by those with experience in the needs of centralized users.
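The mechanics are just one extra flag on the clone, though shallow repositories came with a number of limitations at the time; the URL here is a placeholder:

    # fetch only the most recent revision of history instead of the full project history
    git clone --depth 1 git://example.com/project.git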
While I’m personally fascinated by the rise of Distributed VCS (DVCS) in the open source world and, separately, the amount of buzz it is getting, I think it is far from a done deal.
VCS is a broad field: disparate examples include the needs of vast organizations, the most distributed of development teams, and small shops/groups. At the moment DVCS seems to fit large-scale distributed development teams well, but fits other models relatively poorly.
It will be interesting to see whether DVCS can “cross the chasm” from the early adopters to broad acceptance. Even if it does, Subversion and the commercial VCS systems still have a place. The balance may change, but the need for a mature, robust and scalable free centralized VCS (i.e. Subversion) will continue for a long, long time.
———————-
Christian Knott
alphasoftware.com
So I don’t think I agree with the reasons why the centralized approach is reasonable; I am a git user, but I am sure that there are non-git DVCSes which allow for the same functionality. If they don’t… why do they still exist? 🙂
* putting all of the *current* source code on everyone’s machine is infeasible
Git shallow clones permit me to clone an incomplete tree if I am not interested in all the history.
Git submodules allow the maintainers to split their work into functional units. Users can then clone the appropriate unit they need. I agree that this requires some planning… unlike CVS and SVN, which allow you to do this on the fly.
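Roughly, the split looks like this (the URL and paths are made up for illustration):

    # record an external project as a submodule, pinned to a specific commit
    git submodule add git://example.com/libfoo.git lib/foo
    # on a fresh clone, register and fetch the pinned submodule revisions
    git submodule init
    git submodule update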
* Read authorization!
There is no such thing with SVN today. If I have SVN access to the tree, I can, with only one command, make a git repository with all the history, which I can then share with the world.
I’m not at all ready to give up Subversion as my base repository for my projects and for projects for clients. However, I love the speed of Git and I love how trivial branching works. I love how the history is local. I want the best of both worlds… and so I use git-svn to bridge. It would be interesting to see better cross-system tools, as I have noticed quirks with this approach. Allowing SVN to be the “official” repository while letting developers use git is an interesting model.
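The bridge itself is only a handful of commands; this is a rough sketch assuming a standard trunk/branches/tags layout and a placeholder URL:

    # mirror the SVN history into a local git repository
    git svn clone --stdlayout http://svn.example.com/repos/proj
    cd proj
    git checkout -b feature        # cheap local branch for day-to-day work
    git svn rebase                 # pull in new SVN revisions
    git svn dcommit                # replay local commits back as SVN commits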
I disagree with the sentiment that an open collaboration structure encourages anti-social behavior.
In its defense I want to point out everybody’s poster child of DVCS usage, Linux, which wouldn’t be possible with a centralized, everything-lives-in-one-place environment (partly out of sheer size, partly because there are many gatekeepers).
Also keep in mind that to deal with the bomb phenomenon, git and mercurial can accept patchsets (and stack them), so that you can try out a bomb. If it explodes in your face you can scold the submitter of the patch and drop the patchset again, and if it works out fine you can merge it into your repository.
Finally, do also keep in mind that code review, especially of big projects, can’t be done by one person alone, and if you intend to organize the workflow of this in any way, you end up with a tree structure where patches flow upstream by review and merge, and eventually they arrive at the reference repository, from where they propagate downstream to everybody else again. DVCS is what allows this.
Meh, honestly, the two bullet points quoted in the post aren’t that valid.
“Working on huge projects…”
This isn’t that valid since hard drive space is cheap. There are plenty of projects I’d say are *extremely* large that are hosted in DVCSes (Solaris comes to mind). In this regard, the gains you get in SVN are extraordinarily minor to the point of barely even being a blip…
“Read authorization!”
This was the one reason I stuck with SVN as long as I did… but once Hg came out with the easy to set up hgweb script this is no longer an issue. I virtually have *every* read/write authorization capability that I had with SVN, except now I also have distributed version control to boot. Anyone who seriously attempts to argue this point simply hasn’t looked at what Hg offers in this regard (though the point is still valid with every other DVCS out there: git, bzr, darcs, they all fail this).
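For the curious, a minimal sketch of what I mean (the usernames are illustrative; these lines go in the served repository’s hgrc, in the [web] section that hgweb reads):

    [web]
    allow_read = alice, bob    # only these users may read the repository
    deny_read = mallory        # explicitly refused; deny takes precedence
    allow_push = alice         # who may push over HTTP(S)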
Now, you may argue that having the repo on a local filesystem that is no longer protected by an authenticated server/client scheme nullifies this and makes it a problem again (after all, people can just “hg clone” the local filesystem repository), but it doesn’t make any problems that weren’t already present in SVN (a local SVN checkout can still be copied and moved around just as easily, SVN doesn’t protect any data other than history.)
So anyway, both of these points really aren’t valid generally. And, specifically with regard to Hg, the points aren’t true.
While it may be fine for hobbyists and early-adopters to experiment with new technologies like GUIs and user-friendly fads, corporate users and real-world practices still depend on DOS for day-to-day operations and will continue to do so for the foreseeable future. Lotus 1-2-3 dominates the market so convincingly, they’ll have no difficulty translating that lead to whatever new platforms their customers adopt in the future. — The enterprise customer of 1990
At least I already know the project team has chosen to hold to project management practices which are out of step with a global, and increasingly mobile, workforce. So if Subversion has successfully replaced CVS, then it is the new CVS. Goodbye, CVS!
I was thinking that read authorization was so that groups A, B, and C could all see different parts of the project while, say, group D can only read parts of B & C. This comes up in the corporate world where various political and financial matters dictate who can see what or write to what.
Does write authorization come into play here also?
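For concreteness, the kind of per-path rules I have in mind look a lot like a Subversion authz file (the group and path names here are invented):

    [groups]
    groupB = bob, beth
    groupD = dave

    [/projectB]
    @groupB = rw     # group B can read and write its own project
    @groupD = r      # group D may only read it

    [/projectA]
    @groupD =        # empty rights: group D cannot even see project A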
Here’s the way I (ab)use Subversion: I store my entire home directory in it. Yup, I have an approximately 25GB Subversion repository containing all my music, personal programming projects, OpenOffice.org and LaTeX documents, the whole works. It’s hosted on an old Ultra 10, and I have this repository checked out on each of my machines.
What amazes me is this actually works. I ran into a 2GB file-size limit problem with the fsfs backend, but once I switched to Berkeley DB, I was able to “svn import” my entire home directory fine, and commits are reasonably fast.
I recognize that this is probably an abuse of the tool, but Subversion has provided me a convenient way to get file system synchronization, history, and incredibly good undelete protection for not much cost. As an added bonus, it also provides source code version control for my programming projects as a standard part of the system. I can’t even imagine doing something like this with git — it would probably just crash. Subversion is an irreplaceable asset to the OSS community.
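For anyone curious, setting it up is only a couple of commands (the paths and hostname here are just examples):

    # create the repository with the Berkeley DB backend, then import everything
    svnadmin create --fs-type bdb /export/repos/home
    svn import ~/ file:///export/repos/home -m "Initial import of home directory"
    # on each other machine, check out a working copy over svn+ssh
    svn checkout svn+ssh://ultra10/export/repos/home ~/home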
Hard drive space isn’t cheap when the master project is hundreds of gigs large. Open source projects in general tend not to be large (the flattering explanation is that maybe such projects are better at code re-use), and can almost always be checked out in their entirety. But many corporate projects are simply too big to check out — not only in terms of disk space, but in terms of time: you need to be able to select subsets in order to have manageable checkout and update times, if the amount of data and/or churn rate is too high.
If you “need to focus on making Subversion the best tool for organizations whose users need to interact with repositories in complex ways” then doesn’t the new mission statement become “to replace ClearCase?”
I can think of worse goals for a VCS.
I’m still sold on Perforce. Centralized, but merging/integrating/etc feels a whole lot more natural and so I have a hard time following the centralized/decentralized debate. It feels more like I’d want to use git in any case over subversion (especially for open source), but Perforce everywhere else (especially in a company).
Any long-time Perforce and git users to comment?
I agree strongly with what you said about people in the open source universe having a skewed view. I used to work for a giant corporate IT services provider, and I would very highly doubt that the corporate world will ever start using distributed version control systems en masse. We had teams of 10 or so people that supported hundreds of software applications, ranging in size from 5 KLOC to 500 KLOC. In this context, it just wouldn’t make sense to use a DVCS. People might not touch the code of a particular application for a year; they would only check it out from the repository whenever they needed to make changes, and afterwards they would often delete it to reclaim hard disk space on their machines. That environment is much better suited to a centralised version control system. And most applications that I’ve seen are still on CVS; when I left we were only just starting to use SVN for some in-house experimental projects, though SVN was on the cards to replace CVS in the near future. If anything ever replaces SVN in the corporate world, it won’t be a DVCS.
The other thing to keep in mind about “enterprise version control” is that there is still a huge amount of work that happens in an enterprise that should be under version control, but is not. You’d think that wouldn’t be true by now, but you’d be surprised.
Part of that is related to the fact that it’s easier than ever for people who don’t have the word “developer” in their job title or as their primary job function to be creating and maintaining IT-related work. And it’s not just people writing macros in spreadsheets, either.
Getting those folks to use version control, centralized or not, is still a huge challenge. Making it easier for the non-“developer” to gain the benefits of version control is key. The (relative) ease of use of Subversion is certainly part of the reason for its ongoing success in the enterprise.
I am an independent developer who has been working solo for about 5 months, and gradually hope to add other developers to the team. We will be working in an essentially centralized fashion, but we will not be using Subversion. I am fully on board the DVCS train, enjoying Bazaar until they stop competing with Mercurial. I don’t know if I’ll ever use “svnadmin” again.
I am grateful for this candid post. I had sort of left feeling bitter that the Subversion team did not understand my needs as a user. Fixing the bundled documents issue with the .svn litter might have kept me on board a little while longer, and maybe I would have figured out how to use SVK on my laptop. In the end though, I am just not in your target market, nor are a lot of other developers. I am glad the Subversion team understands this.
You have a great centralized system, provided some of the long-standing annoyances can be resolved. Many developers will have an SCM system forced upon them by someone who doesn’t have to use it much themselves. It’s great to know that you will be working to make Subversion that system.
I guess this is farewell, but with all the good wishes that implies. Thanks to anyone who has contributed or will contribute to this project. You earned your popularity in the open source world, and I have no doubt you will earn your consulting fees in a more lucrative market!
DCVS won’t take over the world as long as its users keep claiming there is absolutely nothing wrong with it and everything is perfect.
Examples:
“hard drives are cheap” – Great talking point, completely useless in the real world, where buying bigger hard drives for 10,000 developers (or really large NFS servers) is not feasible.
“Subversion’s read authorization doesn’t prevent you from copying checkouts anyway” – This is only mildly relevant (it has nothing to do with read authorization, and everything to do with local security), and where most DCVS have *no fine-grained control at all*, it is not an acceptable answer. Just because something doesn’t do an amazing job at a task doesn’t mean you can do nothing and still claim to be doing okay.
Lastly, if DCVS’s like git want to actually take over the world, they need to stop telling everyone else they are broken, stop trying to change everyone else’s workflow, and learn to work with their potential users. Telling users “hey, just change the way you do everything” just doesn’t cut it, even if your way really is better (which is often debatable). You don’t get anywhere by trying to change everyone else, but instead by learning to work with everyone else. You certainly can’t be everyone’s perfect tool, but most DCVSen (hg is pretty different here, since they make a concerted effort to find out what their potential users want) are too far on the extreme side of “no, change the way you work” when it comes to UI.
(I’m the author of the quoted message, for what it’s worth.)
For those claiming that “putting all of the *current* source code on everyone’s machine is infeasible” is not a real use case, just trust me that to me, this is not a theoretical statement. As Karl says, many corporate projects just can’t be checked out at once. Organizing code into multiple repositories (“hg forest” / “git modules” / etc) is a partial solution, but suddenly you lose the ability to atomically commit and branch across your whole codebase, which is a serious requirement for many organizations. (These solutions are similar to svn’s externals, and like externals they end up being flawed in practice.)
With respect to read authorization:
bartman, yes, of course given read access to the tree, a human can send restricted data to those who shouldn’t see it. But there’s no practical technical solution to this social problem. There’s also nothing technically stopping the folks with the USA’s nuclear access codes from calling me and telling them to me; however, there’s a lot more stopping me from finding them out without a cooperating insider. Similarly, Subversion’s read-based access control prevents an attacker from reading secret parts of the repository, though it certainly doesn’t stop an insider from leaking.
Sam Hart: Can you show me a pointer to hg’s support for read authorization at the sub-repository level? Last time I checked, there was no such thing. (This is pretty different from the ability to control access to an entire repository, of course.) If there is, I’d be incredibly curious to find out how they do it! (The only ways I know of to implement read access control in a git model or mercurial model involve a lot of crypto overhead, with all the usual fun PKI problems.)
Actually, there already exists a distributed version of Subversion: SVK…
I am already using subversion in a distributed way with this program.
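Roughly, my workflow looks like this (the depot paths and URL are just examples):

    # mirror the central Subversion repository into a local SVK depot
    svk mirror //mirror/proj http://svn.example.com/repos/proj
    svk sync //mirror/proj
    # branch it locally, then work and commit while offline
    svk copy -m "local branch" //mirror/proj //local/proj
    svk checkout //local/proj
    # later, merge the local changes back to the central server
    svk push //local/proj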
Don’t know of ANYONE in the commercial world using distributed source control.
You’re correct: that small sector of open source is most definitely out of the loop. Must be the same bunch claiming ruby is the only language anyone uses 🙂
Seriously: there are a bunch of people who flit from one “cool” thing to the next with no real idea of how the rest of the world works. Writing bits of unused throwaway code in unused throwaway fad languages while the rest of the world uses java or c# to get things done that service userbases ranging from hundreds to tens of thousands of users doing REAL work.
Subversion is very much used and on the up (as the graph shows). If anyone’s considering ditching subversion development or doesn’t see the future in it: let me be the first to give you a good slap in the back of the head.
FYI: Subversion, as a project, is still listening to users. As DannyB said, we’re not in the business of telling users to change the way they work; in general we take feedback and try to give users what they ask for.
The last year has been harrowing for the svn developer community, because (surprise surprise), implementing merge-tracking is a Really Hard problem. It was the one big thing left that commercial centralized systems could do (Perforce, Clearcase, etc.) but Subversion couldn’t. Subversion 1.5 (now in release candidate) should ease most of the pain here: users will now be able to repeatedly run ‘svn merge’ (with no revision arguments) and the system will usually do the right thing, just like ‘p4 integrate’ does.
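In practice the new workflow looks something like this (the URLs are placeholders): from a branch working copy you can keep re-running the same merge, and the recorded mergeinfo works out which revisions are still needed:

    cd branch-wc
    svn merge http://svn.example.com/repos/proj/trunk    # no -r arguments needed anymore
    svn commit -m "Sync branch with trunk"
    # when the branch is done, fold it back into trunk
    cd ../trunk-wc
    svn merge --reintegrate http://svn.example.com/repos/proj/branches/feature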
The next Big Thing on the plate is a major rewrite of the working copy: centralized metadata (no .svn/ turds littered around!), fast scanning, the optional ability to keep things read-only (and an ‘svn edit’ command to make them read-write), and a general goal of making the working copy less ‘fragile’ in terms of locking up or getting into bad states. This is a serious usability hurdle, but will be well worth the effort.
And of course, some of us are still flirting with the idea of borrowing DVCS features as well. What if a working copy could store some or all of the repository history, to allow offline commits? What if svn repositories really could be taught to swap changes with each other? Lots of blue-sky ideas out there.
One thing that is overlooked here is that many corporate development teams are still using CVS, and still talking about migrating to SVN.
Not to mention the idiots who are still using M$ source-unsafe, and thinking about moving to CVS or SVN (or not).
You make the disk-space argument against distributed VCS, yet don’t acknowledge that for many projects, a complete git clone still takes less disk space than a simple SVN HEAD checkout.
Others have pointed out that git can shallow clone, chipping away further at the disk space argument. “Sparse checkout” is an upcoming feature that will let you check out only part of a tree. It’s taken a while to arrive, mostly because most people just don’t need it.
David Glasser, check your information – git’s submodules are inherently atomic. Probably hg’s forest, too. Just because svn:externals are flawed, does not mean that the others are flawed, too.
We use Mercurial for about 20 projects involving about 50 developers. We originally used Subversion but needed to work remotely with other company sites. We also do kernel development and wanted something git-like but easier to use.
The problem is that as the number of developers grows and the complexity of the access requirements increases, I look longingly at enterprise VCS solutions rather than the crude sort of shell-based script hacking that has to go on behind the scenes to keep everything afloat.
Sean
One thing that is overlooked here is that svn and cvs sucks – even on large projects. Centralized source control is dead.
As a corporate SVN user, I’m not worried about SVN losing mindshare to DVCS, I’m worried about SVN losing mindshare to Microsoft Team Foundation Server.
@Tim Dysinger: thanks for that deep, insightful comment. 🙂
Very well said. Having looked at the usage scenarios for distributed version control systems, the idea of people owning their branch only makes sense if you have really good (superhero) developers who are morally and ethically strong about writing good code. In an organization, most developers do not fall into that bracket. Also, from an organization’s point of view, the whole notion of owning the source code ties very well with a centralized version control system. I agree with the post that SVN is in the right segment and should stick to its centralized paradigm, which is what it’s good at. Much, much better than what CVS used to be :).
Yet another mostly fact-less, mostly baseless, mostly knowledge-less article on DVCS. Couldn’t you just have stopped after the stupid tripe of your “Version Control and “the 80%”” article?
Please, in the future, I beg you to stop trying to talk about DVCS, and stop trying to damage control for subversion, these articles are neither informative nor informed and it shows to anyone who’s ever tried to understand DVCS.
Just ignore DVCS, and everything will be much better for everybody.
@Masklinn: you really thought my post was about DVCS? I thought it was about Subversion finding a niche. If you have real feedback, rather than just angry insults, I’d like to hear.
“Svn is a good implementation of a model of version control which is being rapidly left in the dust” – I don’t know if this person meant distributed vs centralized or branch and label vs. stream-based CM.
I don’t think Svn has anything to worry about from Microsoft, but users may want to think about moving to something other than an interim solution to a branch and label architecture, only to require yet another tool swap in 2-3 years.
Sam: Yes, I’m aware that git clones are often smaller than Subversion HEAD checkouts. (I use git! I like git! Did anyone even read the quoted part about “I wouldn’t use svn for a new moderately sized open source project”?) The situations I’m referring to are ones where a HEAD checkout is too big for one machine. They exist. Seriously. (If you do a little research you might be able to figure out the main example I’m thinking of.)
I mean, you say “It’s taken a while to arrive, mostly because most people just don’t need it.” about shallow checkouts. (And unless shallow checkouts also includes shallow (in the space dimension) cloning, it’s not useful here, by the way.) That’s *exactly* the point Ben and I are making. There are lots of fundamentally different use cases for version control. From the open-source perspective, “most people” don’t need to deal with projects that can’t be checked out onto a single machine, or projects with specific files that shouldn’t be readable by everyone. And if you don’t, then great! Something like git is going to work fine, and (ignoring the issues of portability, API stability, and usability, which I suspect can be fixed eventually) has many advantages over Subversion. But there are worlds where “most people” need exactly these features. And simply repeating “but these issues aren’t real because hard drives are big” is frankly missing the point.
And are you sure about git modules / hg forest supporting true atomic commits across multiple repositories? Do you have a link to technical documentation about those features that explains how this is accomplished?
@ Nath (23): We produce commercial software for a living and we use Git.
We also happen not to use Java or C#. We want our userbase to do REAL, REAL work, so we use C and Perl.
We switched away from Subversion to Git mostly because Git makes it so easy to branch, merge and cherry-pick. The “Distributed” part of DVCS wasn’t a particularly big deal for me. The clinchers were the speed and ease of merging.
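To give a flavour of what made the difference for us (the branch names and commit hash are made up):

    git checkout -b fix-1234       # create and switch to a cheap local topic branch
    # ... edit, git add, git commit ...
    git checkout master
    git merge fix-1234             # merge the finished topic branch
    git cherry-pick a1b2c3d        # or lift just one commit onto the current branch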
Ben, interesting post, thanks. I’ve posted my (somewhat lengthy) response here: http://blog.emptyway.com/2008/05/01/response-to-subversions-future/
In short, I strongly disagree that Subversion is better suited to big projects than modern DVCS like hg or git.
The “we’ve replaced CVS, now let’s replace ClearCase” idea above is EXACTLY where it needs to go. If your primary work environment is military/security/gov’t and stationary server/workstations, DVCS as a concept doesn’t buy you anything. There’s already a commercial ClearCase killer out there called AccuRev. Looking at it for ideas on which paths to follow wouldn’t be a bad idea.
And specifically regarding git (which I like, and putter around with for projects at home), corporations FREAK OUT at stuff like rebasing and fast-forwarding pushes upstream. That destroys the audit trail, which might be the only thing saving your derrière in a Sarbanes-Oxley audit.
The thing to keep in mind is that like operating systems, ALL VC solutions suck. They each suck in different ways, and under different conditions, but they ALL suck. Your mission, if you are tasked to select one, is to find the one that sucks the least for YOUR environment, and matches your CM process the closest, so it can be customized to work the way you need to work.
> you really thought my post was about DVCS?
Seeing as you mentioned DVCS more than 12 times in one form or another, yes it was.
> I thought it was about Subversion finding a niche.
Not exactly, it was about “waaah waaah subversion can still do things DVCS can’t”. In other words, damage control, and a bad one at that.
> If you have real feedback
On this post? Seeing as there are no facts, the only feedback I could give is that both points of the mail you quoted are either wrong or deeply misguided.
> DVCS as a concept doesn’t buy you anything
Sure it does, it buys you the same thing as it buys to everybody else, pretty much. Hell, Monotone probably buys you even more with its mandatory signed and checked commits.
> corporations FREAK OUT at stuff like re-basing […] That destroys the audit trail
Destroying the audit trail would require performing rebasing on stuff that’s already been committed/upstreamed, which is more than frowned upon (as it breaks other people’s repositories) and could trivially be prevented by using hierarchical trees of repositories (which is trivial to do). Rebasing is mainly a tool for linearizing one’s changes against a repository’s head instead of having to create pointless merge changesets; it’s not something you’re supposed to use on patches already upstreamed.
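The intended use looks something like this (the remote and branch names are just the usual conventions, nothing specific to a particular setup):

    # linearize *unpublished* local work on top of the upstream head before sharing it
    git fetch origin
    git rebase origin/master    # replay local commits onto the new upstream tip
    git push origin master      # publish; history that was already pushed is never rewritten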
I worked for several years in an environment where the entire source tree could not fit on a single development host. At least in my workplace, I personally don’t think atomic commits across arbitrary parts of the tree were as necessary as some people seemed to assume. They’re definitely useful if you have them (which we did), but if we didn’t have them I think we’d have adapted with very little effort.
@James Roper: 100s of
> it was about “waaah waaah subversion can still do things DVCS can’t”. In other words, damage control, and a bad one at that.
You got me! Clearly the Titanic is sinking, and you have uncovered my desperate attempt to dissuade users from jumping ship. I should drop my ruse and finally acknowledge that DVCS is clearly superior to centralized in every conceivable context, for every conceivable class of users. One size fits all, right? What was I thinking?
> Seeing as there are no facts […]
Other than a giant graph of hard numbers showing exponential svn adoption rates? When you’re ready to bring similar data to the table, let’s chat.
What continues to amaze me is that while Subversion developers keep trying to have level-headed discussions about how different systems shine or stutter in different situations, these attempts at objective analysis keep getting identified as “desperate weakness” and “damage control” by arrogant DVCS zealots. The Subversion community has always had these debates, even before we had written one lick of code. Subversion developers have *always* been eager to self-criticize, judge tradeoffs in design, and adapt to new situations. If all the chest-thumping githeads I’ve met truly believe that they’ve found the One True Solution and that “everything else is stupid” (to quote Linus), then their delusion is too deep to even take part in the conversation.
Eesh. I should know better than to feed trolls.
Ben,
I’ve been using DVCS in a commercial environment for about 5 years now. We moved from CVS to BitKeeper, which was a very significant and positive change. Recently we have moved from BK to git, which is a fairly lateral move. I understand BK is doing OK in the commercial world, and they tout significant advantages over the other big vendors, most of which are real.
In my experience the issues people have been mentioning were not a problem for us. We identified them and adjusted our processes. Things have worked out very well.
Some observations I will share based on this experience:
– For us the switch did need some training and there was some hand-wringing and change to developer process, but overall it went pretty smoothly. An interesting benefit was that people started doing whole-tree builds rather than only builds of their one little bit (bk does not allow partial tree checkout, so this became mandatory). This little thing greatly reduced the build breakage rate 🙂
– We did spend some time speeding up the whole tree build process. Optimizing makefiles, letting make -j4 work better, etc, etc. This was again overall positive. The project size was such that a full tree build on NFS was about
I don’t plan to leave Subversion. I actually like Subversion a lot:
– Great cross-platform support
– Lots of optional add-ons and possibilities for integration with other software configuration management tools
– A track record which makes me sleep well at night
– Great documentation, including good books
What I’m missing in Subversion is actually even more aspects of centralization:
– LDAP-integration in the svn access configuration files, so that groups may refer to groups in an LDAP tree
– Ways to distribute policies like “.html files should automatically be marked with the text/html content type”, or “.c files should automatically get this and that svn:keywords property” (today this only exists client-side, as sketched at the end of this comment)
Oh, and this would also be helpful:
– A way to handle charset conversions on the fly: If my locale is UTF-8 but the repository’s general charset is Latin1, I would like to be able to tell Subversion to automatically convert my checkouts to UTF-8 but convert my commits back to Latin1 when committing
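For comparison, the client-side mechanism that exists today lives in each user’s ~/.subversion/config, so every developer has to be told to configure it; the server cannot push it out. A minimal example:

    [miscellany]
    enable-auto-props = yes

    [auto-props]
    *.html = svn:mime-type=text/html
    *.c = svn:keywords=Id Author Date Revision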