Archive for category Computers

Somebody try this, please.

Posted on Thursday, 22 May, 2008

Allow me to geek out for a second. I’ve come up with a new recipe for ultimate geekiness:

  1. Build this c-compiler for the z-machine platform.
  2. Use the c-compiler to build this super-simple lisp interpreter.
  3. Distribute the lisp interpreter as a .z5 game.
  4. Profit!

If you have no idea what any of this means, it’s ok. I have a strange sense of humor today.

Subversion 1.5 merge-tracking in a nutshell

Posted on Saturday, 10 May, 2008

As I’ve mentioned in other posts, the Subversion project is on the verge of releasing version 1.5, a culmination of nearly two years of work. The release is jam-packed with some huge new features, but the one everyone’s excited about is “merge tracking”.

Merge-tracking is when your version control system keeps track of how lines of development (branches) diverge and re-form together. Historically, open source tools such as CVS and Subversion haven’t done this at all; they’ve relied on “advanced” users carefully examining history and typing arcane commands with just the right arguments. Branching and merging is possible, but it sure ain’t easy. Of course, distributed version control systems have now started to remove the fear and paranoia around branching and merging—they’re actually designed around merging as a core competency. While Subversion 1.5 doesn’t make merging as easy as a system like Git or Mercurial, it certainly solves common points of pain. As a famous quote goes, “it makes easy things easy, and hard things possible.” Subversion is now beginning to match features in larger, commercial tools such as ClearCase and Perforce.

My collaborators and I are gearing up to release a 2nd Edition of the free online Subversion book soon (and you should be able to buy it from O’Reilly in hardcopy this summer). If you want gritty details about how merging works, you can glance over Chapter 4 right now, but I thought a “nutshell” summary would make a great short blog post, just to show people how easy the common case now is.

  1. Make a branch for your experimental work:

    $ svn cp trunkURL branchURL
    $ svn switch branchURL

  2. Work on the branch for a while:

    # ...edit files
    $ svn commit
    # ...edit files
    $ svn commit

  3. Sync your branch with the trunk, so it doesn’t fall behind:

    $ svn merge trunkURL
    --- Merging r3452 through r3580 into '.':
    U button.c
    U integer.c
    ...

    $ svn commit

  4. Repeat the prior two steps until you’re done coding.
  5. Merge your branch back into the trunk:

    $ svn switch trunkURL
    $ svn merge --reintegrate branchURL
    --- Merging differences between repository URLs into '.':
    U button.c
    U integer.c
    ...

    $ svn commit

  6. Go have a beer, and live in fear of feature branches no more.

Notice how I never had to type a single revision number in my example: Subversion 1.5 knows when the branch was created, which changes need to be synced from trunk to branch, and which changes need to be merged back into the trunk when I’m done. It’s all magic now. This is how it should have been in the first place. 🙂
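If you’re curious where that bookkeeping lives, 1.5 records merge history in a versioned property called svn:mergeinfo on the merge target, and you can peek at it directly from a working copy (the output below is illustrative, not from a real repository):

    $ svn propget svn:mergeinfo .
    /trunk:3452-3580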

Subversion 1.5 isn’t officially released yet, but we’re looking for people to test one of our final release candidate source tarballs. CollabNet has also created some nice binary packages for testing, as part of their early adopter program. Try it out and report any bugs!

Subversion’s Future?

Posted on Tuesday, 29 April, 2008

According to Google Analytics, one of the most heavily trafficked posts on my blog is the one I wrote years ago, the Risks of Distributed Version Control. It’s full of a lot of semi-angry comments about how wrong I am. I thought I would follow up to that post with some newer thoughts and news.

I have to say, after using Mercurial for a bit, I think distributed version control is pretty neat stuff. As Subversion tests a final release candidate for 1.5 (which features limited merge-tracking abilities), there’s a bit of angst going on in the Subversion developer community about what exactly the future of Subversion is. Mercurial and Git are everywhere, getting more popular all the time (certainly among the 20% trailblazers). What role does Subversion — a “best of breed” centralized version control system — have in a world where everyone is slowly moving to decentralized systems? Subversion has clearly accomplished the mission we established back in 2000 (“to replace CVS”). But you can’t hold still. If Subversion doesn’t have a clear mission going into the future, it will be replaced by something shinier. It might be Mercurial or Git, or maybe something else. Ideally, Subversion would replace itself. 🙂 If we were to design Subversion 2.0, how would we do it?

Last week one of our developers wrote an elegant email that summarizes a potential new mission statement very well. You should really read the whole thing here. Here’s a nice excerpt:

I'm pretty confident that, for a new open source project of non-huge
size, I would not choose Subversion to host it [...]
 
So does that mean Subversion is dead? That we should all jump ship
and just write a new front-end for git and make sure it runs on
windows?

Nah. Centralized version control is still good for some things:

* Working on huge projects where putting all of the *current* source
  code on everyone's machine is infeasible, let alone complete
  history (but where atomic commits across arbitrary pieces of the
  project are required).
* Read authorization! A client/server model is pretty key if you
  just plain aren't allowed to give everyone all the data. (Sure,
  there are theoretical ways to do read authorization in distributed
  systems, but they aren't that easy.)

My opinion? The Subversion project shouldn't spend any more time
trying to make Subversion a better version control tool for non-huge
open source projects. Subversion is already decent for that task, and
other tools have greater potential than it. We need to focus on
making Subversion the best tool for organizations whose users need to
interact with repositories in complex ways[...]

I’ve chatted with other developers, and we’ve all come to some similar private conclusions about Subversion’s future. First, we think that this will probably be the “final” centralized system that gets written in the open source world — it represents the end-of-the-line for this model of code collaboration. It will continue to be used for many years, but specifically it will gain huge mindshare in the corporate world, while (eventually) losing mindshare to distributed systems in the open-source arena. Those of us living in the open source universe really have a skewed view of reality. From where we stand, it may seem like “everyone’s switching to git”, but then when you look at a graph like the one below (which shows all public (not private!) Apache Subversion servers discoverable on the internet), you can see that Subversion isn’t anywhere near “fading away”. Quite the opposite: its adoption is still growing quadratically in the corporate world, with no sign of slowing down. This is happening independently of open source trailblazers losing interest in it. It may end up becoming a mainly “corporate” open source project (that is, all development funded by corporations that depend on it), but that’s a fine way for a piece of mature software to settle down. 🙂

A computer in my pocket

Posted on Friday, 7 March, 2008

It’s astounding to me that Americans tolerate our mobile phone landscape. Imagine going into a computer store, and being told that the computer you buy can only run on 1 of 3 different internets. And that it comes with all the software pre-installed, and that the software can’t be changed. (OK, well, you can add more software from a small, restricted supply provided by the computer vendor, and only for steep prices.) The hardware is opaque. The operating system is opaque. You have no freedom whatsoever. Would you buy this computer? Millions of Americans buy cellphones like this, and don’t think twice. Meanwhile in Japan (which is 5 years in the future) I’ve been told that the phones are so powerful and usable that they’ve actually become a replacement for laptop computers. People spend more time emailing, instant messaging, websurfing, and shopping on their phones than they do on regular computers. Have you seen any phone in the U.S. with a user interface that would allow you to do that?

Starting last fall, I’ve been pretty excited about the new generation of “smartphones” coming out. We’re starting to get closer to the ideal of “computer in the pocket.” The iPhone I carry around with me is the first phone I’ve ever had which hasn’t frustrated me. It’s a real pleasure to use. And it has set the bar incredibly high — it’s like carrying a 1998 computer wherever you go. No, it can’t do everything my desktop computer can do, but even a computer from 1998 is still pretty handy: a real web browser with fonts and CSS and JavaScript; a beautiful finger-driven email reader; and now that Apple has finally allowed developers to write native applications with the iPhone SDK, it can even play OpenGL games that appear to be from 1998. But again… it’s in my pocket. In my pocket. As the VC guy in the Apple iPhone SDK video says, the whole paradigm is changing. The computer in my pocket knows who I am, and it knows where I am. It’s intensely personal, and will change the computing game as much as the personal computer changed things.

I sound like an Apple commercial, for sure, but I’m also still really hopeful for Google’s Android platform as a major contender. Android isn’t a specific phone like the iPhone (or “gPhone”, as some have said): it’s a whole class of phones. Google got a bunch of phone manufacturers together, got them to agree on a hardware platform, and then wrote a complete Linux/Java-based phone operating system to run on this hardware specification. The entire operating system will be 100% open source when it gets released later this year, and I’ve even started learning how to write applications for it, using the Android SDK. (My first project has been to help someone write a z-machine of course, so you can run text adventures on your Android phone!) You can bet that when the first batch of Android phones is released later this year, I’ll be ditching my iPhone for one. 🙂

What’s interesting to me is that Apple and Google are now about to compete head-to-head in this market, but with completely different philosophies. Apple is the Cathedral; Google is betting on the Bazaar.

Over in Apple’s universe, there is only one single phone. The hardware and software of the phone are completely secret, and tightly controlled by a single entity. Even the distribution of applications is centralized and tightly controlled: authors must distribute them only through Apple’s iTunes Store, and only after Apple has approved them as legitimate.

In Google’s corner, though, everything is open. The hardware is merely a spec — dozens or hundreds of phones will be created that are compatible, allowing users to choose the form factors and features they want. The operating system is completely open, effectively part of the public domain. Anyone can examine or modify the system, and I expect multiple ‘distributions’ to be released with different purposes, just as there are multiple Linux distros for my computer. And as with any normal computer platform, absolutely anyone can write an application anywhere and give it to anyone else (“caveat emptor” — which means “we hope you like it!”).

I know it’s a cliche analogy, but the two worlds sort of feel like the difference between a centrally-planned, tightly-controlled economy versus a big free market. Who will win? People could argue that capitalism has historically been more successful than planned economies. People could also argue that the chaotic Windows/PC market has historically been more successful than the centralized universe of Mac computers. But Macs are starting to make a big comeback now. There’s clearly a large segment of the population that’s willing to give up some freedom for the convenience of not having to make choices. Heck, I have a Mac and love it. My days of building PCs from parts and messing around with Linux software packages are long over; my time is too valuable, and Macs Just Work. I wonder if after I have a long affair with my Android phone I’ll eventually end up going back to an iPhone? 🙂

A Peek at Google Chicago

Posted on Saturday, 1 March, 2008

The Google Chicago office (where I work) recently won an award from Crain’s Chicago Business magazine as the “best place to work in Chicago”. As part of the press, a reporter followed me around the office for a few days taking photos, movies, and interviewing me and Fitz about our corporate culture. The final result was a short article in the magazine, and a fancy web slideshow where you can watch photos while listening to the two of us narrate and ramble. This is also your chance to see AND hear us playing guitar and banjo together in the office! 🙂



Podcast #2 is up.

Posted on Sunday, 10 February, 2008

OK, we got a bunch of good questions posted to our podcast site, so we went ahead and recorded a 2nd podcast. As before, you can either download the mp3 directly from the site, or you can just subscribe directly in iTunes. (You might be able to find ‘PC Load Letter’ in the iTunes podcast directory, but if not, just open the “Advanced > Subscribe to Podcast” menu in iTunes, and enter the address http://feeds.feedburner.com/PCLoadLetter.)

Try our Podcast

Posted on Saturday, 19 January, 2008

Oh noes, not another podcast!

Yeah, well, Fitz and I have been thinking about it for a while. The two of us already have this habit of speaking at conventions and geeky gatherings together; you can watch a couple of our talks up on YouTube. Whenever we speak, we always have tons of questions from the audience and end up chatting with people for an extra hour in the hallway. So, after three different people approached us and said we oughta make a podcast, we finally capitulated. The trick is, we need QUESTIONS posted to the website, so we have something to talk about.

To download the podcast, either get the mp3 directly from http://code.google.com/p/pcloadletter, or you can just subscribe to it in iTunes. You might be able to find ‘PC Load Letter’ in the iTunes podcast directory, but if not, just open the “Advanced > Subscribe to Podcast” menu in iTunes, and enter the address http://feeds.feedburner.com/PCLoadLetter.

About Episode 1: Because we didn’t have any questions yet, we didn’t have a lot to talk about. But at least you get to hear the thrilling theme music I assembled: I played a touch of banjo over some techno-loops, and got to use a nice open-source speech synthesizer while at it. PLEASE post questions to the main website, so we can start pretending our show is a live phone-in show like Car Talk. 🙂 If we don’t get any questions, we’ll gracefully let the podcast die.

Technical Details: I have a secret double-life as a sound-designer, so I brought a bunch of recording equipment from my studio to the office. A beautiful BLUE Dragonfly microphone, going into a PreSonus vacuum tube pre-amp, going into a MOTU Ultralite firewire audio interface, going into Digital Performer 5.1 on my MacBook. It’s a wonderfully portable setup which all fits in one backpack. For our next recording, we’ll use better limiting/compression, and a “real” free-floating mic stand (so that you can’t hear us bumping the table-stand!).

New video of tech-talk is up.

Posted on Friday, 30 November, 2007

A new follow-up presentation to our now-famous Poisonous People talk has finally been posted to YouTube. Once again, Fitz and I are speaking about open source software, but addressing corporations this time (rather than developers). We gave this talk at OSCON last summer, but also gave it to the public when we were visiting Mountain View last month.

The name of the talk is What’s in it for me? Benefits from Open Sourcing Code.

Remember, our motto is caveat emptor — which means “we hope you like it!” 🙂

Version Control and the… Long Gradated Scale

Posted on Tuesday, 27 November, 2007

My previous post about version control and the 80% deserves a follow-up post, mainly because it caused such an uproar, and because I don’t want people to think I’m an ignorant narcissist. Some people agreed with my post, but a huge number of people took offense at my gross generalizations. I’ve seen endless comments on my post (as well as the supporting post by Jeff Atwood) where people are either trying to decide if they’re in the “80%” or in the “20%”, or are calling foul on the pompous assertion that everyone fits into those two categories.

So let me begin by apologizing. It’s all too easy to read the post and think that my thesis is “80% of programmers are stupid mouth-breathing followers, and 20% are cool smart people like me.” Obviously, I don’t believe that. 🙂 Despite the disclaimer at the top of the post (stating that I was deliberately making “oversimplified stereotypes” to illustrate a point), the writing device wasn’t worth it; I simply offended too many people. The world is grey, of course, and every programmer is different. Particular interests don’t make you more or less “20%”, and it’s impossible to point to a team of coders within an organization and make ridiculous statements like “this team is clearly a bunch of dumb 80% people”. Nothing is ever so clear cut as that.

And yet, despite the fact that we’re all unique and beautiful snowflakes, we all have some sort of vague platonic notion of the “alpha geek”. Over time, I’ve come to my own sort of intuition about identifying the degree to which someone is an alpha-geek. I read a lot of resumes and interview a huge number of engineering candidates at work, and the main question I ask myself after the interview is: “if this person were independently wealthy and didn’t need a job at all, would they still be writing software for fun?” In other words, does the person have an inherent passion for programming as an art? That’s the sort of thing that leads to {open-source participation, writing lisp compilers, [insert geeky activity here]}. This is the basis for my super-exaggerated 80/20 metaphor in my prior post, and hopefully a less offensive way of describing it.

That said, my experience with the software industry is that the majority of people who write software for a living do not have a deep passion for the craft of programming, and don’t do it for fun. They consume and use tools written by other people, and the tools need to be really user-friendly before they get adopted. As others have pointed out, they need to just work out of the box. The main point I was trying to make was that distributed version control systems (DVCS) haven’t reached that friendliness point yet, and Subversion is only just starting to reach that level (thanks to clients like TortoiseSVN). I subscribe to a custom Google Alert about my corner of the software world, meaning that anytime Google finds a new web page that mentions Subversion or version control, I get notified about it. You would be simply astounded at the number of new blog posts I see every day that essentially say “Hey, maybe our team should start using version control! Subversion seems pretty usable, have you tried it yet?” I see close to zero penetration of DVCS into this world: that’s the next big challenge for DVCS as it matures.

Others have pointed out that while I scream for DVCS evangelists not to thoughtlessly trash centralized systems like Subversion, I’m busy thoughtlessly trashing DVCS! I certainly hope this isn’t the case; I’ve used Mercurial a bit here and there, and perhaps my former assertions are simply based on old information. I had previously complained that most DVCS systems don’t run on Windows, don’t have easy access control, and don’t have nice GUI clients. Looking at Wikipedia, I sure seem to be wrong. 🙂

Version Control and “the 80%”

Posted on Tuesday, 16 October, 2007

11/17/07: Before posting an angry comment about this post, please see the follow-up post!

Disclaimer: I’m going to make some crazy sweeping generalizations — ones which are based on my 12 years of observing the software development industry. I’m aware that I’m drawing some oversimplified stereotypes, but I think most of my peers who work in this industry will nod their head at some point, able to see the grains of truth in my characterizations.

Two Types of Programmers

There are two “classes” of programmers in the world of software development: I’m going to call them the 20% and the 80%.

The 20% folks are what many would call “alpha” programmers — the leaders, trailblazers, trendsetters, the kind of folks that places like Google and Fog Creek software are obsessed with hiring. These folks were the first ones to install Linux at home in the 90’s; the people who write lisp compilers and learn Haskell on weekends “just for fun”; they actively participate in open source projects; they’re always aware of the latest, coolest new trends in programming and tools.

The 80% folks make up the bulk of the software development industry. They’re not stupid; they’re merely vocational. They went to school, learned just enough Java/C#/C++, then got a job writing internal apps for banks, governments, travel firms, law firms, etc. The world usually never sees their software. They use whatever tools Microsoft hands down to them — usually VS.NET if they’re doing C++, or maybe a GUI IDE like Eclipse or IntelliJ for Java development. They’ve never used Linux, and aren’t very interested in it anyway. Many have never even used version control. If they have, it’s only whatever tool shipped in the Microsoft box (like SourceSafe), or some ancient thing handed down to them. They know exactly enough to get their job done, then go home on the weekend and forget about computers.

Shocking statement #1: Most of the software industry is made up of 80% programmers. Yes, most of the world is small Windows development shops, or small firms hiring internal programmers. Most companies have a few 20% folks, and they’re usually the ones lobbying against pointy-haired bosses to change policies, or upgrade tools, or to use a sane version-control system.

Shocking statement #2: Most alpha-geeks forget about shocking statement #1. People who work on open source software, participate in passionate cryptography arguments on Slashdot, and download the latest Git releases are extremely likely to lose sight of the fact that “the 80%” exists at all. They get all excited about the latest Linux distro or AJAX toolkit or distributed SCM system, spend all weekend on it, blog about it… and then are confounded about why they can’t get their office to start using it.

I will be the first to admit that I completely lost sight of the 80% as well. When I was first hired by CollabNet to “design a replacement for CVS” back in 2000, my two collaborators and I were really excited. All the 20% folks were using CVS, especially for open source projects. We viewed this as an opportunity to win the hearts and minds of the open source world, and to especially attract the attention of all those alpha-geeks. But things turned out differently. When we finally released Subversion 1.0 in early 2004, guess what happened? Did we have flocks of 20% people converting open source projects to Subversion? No, actually, just a few small projects did that. Instead, we were overwhelmed with dozens of small companies tossing out Microsoft SourceSafe, and hundreds of 80% people flocking to our user lists for tech support.

Today, Subversion has gone from “cool subversive product” to “the default safe choice” for both 80% and 20% audiences. The 80% companies who were once using crappy version control (or no version control at all) are now blogging to one another — web developers giving “hot tips” to each other about using version control (and Subversion in particular) to manage their web sites at their small web-development shops. What was once new and hot to 20% people has finally trickled down to everyday-tool status among the 80%.

The great irony here (as Karl Fogel points out in one of his recent OSCON slides) is that Subversion was originally intended to subvert the open source world. It’s done that to a reasonable degree, but it’s proven far more subversive in the corporate world!

Enter Distributed Version Control

In 2007, Distributed Version Control Systems (DVCS) are all the rage among the alpha-geeks. They’re thrilled with tools like git, mercurial, bazaar-ng, darcs, monotone… and they view Subversion as a dinosaur. Bleeding-edge open source projects are switching to DVCS. Many of these early adopters come off as either incredibly pretentious and self-righteous (like Linus Torvalds!), or are just obnoxious fanboys who love DVCS because it’s new and shiny.

And what’s not to love about DVCS? It is really cool. It liberates users, empowers them to work in disconnected situations, makes branching and merging into trivial operations.

Shocking statement #3: No matter how cool DVCS is, anyone who tells you that DVCS is perfect for everyone is completely out of touch with reality.

Why? Because (1) DVCS has tradeoffs that are not appropriate for all teams, and (2) DVCS completely blows over the head of the 80%.

Let’s talk about tradeoffs first. While DVCS dramatically lowers the bar for participation in a project (just clone the repository and start making local commits!), it also encourages anti-social behavior. I already wrote a long essay about this (see The Risks of Distributed Version Control). In a nutshell: with a centralized system, people are forced to collaborate and review each other’s work; in a decentralized system, the default behavior is for each developer to privately fork the project. They have to put in some extra effort to share code and organize themselves into some sort of collaborative structure. Yes, I’m aware that a DVCS is able to emulate a centralized system; but defaults matter. The default action is to fork, not to collaborate! This encourages people to crawl into caves and write huge new features, then “dump” these code-bombs on their peers, at which point the code is unreviewable. Yes, best practices are possible with DVCS, but they’re not encouraged. It makes me nervous about the future of open source development. (Maybe the great liberation is worth it; time will tell.)
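To make the “defaults matter” point concrete, here’s a rough sketch of the two default workflows side by side (the repository URLs are invented for illustration):

    # Centralized (Subversion): one shared repository, and commits are immediately public
    $ svn checkout http://svn.example.com/repos/project/trunk project
    $ cd project
    $ svn update                       # pick up everyone else's changes
    # ...edit files...
    $ svn commit -m "fix the widget"   # lands on the server, visible to the whole team

    # Decentralized (Mercurial): your clone is effectively a private fork
    $ hg clone http://hg.example.com/project
    $ cd project
    $ hg pull                          # fetch new changesets into your clone...
    $ hg update                        # ...then bring your working files up to date
    # ...edit files...
    $ hg commit -m "fix the widget"    # recorded only in your local repository
    $ hg push                          # sharing is a separate, deliberate step

Nothing stops a Mercurial user from pushing after every commit, of course; the point is simply which behavior falls out by default.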

Second, how about all those 80% folks working in small Windows development shops? How would we go about deploying DVCS to them?

  • Most DVCS systems don’t run on Windows at all.
  • Most DVCS have no shell or GUI tool integrations; they’re command-line only.
  • Most 80% coders find TortoiseSVN full of new, challenging concepts like “update” and “commit”. They often struggle to use version control at all; are you now going to teach them the difference between “pull” and “update”, between “commit” and “push”? Look me in the eyes and say that with a straight face.
  • Corporations are inherently centralized entities. Not only is their power-structure centralized, but their shared resources are centralized as well.
    • Managers don’t want 20 different private forks of a codebase; they want one codebase that they can monitor all activity on.
    • Cloning a repository is bad for corporate security. Most corporations have an absolute need for access control on their code; sensitive intellectual property in specific parts of the repository is only readable/writeable by certain teams. No DVCS is able to provide fine-grained access control; the entire code history is sitting on local disk.
    • Cloning is often unscalable for corporations. Many companies have huge codebases — repositories which are dozens or even hundreds of gigabytes in size. When a new developer starts out, it’s simply a waste of time (and disk space) to clone a repository that big.

Again, I repeat the irony: Subversion was designed for open source geeks, but the reality is that it’s become much more of a “home run” for corporate development. Subversion is centralized. Subversion runs on Windows, both client and server. Subversion has fine-grained access control. It has an absolutely killer GUI (TortoiseSVN) that makes version control accessible to people who barely know what it is. It integrates with all the GUI IDEs like VS.NET and Eclipse. In short, it’s an absolutely perfect fit for the 80%, and it’s why CollabNet is doing so well in supporting this audience.
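To give a flavor of that fine-grained access control: Subversion’s path-based authorization rules live in a plain authz config file. A minimal sketch (the group name and paths are invented for the example) looks something like this:

    [groups]
    payments-team = alice, bob

    # Everyone can read most of the tree...
    [/]
    * = r

    # ...but only one team can see or modify this directory.
    [/payments]
    * =
    @payments-team = rw

There’s no equivalent of this in a DVCS, because every clone carries the entire repository with it.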

DVCS and Subversion’s Future

Most Subversion developers are well aware of the cool new ground being broken by DVCS, and there’s already a lot of discussion out there to “evolve” Subversion 2.0 in those directions. However, as Karl Fogel pointed out in a long email, the challenge before us is to keep Subversion simple, while still co-opting many of the features of DVCS. We will not forget about the 80%!

Subversion 1.5 is getting very close to a release candidate, and this fixes the long-standing DVCS criticism that “Subversion merging is awful”. Branching is still a constant-time operation, but you can now repeatedly merge one branch to another without searching history for the exact arguments you need. Subversion automatically keeps track of which changes you’ve merged already, and which still need merging. We even allow cherry-picking of changes. We’ve also got nice interactive conflict resolution now, so you can plug in your favorite Mercurial merging tool and away you go. A portable patch format is also coming soon.
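Cherry-picking, for instance, is now just a matter of naming the change you want. A minimal sketch (the revision number is invented, and trunkURL stands in for your trunk’s URL):

    $ svn merge -c 3711 trunkURL
    --- Merging r3711 into '.':
    U    button.c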

For Subversion 2.0, a few of us are imagining a centralized system, but with certain decentralized features. We’d like to allow working copies to store “offline commits” and manage “local branches”, which can then be pushed to the central repository when you’re online again. Our prime directive is to keep the UI simple, and avoid the curse of DVCS UIs (which often have 40, 50, or even 100 different commands!).

We also plan to centralize our working copy metadata into one place, which will make many client operations much faster. We may also end up stealing Mercurial’s “revlog” repository format as a replacement for the severely I/O bottlenecked FSFS format.

A Last Plea

Allow me to make a plea to all the DVCS fanatics out there: yes, it’s awesome, but please have some perspective! Understand that all tools have tradeoffs and that different teams have different needs. There is no magic bullet for version control. Anyone who argues that DVCS is “the bullet” is either selling you something or utterly forgetting about the 80%. They need to pull their head out of Slashdot and pay attention to the rest of the industry.

Update, 10/18/07: A number of comments indicate that my post should have been clearer in some ways. It was never my intent to say that “Subversion is good enough for everyone” or that “most of the world is too dumb to use DVCS, so don’t use it.” Instead, I’m simply presenting a checklist — a list of obstacles that DVCS needs to overcome in order to be accepted into mainstream corporate software development. I have no doubt that DVCS systems will get there someday, and that will be a great thing. And I’m imploring DVCS evangelists to be aware of these issues, rather than running around thoughtlessly trashing centralized systems. 🙂