Well well, hasn't this gone off topic! To be fair I find myself doing the very same thing all the time too, c'est la vie.
Getting back to the issue of Net Neutrality... which actually has nothing to do with dems/repubs/health insurance/big evil gov/tinfoil hat conspiracies... this is how I see it. Think of it as a battle between application developers and network providers. Also think of it as a battle between small and large ISP's. Lastly, think of it as the clash that happens when technical realities, which are constantly changing, become entwined with political and commercial realities, which are also constantly changing. Unfortunately, Chuck (Charles?), there is no one side you should be on definitively, because this is an issue with many facets and arguments on every one of them.
First to the application side of things. When the internet first started expanding into homes and small business, it could be summed up with two happy little words:
Best Effort.
90% of home users used the internet mostly for downloading things: webpages, e-mails and various files. Most of these transactions involved your computer sending out a tiny bit of info requesting a file (your upload) and getting back a lot of data from a server somewhere (your download). For years, the majority of users only cared about getting good download speeds, since the casual user didn't upload terribly much. Furthermore, most of the stuff being downloaded wasn't dependent on arriving in order. Whether the data arrived out of order, or some got lost along the way and had to be re-sent, it really didn't matter, because eventually it would all arrive, get shuffled into the proper order and voilà, you're ready to rock.
Being a gamer yourself, I'm sure you're well aware that this idyllic world didn't last too terribly long, as the interwebs now carry all kinds of real-time-dependent traffic. For a voice conversation, whether you're using Skype or TeamSpeak or whatever, if the packets arrive badly out of order the audio might be all garbled. Also, if some of the traffic gets lost along the way, there's no sense in the far end resending it, because by the time it arrives, chances are good it would be far too out of sequence to do any good. So the traffic that got lost just falls into the void, and you might get dead patches in your conversation where the other party's voice drops off for a bit. These are issues that nobody really had to worry about with non-real-time applications like downloading a file via FTP.
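If it helps to see the difference in code, here's a toy Python sketch (completely made-up packet tuples, nothing real): the file download is happy to wait, re-request and reorder, while the voice playout just skips anything late or missing and leaves a gap.

```python
# Toy illustration: (sequence_number, payload) tuples, some out of order, one lost.
received = [(1, "a"), (3, "c"), (2, "b"), (5, "e")]   # packet 4 got lost along the way

# File download (TCP-style): order on the wire doesn't matter; wait, re-request
# anything missing, then reassemble at the end.
def reassemble_file(packets, expected):
    got = dict(packets)
    missing = [seq for seq in range(1, expected + 1) if seq not in got]
    # Real TCP would retransmit; here we just report what we'd be waiting for.
    return "".join(got[s] for s in sorted(got)), missing

# Voice playout (real-time): anything late or lost is useless by the time it
# could be resent, so play what arrived and leave a gap.
def playout_voice(packets, expected):
    got = dict(packets)
    return "".join(got.get(seq, "_") for seq in range(1, expected + 1))

print(reassemble_file(received, 5))   # ('abce', [4])  -> we'd wait for packet 4
print(playout_voice(received, 5))     # 'abc_e'        -> a dead patch in the audio
```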
Also, with applications like bittorrent, online gaming and the various "cloud computing" models out there, every home user is turning into a miniature datacenter that's trying to send and receive larger and larger amounts of traffic. Remember, back in the day most of the heavy lifting was done by servers, usually in nice big data centers with nice fat pipes, where the home user just needed a fraction of their download speed for the uploads that requested files. Now peer-to-peer has largely turned this on its head.
Finally, keep in mind that now every little widget and application being made is trying to get on the web for some reason or another. While in the past "surfing the web" might have meant one or two packet flows (or service flows, or connections, whatever you prefer to call them), now you can have multiple flows to and from your PC at any point in time, even if it's just sitting there idle, what with a dozen programs, from Windows to games to anti-virus software, automatically going out and seeking the latest updates and patches. And that's not even getting into the malicious stuff like malware and spyware.
So I've just given you a background that I'm sure you already know very well... what the heck does it have to do with net neutrality?
The application developers (the folks who make peer-to-peer software, voice clients and streaming video) want the network providers to not discriminate against the traffic their applications generate. Now, in principle this sounds absolutely reasonable and seems like a fair request.
In practice, every major ISP has learned the unfortunate truth that you can never have too much bandwidth. No matter how much money is spent on infrastructure and expanding the "pipes" that make up a network, if given free rein the end users, knowingly or not, will very quickly snap up the excess bandwidth, and then you run into the Tuesday night 9:00 PM frustration of having your game freeze up and become unplayable.
Now, when this happens the ISP usually understands full well that they've got a congested network, and they WILL try to fix it, but pesky realities like the cost of running a new fiber between cities or installing better optical gear mean that they have to justify the business case for doing so, and even if they do it right away, it will still take time to increase the bandwidth. Political problems can derail it too: what if the ISP can't secure the land-use rights to run a new fiber and instead has to lease a dark fiber from one of their competitors at an outrageous rate?
Anyway. Let's say that the ISP finally doubles the available bandwidth by turning up or expanding a link, and for a time life is good again. But the problem with bandwidth usage is that it almost seems to grow on an exponential curve (notice I said SEEMS, not actually does), whereas increasing the available bandwidth on the network happens in linear, fixed-size steps. If you've got a 45 Mbps link that gets fully congested, you turn up another 45 Mbps link and have just increased your available bandwidth by 100%. That second link gets congested, so you turn up another 45 Mbps link and you've just gained 50% more bandwidth. Then a month later the third link gets used up, so you turn up another 45 Mbps link and this time you've only increased by about 33%, and so on, until you realize that the time between congestion events on your links is getting shorter and shorter and you have to spend more money to turn up a more robust link.
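If you want to see that diminishing-returns arithmetic in one spot, here's a quick back-of-the-envelope loop (same hypothetical 45 Mbps links as above):

```python
# Each new 45 Mbps link adds the same absolute capacity, but a smaller and
# smaller percentage of what is already in service.
link = 45            # Mbps per link
total = link         # start with one congested link
for n in range(2, 6):
    gain = link / total * 100    # percent gained by turning up the next link
    total += link
    print(f"link {n}: {total} Mbps total, a {gain:.0f}% increase over before")
# link 2: +100%, link 3: +50%, link 4: +33%, link 5: +25% ...
```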
The point behind this ramble is that the network provider can't simply expand their network bandwidth infinitely as they will run into the walls created by cost, time and geography. At some point, the network provider needs to get into the network management game.
This is a very, very large part of the net neutrality argument.
Let's pretend you're the CEO of an ISP. At certain times of the week, your network is getting heavily congested in certain areas. While you put the ball in motion to expand certain links, the end users are nonetheless going to have to live with congestion and slower than normal speeds for some time. Could be months or longer.
What do you do in the meantime? You have several options. One is to let your network drop excess traffic randomly. This is what many people call the fairest option. Under this way of doing things, when the network is congested and the pipes are as full as they're going to get, routers just start randomly (or as randomly as a machine can get) discarding packets. They don't care what type of traffic it is. A packet that's part of your voice call is just as likely to be dropped as a packet that's part of my music download, which is just as likely to get dropped as a packet that's part of a web page, and so on.
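Here's a crude little sketch of that drop-at-random idea, purely for illustration; real routers use fancier schemes (RED and friends), so don't take this as how any actual box does it:

```python
import random

QUEUE_LIMIT = 8      # pretend the outbound link can only buffer this many packets
queue = []

def enqueue(packet):
    """Queue a packet for transmission; under congestion, drop purely at random."""
    if len(queue) < QUEUE_LIMIT:
        queue.append(packet)
        return True
    # Congested: toss either the newcomer or a random queued packet, with no
    # regard for whether it's voice, web or peer-to-peer traffic.
    victim = random.randrange(QUEUE_LIMIT + 1)
    if victim == QUEUE_LIMIT:
        return False                 # the new arrival is the unlucky one
    queue[victim] = packet           # an already-queued packet gets discarded instead
    return True

for i in range(20):
    enqueue({"flow": random.choice(["voice", "web", "p2p"]), "seq": i})
print(queue)   # a random mix; nobody got preferential treatment
```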
There are lots of other options on which multiple books have been written. Some ISP's crunch the numbers and decide to install a farm of webcache servers. Other ISP's (especially cable providers for reasons I won't get into here) decide to get into the traffic management game, particularly something called deep packet inspection.
This is where the ISP actually opens up the packets you're sending and looks at the data inside. Think of it like a traffic cop shining a flashlight inside a car instead of just looking at its license plate. The reasoning goes that by performing packet inspection, the network provider can decide that at peak times they're going to discard more of the traffic being used for peer-to-peer than they will for something like downloading web pages. On the other hand, they can also decide that traffic for voice applications (which hopefully is using a well-known protocol) will NOT be dropped under any circumstances and will therefore get through to the other side.
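Very roughly, the classification step looks something like the sketch below. The SIP and BitTorrent checks are loosely based on real, well-known signatures, but the ports, labels and policy notes are just me making up an example; real DPI gear matches against enormous signature libraries.

```python
# Toy "deep packet inspection": peek past the headers at the payload itself.
def classify(packet):
    port, payload = packet["dst_port"], packet["payload"]
    if port == 5060 or payload.startswith(b"SIP/2.0"):
        return "voice"     # protect: don't drop this during congestion
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "p2p"       # throttle or discard first when the pipe is full
    if port in (80, 443):
        return "web"       # default treatment
    return "unknown"       # can't tell; policy decides what happens to it

pkt = {"dst_port": 6881, "payload": b"\x13BitTorrent protocol" + b"\x00" * 8}
print(classify(pkt))       # -> 'p2p'
```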
Again, the highway analogy. Think of a really busy road where the cops have decided there are simply too many cars trying to use it at the same time. So, before you can get on the road, they look in your car. An ambulance needing to get to the city on the other end (in reality a phone call) would get preference and be allowed through, while a carful of teenagers (a bittorrent download) would be told to go home and try again later. The ambulance needs to get through right now or all is lost, but the carful of teenagers will grumble, come back in a bit and try again.
This is just an example, but it's what a lot of the larger ISP's do during primetime. They use deep-packet inspection to -TRY- and identify the type of traffic, then discriminate based on that to decide who can go and who can't, in order to keep the road from getting congested. Once it does get congested, you're back to the all-bets-are-off crapshoot.
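To put that kind of policy into code for the sake of argument, it might look something like this; the drop percentages are numbers I invented just to show the shape of the thing, as every ISP tunes (and guards) its own:

```python
import random

# Per-class drop probability applied only while the link is congested.
# These numbers are invented; they are not any real ISP's policy.
PEAK_DROP_POLICY = {"voice": 0.0, "web": 0.1, "p2p": 0.6, "unknown": 0.4}

def admit(packet_class, congested):
    if not congested:
        return True                       # plenty of room: everyone gets on the road
    return random.random() >= PEAK_DROP_POLICY.get(packet_class, 0.4)

print(admit("voice", congested=True))     # always True: the ambulance gets through
print(admit("p2p", congested=True))       # often False: come back and try again later
```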
I said they try to identify the type of traffic because nowadays there's a MASSIVE amount of traffic on the internet that is encrypted. This is everything from banking information to VPN traffic to bittorrent clients that are trying to mask their traffic from being identified as peer-to-peer. Most ISP's have a default policy of throttling encrypted traffic: anything they can't readily identify gets dialed back quite a bit. They don't kill it outright, but if you're trying to use a VPN client between home and office at primetime and it always slows to a trickle, that might be why.
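The throttling itself is usually some flavour of rate limiting. Here's a bare-bones token-bucket sketch applied to traffic the classifier couldn't identify; the roughly 1 Mbps cap is purely a placeholder figure, not anyone's actual policy:

```python
import time

class TokenBucket:
    """Bare-bones token bucket: lets unidentified traffic trickle through at a capped rate."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True       # within the trickle: forward it
        return False          # over the cap: drop it and let the sender back off

# Cap unidentified/encrypted flows at roughly 1 Mbps (a placeholder figure).
unknown_traffic = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=16_000)
print(unknown_traffic.allow(1500))        # a full-size packet squeaks through... for now
```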
So there you have the question: Should network providers be able to manage their networks as they deem fit or should they step back and allow all traffic the same preference regardless of congestion or other issues?
Also, there are always the security concerns: with deep-packet inspection an ISP could look into your traffic and see sensitive personal information, or see that you're downloading illicit or illegal content and call the cops, and so forth. Never mind that most personal information -should- be encrypted automatically by most programs today; the question is largely philosophical.
In truth, network providers are far too busy to ever bother themselves with knowing or caring about the actual contents of a single subscriber's traffic (although it feeds the paranoia of the tin-foil hatters), and chances are good that if they ever did, it would be after a knock on the door from a law enforcement agency that can legally compel them to anyway. Again, the concern is simply to figure out -what- type of traffic it is and then make a decision to pass it on or drop it accordingly.
Interestingly enough, ALL of the major networking hardware manufacturers (Cisco and Juniper, for example) are in favor of letting network providers manage their own networks, and all of the big ISP's want to be able to police their networks as they see fit.
On the other side of the argument you have virtually all of the small ISP's, who are mostly re-sellers off of the big ISP's networks, plus all of the application companies (BitTorrent, Amazon, Google, eBay), who say that the big ISP's should not be allowed by law to discriminate based on the type or amount of traffic crossing their networks, and should instead either expand their networks to keep up with demand or use other means of limiting congestion (means like usage caps or over-usage charges and so forth).
Anywho, I could go on at length, but I hope that helps you paint a better picture? Also please keep in mind that a lot of the above is very, very generalized; a lot of areas were covered broadly to try and give a general explanation, so as with all things in life, please take what I just said with a grain of salt!