Tom responded to my Windows vs Linux comparison:
"Unix services are just scripts in the init.d directory which can be copied to other systems or backed up to removable media; you can edit them over ssh with any text editor. But with Windows you have to register them somewhere obscure, so they end up somewhere in the registry, and you manage them through MMC." What's obscure about that? In Windows it's quite usual that you run an installer to install software, and installing the service is done automatically for you; whatever happens, it's done for you and you don't have to know what's happening. (That's fine if it works all right, but a problem if it doesn't.)
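Tom's point that a Unix service is just an ordinary file is easy to sketch. The service name and paths below are made up for the demonstration (a real script would live in /etc/init.d and be linked into the rc directories), but the mechanics are the same:

```shell
# Stand-in for an init.d-style service script, created in /tmp so the
# demo needs no root. The real ones live in /etc/init.d.
mkdir -p /tmp/initd-demo /tmp/initd-backup
cat > /tmp/initd-demo/myservice <<'EOF'
#!/bin/sh
# start/stop stub for a hypothetical service
case "$1" in
  start) echo "myservice: starting" ;;
  stop)  echo "myservice: stopping" ;;
  *)     echo "usage: $0 {start|stop}" ;;
esac
EOF
# "Backing up" or moving the service to another box is a plain file copy:
cp /tmp/initd-demo/myservice /tmp/initd-backup/
sh /tmp/initd-demo/myservice start
```

No registry, no MMC: the same `cp` that backs up any other file backs up the service.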
"Sometimes MMC or the registry gets corrupted, but that's even minor. You have to use a graphical console to administer the services, which demands a lot of resources. Running remote desktop or VNC or something sure is a lot more of a drag than telnet or ssh, where you notice no slowdown." Yes, it uses resources, but that many? Not compared to the ones available. And using remote desktop even through a not-so-fast connection (I used it with 6 kB/s upstream) works quite well thanks to compression, and it's all graphical and therefore as easy to use as if you were physically sitting behind the machine (you can even mount your local hard drives for use on the remote machine).
"- configuration
Again, text files are in my opinion a much better way to store configuration, and MS seems to agree, because since .NET they are shifting towards text-based configuration (XML is text too) instead of cramming everything in the registry. The registry is so fragile that if you break something, everything is broken. Flat text files act as separate entities, so an error in one of them doesn't have a chance to break something else. Uninstalling something, for example, leaves tracks in the registry which are damn hard to find (no, regclean.exe won't cut it)." The registry might not be the greatest way to store settings (it probably is not, as it is indeed quite fragile). Text files are fine with me; the problem with the current ones in Linux/FreeBSD etc. is their consistency. Every file looks different: /etc/fstab looks completely different from the Apache configuration files. If they all used a consistent (XML?) format, it would be easier both for the die-hard people wanting to edit the files by hand and for the tools that read the settings and expose them through some UI.
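The inconsistency is easy to see by putting two formats side by side. Both fragments are typical textbook examples, not taken from any particular system:

```
# /etc/fstab: positional, whitespace-separated columns
/dev/hda1   /       ext3    defaults    1 1
/dev/hda2   swap    swap    defaults    0 0

# Apache httpd.conf: named directives and nested block sections
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www
</VirtualHost>
```

Two config files on the same machine, and neither the column layout, the comment conventions, nor the nesting rules carry over from one to the other.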
"You got a point. The WSH/COM/COM+/DNA architecture is pretty neat. What I dislike about Windows and scripts is that they don't have to have the executable bit set. I can mail you a malicious script in both Windows and Linux, but in Linux it won't be executable. WSH gets executed everywhere; you can append it to the end of a video or embed it in a mail. Even though WSH is neat, it adds greatly to the insecurity of Windows." Hmm, I always wondered why unices had the execute bit, but this is indeed a valid reason for it (even though it might not be the original one).
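The execute-bit behaviour Tom describes can be demonstrated in a few lines (the file name is arbitrary):

```shell
# A freshly written script (like one saved from a mail attachment) is
# created without the execute bit, so invoking it directly is refused.
printf '#!/bin/sh\necho hello from script\n' > /tmp/mailed.sh
/tmp/mailed.sh 2>/dev/null && echo "ran anyway" || echo "blocked: no execute bit"
chmod +x /tmp/mailed.sh   # only an explicit chmod makes it runnable
/tmp/mailed.sh
```

The first invocation fails with "permission denied"; only after the deliberate `chmod +x` does the script run, which is exactly the extra step a mailed WSH script never has to pass.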
"- installing software.
This one I completely disagree on. If you know where to look, installing software in Linux is a whole lot easier than in Windows. Maybe it's just me, but I find it easier to launch Synaptic and install whatever, or type emerge whatever, than to search around warez sites for a couple of days (which is frankly how most Windows users get software)." Synaptic is great if it contains the software you need, in the version you need. You could just as easily write a tool that holds a repository of Windows software and can download it for you and run the installer. In many cases the software is just not available through APT (or Synaptic, for that matter), but if it is (and the collection of software is growing), then it's very easy to both install and uninstall.
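For reference, the APT workflow being praised above boils down to three commands. The package name apache2 is just an example, and the install/remove lines are commented out because they need root privileges and a configured repository; the snippet itself only checks whether APT is present:

```shell
# Guarded so the snippet runs anywhere; the real workflow needs a
# Debian-style system and root privileges.
if command -v apt-get >/dev/null 2>&1; then
  echo "APT is available"
else
  echo "APT not present on this system"
fi
# apt-get update            # refresh the package index from the repositories
# apt-get install apache2   # resolve dependencies, download, install
# apt-get remove apache2    # uninstall through the same package database
```

Because install and remove go through the same package database, uninstalls don't leave the stray tracks that Windows installers tend to scatter through the registry.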
"- File system
I like the choice in filesystems in Linux: one is faster for small files, another is faster but less reliable, and if you don't want to know or care about it, you can safely stick with whatever the distro has chosen as default. In Windows, NTFS just sucks. The only advantage Windows has over Linux in this area is that it comes with ACLs enabled by default. This slows things down for home users that don't use the feature, but with most Linux distros ACLs are a pain." NTFS just sucks? OK, you convinced me...
"- performance monitoring
If you monitor performance on your boxes you will have noticed that Windows starts to stutter if you go over 70% load. In Linux (while compiling, for example) the load goes over 2.0 (which means you would be using twice the power of your CPU if you could), so every single CPU cycle is used, but still the system runs fine. I would even say monitoring tools in Linux are way beyond whatever Windows Performance Counters offer. Simple yet vital tools such as lsof and Ethereal (a network packet capture tool) are not included in any Windows release." I wasn't talking about handling high loads in this bit. I think there are tools for Windows too for monitoring network packets and all that, but I must admit I never looked for them; I never needed them. The thing I was talking about in this bit was the consistent way (again, consistency is the major thing that distinguishes Windows from Linux) you can look at the performance of different pieces of both the OS and the services running on it (through perfmon).
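The load figures Tom quotes are trivial to read on any Linux box:

```shell
# The first three fields of /proc/loadavg are the 1-, 5- and 15-minute
# load averages: the mean number of processes that are runnable (or in
# uninterruptible sleep). On a single CPU, a sustained 2.0 means twice
# as much runnable work as the CPU can execute.
cat /proc/loadavg
uptime    # reports the same three averages at the end of its output line
```

So a load over 2.0 during a compile means the run queue is saturated, yet, as the quote says, an interactive shell on the same box still responds fine.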
"If you stick to Red Hat/GTK2 apps on a Red Hat/GNOME box, or KDE/Qt on Mandrake or SuSE (defaults all the way), things look VERY integrated. More so than in Windows (ever compared the Office XP menus with the Notepad that comes with XP?). If you start mixing toolkits, of course you will notice different styles, but still, most of them can be themed to look consistent." In recent Red Hat and SuSE releases I indeed noticed that GTK2 and KDE apps can be made to look more alike. That's an improvement, but still, on many Linux systems every application looks different, not only in the widgets it uses but also in the way it lays out the screen. One hosts all windows within its main window (Opera), another uses separate windows for each document (OpenOffice), and some use a separate window for every part of the GUI (The GIMP). It's all just so different.
"If we're talking servers the Linux GUI is the clear winner because you don't need it. Running a GUI on a server that doesn't need a monitor because nobody ever approaches it just wastes memory and CPU." Maybe, but I hope that the fact that an OS has a GUI running isn't enough reason not to use it.
"You say that Linux users don't run as root while Windows users do, but that's not due to some policy. Microsoft's best practices require you to run as a regular or power user too, but in Windows "run as" is not half as good as "su" or "sudo". Last time I tried, I couldn't run Explorer from the "run as" menu. The "run as" feature is simply too buggy to be used on a regular basis, and too buggy to allow the box's administrator to log in as a regular user for normal tasks." That could be; I must admit I don't know (I run Windows as an Administrator myself too :o)
"In Unix you can install stuff in your home dir without switching to root, and you can do pretty much everything you need when running as user, in windows it's a whole different story." That indeed is a huge benefit of unices.
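A sketch of that per-user install pattern: the classic autotools sequence with a prefix inside your home directory. The configure/make lines are commented out here because they need an actual source tree; the rest just shows that the "install location" is an ordinary directory you own:

```shell
# Classic pattern for installing software into your home directory,
# no root required:
#   ./configure --prefix="$HOME/local"
#   make && make install
mkdir -p "$HOME/local/bin"      # the "prefix" is just a directory you own
PATH="$HOME/local/bin:$PATH"    # make the per-user binaries findable
echo "$PATH" | tr ':' '\n' | head -n1
```

Nothing outside $HOME is touched, so no su, no sudo, and no installer asking for an administrator password.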
"Also, it's not just users being able to install everything that brings insecurity. Almost every service runs as SYSTEM (it has to), which makes an exploit in a service almost automatically a remotely exploitable root exploit. No home user uses RPC over TCP/IP, but still it allows for remote takeover of the system; same for SQL Server/MSDE. Most users don't know they have a database server running, let alone patch it, but if the service gets compromised the attacker has full access." True.
"- Windows on a server
Are you kidding me? And reboot after each update? Sorry, can't afford the downtime. But seriously, Windows on a server is just starting out. You wouldn't seriously prefer a Microsoft web server over Apache." I wouldn't be so sure.
"First they need to improve automation, have a decent shell and command-line tools so you can do stuff over ssh; then they need to include updates for everything through Windows Update (up2date, for example, updates MySQL, Apache, ...) so a single scheduled task keeps your system updated." Why SSH if you have remote desktop (or Terminal Services)? And about the updates: it's called Windows Update :)
"Then their servers should stop crashing. If they got all of that, it would be nice to be able to configure servers without a GUI (which will be hard, because I think the GUI is nested pretty deep in their kernel). Just as Linux has a long way to go on the desktop, so has Windows on the server." Crashes, that's a classic. I've been using Windows 2000 and XP for what, two years now? They crashed on me only once or twice. And why the need for console access to server settings? (You could do that using WSH, by the way.)
"- is there hope for Linux?
That's a bit of a strong statement. I could finish my rant with "is there hope for Windows?" but I won't. I still believe each has its place in IT-land, and both should interoperate a bit better (through standards). I think there's hope for both (and Apple, IRIX, ... maybe even SCO ;)).
Linux is ahead in the server space and catching up quickly on the desktop, so I would think there is definitely hope for Linux." :) That was just a reaction to the section title, not the text underneath it ;) The thing the whole article comes down to is the consistency problem Linux has. Is there hope for that?