"OOP Is Much Better in Theory Than in Practice"

Two days ago DevX published a three-page rant by Richard Mansfield about how OOP has failed. It's amazing how many words Richard needs to say essentially nothing, or at least not much more than what the title already says. OOP sucks because, well, because it does! You can find the "rant here":http://www.devx.com/DevX/Article/26776/0/page/1; read it and go through it with me.

Certainly for the great majority of programmers—amateurs working alone to create programs such as a quick sales tax utility for a small business or a geography quiz for Junior—the machinery of OOP is almost always far more trouble than it's worth. OOP just introduces an unnecessary layer of complexity to procedure-oriented design. That's why very few programming books I've read use OOP techniques (classes, etc.) in their code examples. The examples are written as functions, not as methods within objects. Programming books are trying to teach programming—not the primarily clerical and taxonomic essence of OOP. Those few books that do superimpose the OOP mechanisms on their code are, not surprisingly, teaching about the mysteries of OOP itself.

There's a difference between design and implementation. I view object-oriented programming as a necessary "evil", if you will, to make object-oriented design possible. We need object-oriented design. People think in certain ways: they think about people, objects, relations, interactions. OOD(Object-Oriented Design) does a fairly good job of porting these concepts into the computer. It makes large projects manageable. Why didn't we need it earlier? Because projects weren't as big back then. Earlier, doing calculations on a computer was considered fancy; now everything a company is, does and owns is inside computers. The human brain is very limited: we can't hold millions of lines of code in our minds and reason about them. OOD helps us with that. To implement these designs we need languages that support the design concepts: object-oriented languages. In those languages the actual programming takes place. Why don't programming books use many OOP examples? Because they're not about object-oriented design, but about programming. They assume the designing has been done. Actual programming deals with loops, if-statements and recursion, not with extensive class hierarchies.
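
To make that distinction concrete, here's a minimal, made-up Java sketch (the class and its names are mine, not from the article): the class is the design artifact, but the body of its method is still plain procedural code, exactly the loops and if-statements a programming book teaches.

```java
// Hypothetical example: the class belongs to the design,
// the method body is ordinary procedural programming.
public class SalesReport {
    private final double[] amounts;

    public SalesReport(double[] amounts) {
        this.amounts = amounts;
    }

    // Inside the method: a loop and an if-statement, nothing more exotic.
    public double totalAbove(double threshold) {
        double total = 0.0;
        for (double amount : amounts) {
            if (amount > threshold) {
                total += amount;
            }
        }
        return total;
    }
}
```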

If your application is simple and you don't think you need the added complexity of object orientation, then don't use it. Be pragmatic.

Consider the profound contradiction between the OOP practices of encapsulation and inheritance. To keep your code bug-free, encapsulation hides procedures (and sometimes even data) from other programmers and doesn't allow them to edit it. Inheritance then asks these same programmers to inherit, modify, and reuse this code that they cannot see—they see what goes in and what comes out, but they must remain ignorant of what’s going on inside. In effect, a programmer with no knowledge of the specific inner workings of your encapsulated class is asked to reuse it and modify its members. True, OOP includes features to help deal with this problem, but why does OOP generate problems it must then deal with later? All this leads to the familiar granularity paradox in OOP: should you create only extremely small and simple classes for stability (some computer science professors say yes), or should you make them large and abstract for flexibility (other professors say yes)? Which is it?

First off: "to keep your code bug-free"? Who says OOP exists to keep software bug-free? I doubt that was ever a design goal. The goal was to make large projects manageable by dividing them into small bits that fit into people's brains. The fact that the user of a particular method does not know what's going on inside is a good thing; they have other things to worry about: getting things done. If you have a code base of a million lines, the last thing you want is to have to understand every piece of code you're touching. That's called abstraction.
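
A tiny, hypothetical Java sketch of what I mean by abstraction (the class names and the 19% rate are invented for illustration): the caller only touches the public method; whatever bookkeeping happens inside is none of their business.

```java
// Hypothetical sketch: the caller relies on the public contract,
// not on the internal details the class hides.
public class TaxCalculator {
    private static final double RATE = 0.19; // internal detail, hidden from callers

    public double taxFor(double amount) {
        // However this is computed internally, callers don't have to care.
        return amount * RATE;
    }
}

class Checkout {
    public static void main(String[] args) {
        TaxCalculator calc = new TaxCalculator();
        // One line at the call site: get things done, ignore the internals.
        System.out.println(calc.taxFor(100.0));
    }
}
```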

And as for the OOP paradox, which is supposed to follow logically from the bit before: that's just a design decision. As if you never have to make decisions when programming in a procedural language...

A frequent argument for OOP is it helps with code reusability, but one can reuse code without OOP—often by simply copying and pasting. There's no need to superimpose some elaborate structure of interacting, instantiated objects, with all the messaging and fragility that it introduces into a program.

Copying and pasting sounds like a good idea? What if someone finds a bug in a piece of code that's reused in fifty other places? Copy it around all over again?
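
A minimal sketch, in Java and with invented names, of the alternative: put the shared code in one place and call it from everywhere, so a bug fix is a single edit instead of fifty.

```java
// Hypothetical sketch: one shared method instead of fifty pasted copies.
// If the rounding turns out to be buggy, it is fixed in exactly one place.
public class Money {
    public static double roundToCents(double amount) {
        return Math.round(amount * 100.0) / 100.0;
    }
}

class Invoice {
    double total(double net, double tax) {
        return Money.roundToCents(net + tax); // reuse, no copy
    }
}

class Payslip {
    double net(double gross, double deductions) {
        return Money.roundToCents(gross - deductions); // same single implementation
    }
}
```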

Further, most programming is done by individuals. Hiding code from oneself just seems weird. Obviously, some kind of structure must be imposed on people programming together in groups, but is OOP—with all its baggage and inefficiency—the right solution?

Hiding code people don't have to see makes their lives easier. When I download a C library somewhere, I first have to figure out which functions belong together and which ones I'm supposed to call myself, as opposed to the ones that are only called by other functions. With OOP libraries I can see which pieces belong together and which I am supposed to call, because the ones I'm not supposed to call are hidden from me.
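
As a hypothetical illustration (not any real library): visibility modifiers make the intended entry point obvious. The public method is the one you call; the helpers and their ordering are private, so there's nothing to reverse-engineer.

```java
// Hypothetical sketch: the public surface tells you what to call;
// the helpers and their calling order are private, so you can't call them wrongly.
public class ReportExporter {
    public void export(String path) {
        open(path);
        writeRows();
        close();
    }

    // The pieces that only "belong together" internally stay hidden:
    // they never show up for the user of the class, so there is nothing to guess.
    private void open(String path) { /* ... */ }
    private void writeRows()       { /* ... */ }
    private void close()           { /* ... */ }
}
```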

Is OOP with all its "baggage and inefficiency" the right solution? It may well not be perfect, but it's the best we've got. Definitely better than procedural programming, I would say.

However, professors of programming have taken the compartmentalization that GUI objects require to extremes. UI programming certainly benefits when programmers subdivide their code into OOP-like components, but it doesn't logically follow that they must extend this modus operandi to all other aspects of programming.

Why not? It may not be a one-size-fits-all world, but that doesn't mean one size doesn't fit many.

Even after years of OOP, many—perhaps most—people still don't get it. One has to suspect that we're dealing with the emperor's new clothes when OOP apologists keep making the same excuses over and over: you don't understand OOP yet (it takes years of practice; you can't simply read about it in a book); your company isn't correctly implementing OOP, that's why you're facing so many programming delays and inefficiencies; you haven't transformed your databases into OOP-style databases; and on and on. The list of excuses why OOP isn't doing what it promises is quite long. The list of excuses is so long, in fact, that I've begun to wonder whether OOP is simply the latest fad, like Forth, Pascal, Delphi, and other programming technologies before it.

How were those "technologies" fads? Delphi (which is an object-oriented language and environment, so how can something that is itself object-oriented be a passing fad the way OOP supposedly is?) seems to hold up pretty well, and Pascal has taught many people to program well (its intended purpose). I'm not familiar with Forth. Sure, there are problems with OOP, but how many more problems would there be without it?

Also, components service a predictable input, usually from a single source—the user. OOP objects in real-world business situations must receive data from multiple streams, in multiple dynamic configurations (e.g., invoices must be reconciled with order forms and inventory). Using OOP's noun metaphor, too many nouns are operating in such a situation for the whole thing to be efficiently categorized as a customer class or an accounting class or some other single class. Instead, you must create additional mechanisms to permit the various nouns to communicate with each other. Inflation (code bloat) quickly rears its ugly head. Worse, many successful businesses are quite dynamic, changing their practices and structures rapidly and often. This dynamic environment wreaks havoc on your "nouns" (object classifications). You can find yourself trying to fit things into categories more often than you're actually programming. Sound familiar?

That's all very nice, but is procedural programming going to solve all that? I doubt it.

Efficiency is the stated goal of C-style languages and OOP, but the result is too often the opposite:

* Programming has become bloated—ten lines of code are now needed where one used to suffice.
* Wrapping and mapping often use up programmer and execution time as OOP code struggles with various data stores.
* Massive API code libraries are "organized" into often-inexplicable structures, requiring programmers to waste time just figuring out where a function (method) is located and how to employ it.
* The peculiar, inhuman grammatical features in C++ and OOP's gratuitous taxonomies continue to waste enormous amounts of programming time.

Is really that much more code required than before? I don't think so. The initial code overhead may be somewhat bigger, but soon that stops mattering. The sole purpose of "wrapping and mapping" is clarity. Yes, it's an investment in code, but it pays off because the result is easier to understand and therefore easier to remember. The couple of extra CPU cycles don't really matter that much these days. The comment about massive API code libraries is rather absurd: how is something harder to find in a neatly nested class structure than in one massive pool of functions of which you know only one thing, namely that the function you need is in there somewhere? And the peculiar, inhuman grammatical features of C++? I'd never hold up C++ as a particularly clean object-oriented programming language. Object-oriented languages like Java and C# are much cleaner, and in my humble opinion not inhuman or peculiar at all. But that could just be my brainwashed head not knowing any better.
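
A small, contrived Java example of why a nested structure is easier to search than a flat pool of functions: you start from the object you already have, and only its methods are candidates.

```java
// Hypothetical sketch: with a class-structured API you follow the object's
// own methods; with a flat pool of functions you must know or guess the name.
public class Lookup {
    public static void main(String[] args) {
        String name = "mansfield";
        // The String object itself narrows the search: only String methods apply.
        System.out.println(name.toUpperCase());
        System.out.println(name.length());
        // In a flat C-style library the equivalents might be called
        // str_upper, strupr, to_upper_case, ... and nothing groups them for you.
    }
}
```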

But I have to admit, I agree. OOP is much better in theory than in practice.

*Everything* is.