This guide describes where to find more information
on the myths listed on the
Criticism of Object
Oriented Programming page. (Please wait until the
entire page has loaded, because many of these
links point to the middle or end of the cited page.)
- Myth: OOP is a proven general-purpose technique
OOP may shine in certain niches, but there is no
evidence that it improves software in general
in any measurable way.
See: Research Lacking,
Goals and Metrics
- Myth: OOP models the real world better
See: Modeling the Real World
- Myth: OOP makes programming more visual
Although most "serious" OO proponents don't believe this,
it is commonly held among managers and new programmers,
and, in my opinion, it is a big part of
the reason for OOP's rapid acceptance in the marketplace.
See: The GUI Connection
- Myth: OOP makes programming easier and faster
"Easier" is too vague a claim to study. My interpretation
is that one really means that OOP "better fits the way
humans think", which is covered below.
Regarding "faster", most proponents claim that rapid
development comes only after "reuse" builds up, so that
existing building blocks can be "snapped into place".
Very few will claim that it is faster from the ground up;
some concede it may even be slower. I see nothing about OOP that
makes for rapid development. Procedural components
have been around and successful for quite a while,
even in languages without "modern" procedural
features. However, the biggest difficulty in obtaining
a Lego block "snap-together" approach to software
building is related to the multifaceted interactions
needed in real business software. Relational techniques
do this better because the
connectivity and viewpoints can be
"calculated" on-the-fly rather than hard-wired
into code structures and rigid OO interfaces.
See:
Up-Front Modeling,
Reuse Notes and Wiki Discussions,
Components,
Abstraction
- Myth: OOP eliminates the "complexity" of
"case" or "switch" statements
This claim results in some of the longest and most
heated debates I have ever
been in. Some issues relate to the syntax of particular
languages, such as C's silly, archaic "break" statement.
Outside of language-specific syntax,
it appears to mostly be an issue of "aspect grouping". There are
(at least) two dimensions involved in the examples being compared,
and one must pick which dimension
to favor at the expense of the others. In other words, there is
no free lunch. Program code is pretty much a one-dimensional medium
and we are trying to project two dimensions onto it. Something
is going to have to be sacrificed. Further, IF or CASE statements
"degenerate" more gracefully when the things that typically
go wrong with
polymorphism in the real world happen.
Thus, IF/CASE is often better future-proofing.
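Here is a minimal Python sketch of the grouping tradeoff, using the
classic (hypothetical) shapes example. Neither version escapes the
grid of shape-times-operation combinations; each merely picks which
axis to group by:

    import math

    # OOP grouping: everything about one shape sits together (by noun).
    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return math.pi * self.r ** 2
        def perimeter(self):
            return 2 * math.pi * self.r

    class Square:
        def __init__(self, side):
            self.side = side
        def area(self):
            return self.side ** 2
        def perimeter(self):
            return 4 * self.side

    # CASE/IF grouping: everything about one operation sits together (by verb).
    def area(shape):
        if shape["kind"] == "circle":
            return math.pi * shape["r"] ** 2
        elif shape["kind"] == "square":
            return shape["side"] ** 2
        raise ValueError("unknown shape: " + shape["kind"])

    def perimeter(shape):
        if shape["kind"] == "circle":
            return 2 * math.pi * shape["r"]
        elif shape["kind"] == "square":
            return 4 * shape["side"]
        raise ValueError("unknown shape: " + shape["kind"])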
See: Single Choice Principle,
Shapes Example,
Aspects
- Myth: OOP reduces the number of
places that require changing
I have not seen any examples where OOP improves the change impact
score given a typical set of realistic changes. I do find, however,
that OOP textbooks and training
materials tend to mostly show
change impact examples that favor OOP, ignoring non-OO-favoring
changes. In other words, there seems
to be some bias in the type of
examples chosen for change impact illustrations. I cannot
determine whether this bias is intentional or a "copy-cat"
phenomenon.
It may be that by simply making people aware of
a particular problem pattern, they come to notice
it more than the other tradeoffs.
This is sort of analogous to a shampoo
that fixes
split-ends,
but makes hair less shiny.
If the ads keep emphasizing the split ends,
then people may pay more attention to that
than to the lack of shine, and vice versa.
See: Change Patterns,
Goals and Metrics,
Shapes,
Single Choice Principle
- Myth: OOP increases reuse (recycling of code)
Reuse is one of the biggest claims of OOP. However, reuse
claims have mellowed over the years as reality has set in.
Some OO fans don't even include it on their OO benefits list
anymore.
Some proponents have suggested that the biggest area of
OO reuse is with components. However, exactly how OOP
components are more "reusable" than non-OOP components
has never been made clear. Some will argue that OOP
components make it easier to "swap implementations".
However, I have yet to find a common business need for
swapping implementations to identical interfaces in
my domain. What manager is going to authorize having
two implementations of the same thing?
Most examples used to illustrate swapping are
either unrealistic, or outside the stated domain.
See: Wiki Reuse Discussion and Notes,
Components,
The Driver Pattern,
Dr. Dobb's Article - 'So Much OO, So Little Reuse'
- Myth: Most things fit nicely into hierarchical taxonomies
To their credit, many experienced OOP professionals agree that
heavy or deep hierarchies or taxonomies are over-hyped in
many OOP training materials. However, they appear unwilling
to do anything to reduce such hype. Some suggest that
such examples are "just meant to introduce concepts".
Many new OOP programmers still keep trying to force their
application models into animal-like or nature-like taxonomies
because they never saw any disclaimers or warnings.
OOP without lots of inheritance and hierarchies is
kind of "de-clawed" anyhow, making OOP less unique.
See: Hierarchy-Happy,
Subtype Proliferation Myth,
People Type Experiment
- Myth: Sub-typing is a stable way to model differences
The "delta rule" is that you can take an existing class and
only have to specify what is different to get new variations
on the same thing. Although appealing as an idea, it has
numerous problems in reality. One is the "Fragile Parent
Problem", where shops are too afraid to change a parent
class for fear of breaking lots of children. Second, there are
granularity issues, where doing something like overriding
only 1/3 of a method requires an interface overhaul
or results in duplicating the 2/3 that is the same.
Further, the differences of multiple variations
may not necessarily fit or
stay in a tree-like pattern, creating duplication or
mass code shuffling ("refactoring"). The real world
does not change in a tree-shaped manner for the most
part. At least not the one I observe.
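Here is a hypothetical Python sketch of the granularity problem:
the child class wants to change only the middle third of process(),
but the parent wrote it as one block.

    class Order:
        def process(self):
            print("logging order")      # step 1
            print("basic validation")   # step 2: the only part that differs
            print("posting order")      # step 3

    # Option A: override the whole method, duplicating the 2/3 that is the same.
    class RushOrder(Order):
        def process(self):
            print("logging order")      # duplicated
            print("rush validation")    # the actual difference
            print("posting order")      # duplicated

    # Option B: overhaul the parent into hook methods so the delta rule
    # can work -- which means changing the very parent class that other
    # children already depend on (the Fragile Parent Problem).
    class Order2:
        def process(self):
            self.log()
            self.validate()
            self.post()
        def log(self):
            print("logging order")
        def validate(self):
            print("basic validation")
        def post(self):
            print("posting order")

    class RushOrder2(Order2):
        def validate(self):
            print("rush validation")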
See: Method Granularity,
Fragile Parent Problem,
Hierarchy-Happy
- Myth: Self-handling nouns are more useful
than self-handling verbs
Some OO fans believe that grouping operations
by or around nouns (entities) is somehow
more magical or natural than grouping by tasks, which is the
traditional approach. I find grouping operations by noun rather
artificial because most operations in my domain involve multiple
nouns, and the nouns involved may change over time. Thus, there is
usually no "one right noun" to group it by.
actionA(noun1, noun2, noun3)   // procedural
noun1.actionA(noun2, noun3)    // OOP
I personally find the first approach more natural
in most situations.
See: Sentences,
The Noun Shuffle,
Aspects
- Myth: Each operation has one natural
"primary noun"
OO tends to believe that there is a natural association
between operations and the single "thing" that these
operations operate on. It is sometimes also called
"responsibility-driven design". I agree that there
is sometimes a natural relationship in lower-level,
device-driver-like APIs,
but much less so at the domain (business) level.
At the domain
level the relationship between nouns and verbs
is often shifting (dynamic) and many-to-many over
the longer run. The one-to-one or one-to-many association
often assumed by OO is artificial. Even if there is
a simple relationship at the time of design, it may
not last, resulting in a lot of code change to "upgrade"
to many-to-many.
The principles of information-hiding generally
dictate that we hide specifics of implementation.
If so, then the nouns used to carry out given
operations should also be hidden from the interface if possible.
However, OO requires one to "care" about which
primary noun an operation is attached to. In many
cases this is excess coupling of nouns and verbs. Example:
system.print(x);          // unnecessary noun
print(x);                 // how it should be
print(x, driver=system);  // optional different driver
(See prior "self-handling" myth for related links)
- Myth: OOP does automatic garbage-collection better
I am not quite sure what those making this claim mean by it,
because they rarely stick around to defend it. However, I have
heard it claimed at least twice.
In my designs, most of the significant
"instances" are in tables, and
not language-specific RAM structures. Thus, most complex
"garbage collection" is part of managing the
database or table files (depending on the kind of
database engine used). Also, garbage collection is a
very language-specific issue.
See: Memory Recovery Notes,
Table Oriented Programming
- Myth: Procedural cannot do components well
The literature implies that OOP creates a "clean wall"
between the component and the component user (reflecting
some of the vocabulary of "encapsulation"). However,
functions have been doing this for a long time. But, OOP
proponents claim that functions don't manage state
(inter-process data) very
well. Those making the claim have not given me enough
specific examples of functions failing to verify it.
(Although some of the few problems shown
were language-specific and not paradigm-specific.)
Also, tables as a communication
and state-management mechanism are under-used, especially
for in-house components. Part of this is the "big
iron" mentality of many current database engine
vendors.
See: Components,
Translation Layers
- Myth: OO databases can better store large, multimedia data
See: Meyer's OOSC2, page 1051 Response
- Myth: OODBMS are overall faster than RDBMS
See: Meyer's OOSC2, page 1051 Response, further down.
- Myth: OOP better hides persistence mechanisms
OO fans often say that OOP better "wraps the persistence (data storage)
mechanism". What they often mean is that it allegedly hides the database
implementation so that database vendors/engines can be swapped without
API change. In practice this is tough regardless of paradigm because the
feature sets differ too much between products. The same interface will likely not work as-is
under a new database system. It is one thing to wrap implementation, but
another to wrap interfaces. The old API may assume a service or feature
that the new database simply does not provide, for example.
Further, databases are not just about persistence. Think of a
database as a "state manager" (state is roughly the same as "data")
or "attribute manager",
and NOT a "persistence manager". If an RDBMS ran entirely in
RAM, the application developer would not know the difference.
If anything, RDBMS abstract (hide) the fact that disk persistence
is or isn't being done. This is something that "raw" OOP has a hard time
with. (Some RDBMS vendors are working on a RAM-only version of
their database engines for certain kinds of needs.)
OO is also a kind of state manager since it is customary to wrap
state within classes in OOP. OO and databases tend to fight over
the right to manage state. "Hiding" the state manager by re-writing
one from scratch in OOP is not hiding, but simply exchanging
one state manager (DB) for another (OOP code). OO fans often
incorrectly call databases a "low level implementation". Good database
systems are not low level, except maybe to those who do not
understand how to use them effectively.
Relational databases hide indexing and
ad-hoc or virtual cross-referencing techniques better than most
OOP, for example. And, OOP is notoriously messy or inconsistent with
many-to-many relationships.
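As a small illustration, here is a hypothetical sqlite3 sketch of
"calculating" a many-to-many viewpoint on the fly instead of
hard-wiring it into object references. The same link table serves
both directions of navigation with no code-structure changes:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE purchases(cust_id INT, prod_id INT);  -- many-to-many link
        INSERT INTO customers VALUES (1,'Alice'),(2,'Bob');
        INSERT INTO products  VALUES (10,'Widget'),(20,'Gadget');
        INSERT INTO purchases VALUES (1,10),(1,20),(2,10);
    """)

    # Viewpoint 1: products per customer.
    for (name,) in con.execute("""SELECT p.name FROM products p
                                  JOIN purchases x ON x.prod_id = p.id
                                  WHERE x.cust_id = 1"""):
        print(name)

    # Viewpoint 2: customers per product -- same tables, different query.
    for (name,) in con.execute("""SELECT c.name FROM customers c
                                  JOIN purchases x ON x.cust_id = c.id
                                  WHERE x.prod_id = 10"""):
        print(name)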
See Also: Why Procedural/Relational,
Standard Collection Features,
Invoice Detail Example
- Myth: C and Pascal are the best procedural can get
It is a fairly common debating flaw of OO proponents to use
features, or the lack of features, of specific
procedural languages as alleged evidence against
non-OOP paradigms. Comparisons between C and C++
are probably the most common. Many of the complaints from
OOP fans against procedural seem based on
the less flexible procedural languages and/or
lack of skill in using some of the more modern
features/techniques. Often they look more like
Luddites than I do.
See: Helpful Procedural/Relational Features
- Myth: Only OOP offers automatic initialization
This claim came from somebody committing the above sin: comparing
C to C++. Constructors and destructors are very similar
to event-driven-programming events (which are not
an OOP concept), and similar to
RDBMS "triggers", such as ON INSERT, ON UPDATE, and ON DELETE
triggers.
Initializers or defaults could also be built into
array or structure declarations in
procedural languages. The fact that they
are not found in most languages is not the fault of procedural, but of the
language vendor. (Whether they are a common need or not is another issue.)
OOP does not demand that constructors be present in a language any more than
procedural does. It is a feature that could be added to any paradigm.
(Ironically, OOP's hype killed progress in many procedural languages.)
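As a hedged sketch (hypothetical schema and names), here is
"automatic initialization" without OOP constructors: a table DDL
DEFAULT plays the same role as a constructor's defaults, and a
plain function can initialize a record procedurally.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE invoices(
        id      INTEGER PRIMARY KEY,
        status  TEXT DEFAULT 'open',         -- initialized by the engine
        created DATE DEFAULT CURRENT_DATE)""")
    con.execute("INSERT INTO invoices DEFAULT VALUES")
    print(con.execute("SELECT status, created FROM invoices").fetchone())

    # A procedural "constructor": a function returning an initialized record.
    def new_invoice(status="open"):
        return {"status": status, "items": []}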
- Myth: SQL is the best relational language
I listed this myth because some have suggested that SQL
can get too convoluted or is too hard to learn, and thus
they argue one should consider using language-specific
collections (such as arrays) and
loops instead of full-blown SQL.
Ironically, complaints can often be heard that
"proper OO is hard to learn".
However, I perfectly agree that SQL is far from the ideal
relational language and also that existing RDBMS could
use some improvements that would make software development
and maintenance easier. For example, if more RDBMS provided
temporary or user-definable "views", then larger SQL expressions
could be simplified by being divided up into smaller SELECT
statements without bothering the DBA to create a view.
But even with its flaws, SQL
is, in my opinion, often superior to the OO practice of
reinventing a database
and/or query system from scratch.
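On the "views" point above, here is a hypothetical sqlite3 sketch:
the session defines a temporary view so a larger SELECT can be
split into named, smaller pieces without involving a DBA.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE sales(region TEXT, amount REAL);
        INSERT INTO sales VALUES ('east', 100), ('east', 50), ('west', 75);
        CREATE TEMP VIEW region_totals AS
            SELECT region, SUM(amount) AS total FROM sales GROUP BY region;
    """)
    # The final query now reads like prose instead of one nested expression.
    for row in con.execute("SELECT * FROM region_totals WHERE total > 60"):
        print(row)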
See: Alternatives to SQL,
Criticism of SQL,
Table Q & A
- Myth: OOP would have prevented more Y2K problems
See: Response to Meyer's Y2K claims
- Myth: OOP "does patterns" better
The OO crowd appears more likely to view "patterns" as coding
patterns, while procedural/relational (P/R) tends to view them
as mostly relational, Boolean, and set-based formulas
or queries.
P/R also tends to view them as "virtual", "local", and/or
"as-needed" (HAS-A) instead
of global (IS-A). It is my opinion that
the "virtual formula" approach is more change-friendly,
compact, and cleaner in most cases.
See: Procedural/Relational Patterns
- Myth: Only OOP can "protect data"
In practical business applications, a class
is not likely to be the
sole owner of any important piece of data. Thus, a class
may wrap a view or be a proxy for the data,
but not actually "own" the data itself.
If you want a flexible protection system that is not
limited to just basic nesting patterns, then ACLs (Access
Control Lists) would probably be the best way to
go. One could then have fine control over which
module, routine, method, class, or user had access to
which resource.
However, neither paradigm has ACLs built in, and thus
neither can claim high-level protection out of the box.
See Data Protection
for more on this.
Further, even if OOP achieved pure wrapping of
data somehow, that approach has downsides. One is an
increased need to
reinvent database-like
protocols/methods, bloating up interfaces,
and the other is an increased
risk of needing to do the
Noun Shuffle
dance.
Many of the OO protection claims can also be done on
RDBMS using triggers, referential integrity, views,
and stored procedures. For example, an "update
trigger" can ensure that any and all new records pass a
given test (sketched below). Further, these apply
to multiple languages using the same data, unlike
most OOP.
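Here is a small sqlite3 sketch (hypothetical schema) of that
trigger point: the database itself "protects the data" by
rejecting bad records, no matter which language or application
writes to it.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE accounts(id INTEGER PRIMARY KEY, balance REAL);
        CREATE TRIGGER no_negative_balance
        BEFORE INSERT ON accounts
        WHEN NEW.balance < 0
        BEGIN
            SELECT RAISE(ABORT, 'balance may not be negative');
        END;
    """)
    con.execute("INSERT INTO accounts(balance) VALUES (100)")     # passes
    try:
        con.execute("INSERT INTO accounts(balance) VALUES (-5)")  # rejected
    except sqlite3.IntegrityError as e:
        print("trigger blocked it:", e)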
- Myth: Implementation changes significantly
more often than interfaces
Object oriented literature seems overly obsessed with the
idea of a stable interface wrapping and hiding a turbulent, bubbling
implementation. However, they actually bubble together,
in my experience. In other words, significant changes in
implementation are quite likely to result in significant
changes to the interface also.
I don't see anything in
OOP that assists with interface change management more
than other paradigms. In many cases it just seems to
produce excessive layers, which require a lot of work
when interfaces change because the change potentially
affects every layer. (Layering is not native to
just OOP, but OO fans are more likely to get carried
away due to excessive "hide at all costs" indoctrination.
Layering also has limitations over
multi-viewpoint abstraction.)
Some OO practices also increase the interface
size beyond what is necessary.
Repeating collection-oriented operations
in many classes and components
instead of using existing collection engines to manage
collections is an example of this.
Lack of interface factoring (consolidation) is just as
problematic as lack of implementation factoring.
However, OOP doctrine tends to ignore the former.
See also: Driver Pattern
- Myth: Procedural/Relational ties field types and sizes
to the code more
See: Zip-code Example
- Myth: Procedural cannot extend compiled portions very well
See: Compiled Unit Separation
- Myth: No procedural language can re-compile at the routine level
See: Compiled Unit Separation
- Myth: Procedural/Relational programs cannot "factor" as well
Although "factoring" has grown to mean a lot of things, the
most consistent "core" meaning is removing repetition of
code or code patterns to one or fewer spots. This tends to
make the code smaller and reduce the number of spots that
have to be changed for any given change.
I have yet to see any code evidence that OOP clearly
reduces repetition over other paradigms in
the stated domain.
I have seen cases where it may trade
one type of repetition for another, such as with
aspect intersections,
but not produce a clear net reduction.
I have also seen cases where the OO fan has
only had exposure to poor procedural programming
techniques and/or languages. In other words,
they often compare bad procedural/relational
code to decent OOP code.
See: Repetition Factoring,
Inheritance vs. Defaults,
Inheritance Code-Size Study,
Interface Bloat
- Myth: OOP models human thought better
This claim is especially common in the older OOP
literature. I don't know exactly how they come
to this conclusion, unless it is subjective.
I don't question that some people may indeed
think better under OOP.
See: One Mind Fits All?
- Myth: OOP is more "modular"
"Modular" is kind of a vague term. It often
implies a Lego-like building-block structure
that allows one to snap together pre-built components
and/or provide divisions to better manage and
think about things.
I find that the ideal grouping of something tends to
be relative to particular needs. There is
usually no one ideal "global" grouping because
multiple aspects are usually involved with anything
non-trivial; thus, our grouping choice is usually
a compromise. Factoring a significant portion of information
into relational tables assists with such relativism because
one can issue queries or views to bring about the desired
"virtual" grouping on an ad-hoc basis.
Further, OOP seems to lack a grouping partition size
between class and application. This makes it hard for
me to navigate a "sea of classes".
See: Encapsulation,
Components,
Aspects
- Myth: OOP divides up work better
This is yet another claim that is poorly
articulated. I find that dividing up the
code into task units (task-based modules)
does a pretty good job of dividing up
development labor. Inter-task communication
can often nicely be handled via
table/database "messaging".
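Here is a hypothetical sketch of such table "messaging": one task
unit inserts work requests and another polls and claims them. The
table is the interface, so teams can build each side independently.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE msg_queue(
        id INTEGER PRIMARY KEY, task TEXT, status TEXT DEFAULT 'pending')""")

    def send(task):    # producer-side task unit
        con.execute("INSERT INTO msg_queue(task) VALUES (?)", (task,))

    def take():        # consumer-side task unit
        row = con.execute("""SELECT id, task FROM msg_queue
                             WHERE status = 'pending' LIMIT 1""").fetchone()
        if row:
            con.execute("UPDATE msg_queue SET status = 'done' WHERE id = ?",
                        (row[0],))
        return row

    send("reindex customers")
    print(take())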
See: P/R Business Modeling
- Myth: OOP "hides complexity" better
See: Abstraction
- Myth: OOP better models spoken language
This is a claim that I don't see often, but those who hold it
seem pretty adamant about it.
See: Modeling Sentences
- Myth: OOP is "better abstraction"
See: Abstraction
- Myth: OOP reduces "coupling"
"Coupling" is another one of those buzz-words that
is sometimes waved about like a magic sword. It generally
refers to things being tied together so that if one changes,
it may impact the other. Not surprisingly, reducing coupling
in one spot often increases it in another. Thus,
measuring the total impact can get sticky. Besides, coupling
is sometimes a good thing because it allows groups of things to
be moved or handled as a unit instead of repetitiously handling
each item by itself. Coupling is something to be
managed, not gotten rid of. OO's encapsulation is even
a form of coupling. (Whether
encapsulation
is a "good" form of coupling or not is another topic.)
Some say that "proper OOP" will reduce coupling.
My observation is that the
change patterns
they assume in their calculations or reasoning
are not very realistic, at least not in the stated domain.
They are unknowingly increasing some forms of coupling,
such as coupling to assumptions of
continued mutual exclusion, assumptions that
one aspect will remain more
important than others, and
excessively tight
integration to complex protocols.
- Myth: OOP does multi-tasking better
It is sometimes claimed that OOP can better launch
and/or manage multiple processes within an application.
I have not seen enough examples from the business world
to fully evaluate this claim, but will point out that
one can often use a database to manage communication
between multiple processes. There are many features, such
as transaction management, provided by most databases that
can greatly reduce the amount of application code devoted to
managing inter-process communication. See
concurrency comment in OOSC2.
- Myth: OOP scales better
The claim is that OOP is better for building
large systems and/or large applications.
Scaling comparisons tend to be a can of worms because of
the different techniques and styles in partitioning
between paradigms. Also, batch, client/server, and
web applications tend to have different
partitioning strategies from each other.
Looking at EXE size
may be useful for client/server comparisons, but
not for web applications which may rely mostly on a bunch of
scripts and use mostly the database to communicate state
between scripts instead of language structures in RAM.
Decent procedural/relational tends to rely on
the database(s) as the primary "backbone" to glue
relatively independent tasks or events together, and not
application code. OOP seems instead to
want "one big EXE". Many of the scaling failures of
procedural code that led some to turn to OOP happened because
database usage was not well understood back then.
Using databases reduces the need for large in-memory (application)
constructs.
Anyhow, comparing
the size of the two is often comparing apples and oranges.
Are three hundred Microsoft Access applications hooked to a
large database via ODBC a "large system", a
bunch of "small systems", or both? (I am not
promoting Microsoft tools here; just providing food
for thought.)
See: Size comment from OOSC2
- Myth: OOP is more "event driven"
My search for a consistent definition of
"event driven" has failed to turn up anything
worth repeating. Getting objects to respond
to GUI events usually requires a non-trivial
framework. Such frameworks can be built a
number of ways in a number of paradigms.
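As a minimal sketch (all names hypothetical), an "event table" can
be a plain mapping from (widget, event) pairs to handler functions.
Nothing here requires objects.

    def save_clicked():
        print("saving...")

    def name_changed():
        print("revalidating name...")

    event_table = {
        ("saveButton", "click"):  save_clicked,
        ("nameField",  "change"): name_changed,
    }

    def dispatch(widget, event):
        handler = event_table.get((widget, event))
        if handler:
            handler()

    dispatch("saveButton", "click")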
See: Event Tables
- Myth: Most programmers prefer OOP
To be fair, there are no known good surveys on
developers' opinions on OOP. To complicate matters,
here are some other factors that may
affect any measurement or survey:
- Many "high-level" OO gurus say that most actual
production code written in OOP languages, such as Java,
tends to be procedural in nature. In other words,
many programmers use OOP syntax but little OO beyond
that, without even knowing that their code is not
really OO (or is just "OO-Lite").
- Many programmers switched to OOP because they and/or
their shop didn't know good procedural/relational techniques
or used only lame procedural languages.
They are comparing decent OO to bad p/r because they don't
know any better. This problem is made worse by the fact
that most software engineering training focuses on the
OO paradigm, ignoring the others. In other words, the OOP
hype is becoming a self-fulfilling prophecy by smothering
education in the alternatives.
- Most programmers I know simply "go with the flow".
If assembly languages came back in style, they would gladly
flock to them if it increased their paychecks and/or made
them more employable. They would tend to answer a survey
with whatever is in style because that is where they are
headed. In my observation, the choice of paradigm is mostly
determined by what managers prefer, and they are more
easily swayed by toy examples, clever cliches, and
brochure-talk than programmers. But, programmers generally have
to go along with the preferences of those who hire and fire.
- Being popular is not necessarily the same as
"being good".
See Also: Pro-OO Debate Tactics,
Paul Graham on "Being Popular"
- Myth: OOP manages behavior better
This is a claim that I have heard more of recently.
For the most part, I find behavior and data interchangeable.
One can shift an application toward a more code-centric
(behavioral) design or toward a data-driven design, if one knows how.
I lean toward data-driven designs because data is
easier to customize one's view of than code. There
are more "maths", tools, and techniques to browse
and manipulate data views than there are code
tools. Things that don't convert smoothly
to easy-to-view data include
Interweaving Orthogonal Conditions.
But, I don't see how OO improves these either. Subtype theory
and its related polymorphic dispatching are just not powerful enough
to deal with multiple dynamic dimensions without making a mess.
And, "responsibility-driven design" (see above "primary noun" myth)
assumes a tight one-to-one relationship between nouns and verbs,
which, in my observation, often really needs to be
prepared to go many-to-many.
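Here is a hypothetical sketch of shifting "behavior" into data:
discount rules live in a table (easy to browse, sort, and query)
instead of being hard-coded into a class hierarchy. Adding a rule
becomes a data edit, not a code edit.

    discount_rules = [
        # (customer_type, minimum_total, discount_rate)
        ("retail",    100.0, 0.05),
        ("wholesale",   0.0, 0.10),
        ("wholesale", 500.0, 0.15),
    ]

    def discount_for(customer_type, total):
        # Pick the best rate among the rules that match.
        rates = [rate for ctype, minimum, rate in discount_rules
                 if ctype == customer_type and total >= minimum]
        return max(rates, default=0.0)

    print(discount_for("wholesale", 600))   # 0.15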
See:
Code Neutering,
Data Dictionaries
- Myth: OOP gets stains out of clothing better
This myth is still under research. It may turn out to
be the only myth that is accurate. I will keep
you posted.