Short-Term Versus Long-Term

Planning Issues Regarding Time

Updated 5/1/2002


Payoff Rare, Late, and Not Being Measured

"You can only improve what you can measure or care to measure"

There are some studies that show that OOP only pays off if a programming project is "well-managed". There are also a few studies showing that even non-OOP COBOL projects can achieve the same famous "code reuse" and "flexibility" goals that OOP lays claim to if managed properly. (Sorry, I accidentally lost the references.) Thus, OOP's goals may be achievable simply through commitment to them, and OO languages are not a prerequisite. However, let's continue assuming OOP is unique.

Studies show that most OOP projects do not really achieve the promised improvements because achieving them requires all parties to buy into the proper methodologies. These parties include programmers, project managers, and top-level managers. Since code reuse and added flexibility often come only after a few years, many are unwilling to follow the process correctly. Using OOP and OOD (Object Oriented Design) often slows down progress in the first few years. If done right, OOP works as a long-term investment; it is not an instant-results tool, even according to OOP experts and proponents.


I know of an invoicing system that was tossed after 18 months in use because the company purchased a large accounting package that had an invoicing system already bundled with it. There was no evidence that the bundled invoicing system was better, but the prior one was still tossed. No documented comparison or survey was ever done. Billers later admitted that the old one was better overall. (The bundled one required too much invoice setup work because it was made to be more generic, and thus had more levels and more question boxes to manage. It was also slower.)

Management whims, mergers, "hostile" takeovers, departmental reorganizations, new CEOs, etc. are part of most corporations. I guesstimate that custom software systems last about 3 to 5 years in service on average. That barely reaches the alleged payback period of OOP, and you have to go several years beyond the break-even point to make up for the startup loss. Plus, distant future results should be discounted using accepted investment mathematics. Some claim that investment time discounting should not apply to software projects, but their arguments are not very strong.

[time discounting graph]
This shows a finance curve that uses a 13 percent discount rate, which is fairly typical. Steeper curves are often used. Inflation is usually not a major component of time discounting.
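
To give a rough feel for what the curve means in dollar terms, here is a small illustrative calculation. The $100,000 benefit is a hypothetical figure, not a measured one; only the 13 percent rate comes from the graph above.

    # A minimal sketch of time discounting, assuming a hypothetical
    # $100,000 benefit and the 13 percent rate mentioned above.
    def present_value(amount, years, rate=0.13):
        """Discount a future amount back to today's dollars."""
        return amount / (1.0 + rate) ** years

    for years in range(1, 6):
        print(years, round(present_value(100000, years)))
    # Year 1 is worth about 88,500 today; year 4 about 61,300; year 5 about 54,300.

In other words, even if an OO investment does break even in year 3 or 4, the dollars recovered then count for roughly 30 to 40 percent less than the dollars spent up front.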

Perhaps packaged software companies and some nonprofit institutions keep custom software longer, but these make up a minority of companies.

One often hears stories of COBOL projects that have been around for 30 years, but these are probably the exception. Besides, it is difficult to tell what will last 30 years from what will last 30 months. The old-timer applications are there by luck and circumstance. Thus, treating every application like it will last 30 years may be a waste of resources when weighed against the laws of probability and finance.

Further, many companies would not want to commit to one language for the amount of time needed, in many cases, to get OOP to pay off. Today's centerfold technologies are not likely to be tomorrow's centerfolds. The very fear of obsolescence is self-fulfilling: it is harder to find programmers for older languages because programmers are afraid of being stuck with a dying language and so move on.

Many project teams just slap the label "OOP" on their project because they used OOP languages and tools, not because they took advantage of OOP potential (Object-Oriented Design). OOP by itself does not improve productivity. Object Orientedness is as much a mindset and philosophy as it is language constructs. (OO languages are designed to facilitate the philosophy, not guarantee it.) Many studies say in practical terms that most OOP projects fail to produce benefits over non-OOP methods because for political, technical, or training reasons, OOP is "not being used properly."

As programmers we can say from experience that managers rarely reward or encourage long-term planning. Obviously there will always be a tradeoff between getting a project done fast and building a project with future changes in mind. Since managers do not give a rat's behind about 5 years from now, when their job may not last another 5 months, the programmer is usually forced to take the fast route. This is the sad but true reality.

It is also difficult to quantify the amount of future planning in application design. The proof is in the future and in the details, and the people who sign a developer's paycheck rarely dig that deep or keep detailed notes. Even if such records were kept, the "perpetrator" is likely to be somewhere else 4 years down the road. The Y2K problem showed this time after time. Managers (understandably) spend much more time putting out fires than they do hunting down distant root causes.

Software building and maintenance can be likened somewhat to automobile repair. When you take your car into the mechanic, your primary concern is that he/she fix the problem and make the car run. It is possible that the mechanic uses shortcuts that get the job done a bit faster and cheaper, but perhaps leave you with other, long-term risks. For example, they may use cheap parts, use wire that corrodes when placed near certain vents or parts, not fully tighten a hard-to-get-to nut, etc. Most of these shortcuts may not surface until several years later. How do you know if such shortcuts were taken?

If you worry about it, you could pay another shop to perform an inspection. However, not all shortcuts can be traced back to the mechanic who took them. Also, how do you know that the inspector is not bad-mouthing the last mechanic just to get your business? (I used to legitimately badmouth the work of prior programmers and designers fairly often, although it usually fell on deaf ears and made me sound like a whiner.)

And, when the shortcuts finally do produce problems years down the road, how do you know which mechanic made them?

And, if you do find out who the "perpetrator" is, they may no longer be at the same shop.

And, if by chance they are still there, how are you going to punish them? You can stop going there, but they already have your money from years back. Keep in mind that they may have done a good job of solving the immediate problem and getting you back on the road, so you have little reason to harbor many bad feelings.

Sure, there are good mechanics who take the time to do the job right, but these will be the minority if there is no feedback mechanism to encourage long-term planning and decisions. Such a mechanic will see that a coworker gets the job done 20% faster by taking some shortcuts. To management and the customer, that person looks 20% more efficient. And, if management later finds out about the shortcuts, a case can be made that they brought in 20% more profit with little or no measurable evidence of customer loss. (An experienced mechanic eventually learns which shortcuts are the least likely to be detected or traced.)

The bottom line is that complexity is tough to manage, and even tougher to manage for the long term. Most of the processes needed to achieve long-term management are not in place. It takes all of the steps below working together, not just 2 or 3. For example, you can't reward somebody for something done 4 years ago if you have no record of what they did. Here is a recap:

Necessary Steps to Long-Term Results

  1. Research (info collection)

  2. Study (the research)

  3. Encouragement (of long-term thinking)

  4. Measurement

  5. Recording (all the prior)

  6. Hindsight (studying the past)

  7. Reward and Punishment (via paychecks, perks, plaques, etc.)

Pure indoctrination without follow-up may work for short periods of time on some individuals, but in the long run lack of sufficient feedback (rewards) will get you.

Although it sounds like we are bashing shortsighted, chaotic companies, in many cases their internal experience with chaos allows them to better deal with chaotic external markets. Further, the pace of change and confusion is increasing, not decreasing. For good or bad, software is being seen and advertised as a disposable technology.

In short, OOP may have some great theoretical benefits, but it often does not fly any better in the real world, where the feedback cycle is shorter than the (alleged) OO payback cycle, unless you want to toss current, widely-accepted investment principles out the window. Forcing increased complexity onto developers without the intention to follow up in order to reap the benefits is a waste of time, training, and resources.

 

Up-Front Investment?

Global Models Versus Local Models

It is sometimes said that OOP takes more effort or planning up front, but pays off in the long run if such "investment" is taken.

However, my observation is that OOP takes more time up front because it tries too hard to create a "global model". If you get the global model wrong, then your application may be messy or hard to change.

Procedural/relational modeling, on the other hand, strives for "local models" (or sub-models). You generally apply relational algebra to create the "virtual view" that is needed by a given task. This approach is superior in my opinion because first of all, one modeling viewpoint is often not sufficient. There is no One Right Model. Relational algebra allows one to create the custom view needed by a particular task or user.

Second, the software is not bloated up with the "global model". You don't have to wade through the global model to work with stuff not related to it. One task's viewpoint usually does not interfere with another.

Third, one does not have to worry as much about "getting the model right up front". Since the viewpoints are created as-needed per task, there is no need or pressure to create a "good" global model up front. The model/viewpoint is created and tuned for local needs as they come up.

Fourth, a virtual local relational viewpoint/model is generally easier to change without affecting other parts. P/r model/viewpoints tend to be isolated by task. One task's model does not interfere with, or compete for eye-space with, that of another task. (There are times when one may want to factor common local patterns into global, or at least shared, viewpoints/models if they are likely to stay the same over time.)
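
As a small illustration of the per-task "local view" idea described above, here is a sketch using a made-up invoices table (all names and figures are hypothetical). Two tasks each derive their own virtual viewpoint from the same base data instead of sharing one global object model:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE invoices (id INTEGER, customer TEXT, amount REAL,
                               paid INTEGER, due_date TEXT);
        INSERT INTO invoices VALUES
            (1, 'Acme', 500.0, 0, '2002-06-01'),
            (2, 'Beta', 120.0, 1, '2002-04-15'),
            (3, 'Acme', 300.0, 0, '2002-03-01');
    """)

    # Task A: the collections clerk only needs unpaid invoices.
    con.execute("""CREATE VIEW unpaid AS
                   SELECT id, customer, amount, due_date
                   FROM invoices WHERE paid = 0""")

    # Task B: a sales report only needs totals per customer.
    con.execute("""CREATE VIEW customer_totals AS
                   SELECT customer, SUM(amount) AS total
                   FROM invoices GROUP BY customer""")

    print(con.execute("SELECT * FROM unpaid ORDER BY due_date").fetchall())
    print(con.execute("SELECT * FROM customer_totals").fetchall())

Neither task had to agree on, or wade through, a shared global model; each view can be tuned or discarded without touching the other.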

True, a good relational model has to be designed well up front. But the same issues found in relational modeling also pop up in OO modeling. The issues of relational modeling tend to be universal design issues, such as the "quantity of relationships" (one-to-many, many-to-many, etc.) and repetition factoring. Repetition factoring is factoring out duplication so that a given piece of information or pattern is stored in one spot and referenced from there instead of being replicated. This reduces code or schema size and makes many kinds of changes easier because one does not have to hunt down all the duplicate instances to make a change. In other words, relational modeling tends to be a subset of OO modeling. It is simpler, however, because behavior is generally not also dragged into the mix.
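
As an example of repetition factoring, here is a tiny before-and-after sketch (the data is made up for illustration):

    # Before: the address is replicated on every invoice row, so a change
    # of address means hunting down every copy.
    invoices_flat = [
        {"inv": 1, "customer": "Acme", "address": "12 Elm St", "amount": 500.0},
        {"inv": 2, "customer": "Acme", "address": "12 Elm St", "amount": 300.0},
    ]

    # After: the address lives in exactly one spot (a one-to-many
    # relationship between customers and invoices).
    customers = {"Acme": {"address": "12 Elm St"}}
    invoices = [
        {"inv": 1, "customer": "Acme", "amount": 500.0},
        {"inv": 2, "customer": "Acme", "amount": 300.0},
    ]

    customers["Acme"]["address"] = "99 Oak Ave"   # one change, seen by every invoice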

Some textbooks talk about "grouping related or dependent items" into their own tables in relational design, but I am generally skeptical of these. Quantity of relationships and repetition factoring alone are usually sufficient, in my opinion, and excess grouping creates unnecessary multi-aspect conflicts.

The "global model" viewpoint is also working its way into OO patterns, which is not a good sign. One pattern does not fit all.

I notice that OO proponents tend to fight over which (UML) diagram is best for application design. It is my opinion that all of the chart types can potentially contribute; each offers a different perspective from which to analyze the problem space.

 

Abstraction Distraction - The OO Tax

I will tentatively agree that OOP is a good tool for providing high levels of abstraction. (Although procedural and relational abstraction is under-explored and often given a bad rap via poor or rigged examples in pro-OO books.)

The problem is that building generic abstract modules (intended for reuse in later projects) requires roughly three times the effort of a project-dedicated (regular) module. Although no references will be given here, this figure is fairly widely accepted in the industry.

Most businesses, though, do not want to take on this expense. The reasons for this are many, and some were already discussed. Most businesses that care about long-term frameworks are those in the business of making software for others. Examples may be ERP systems or graphic component builders.

However, what about the rest of the businesses that don't want to make the abstraction tradeoff? They are all but forced to deal with OOP and OO concepts that are not applicable to them.

It increases training costs and software complexity because OOP roughly doubles the complexity of a programming language and hands more "tech toys" to half-informed programmers to make messes with. If programmers made messes with procedural and relational concepts (I have seen too many), what makes you think they will be nicer to OOP? More indoctrination? Not!

Because OO concepts sound so wonderful in brochures and vendor sales meetings, they are forced into too many products and languages. Thus, non-abstractors are being forced to pay an "OO tax" without representation. (Most programmers use at least some abstraction, but not at the level that OO philosophy assumes.)

Can abstraction framework building be divorced from framework users? Microsoft has somewhat done this by not directly putting OO inheritance in their Visual Basic (VB) product. They suggest using C++ to build high-end OO components for VB. I believe this is one of the reasons behind the success of VB despite its reliability and consistency problems. (Inheritance is the least useful and most abused of the OO concepts in my opinion.)

Why are excessive abstraction-related or reuse-related concepts and idealism being shoved down the throats of those who don't need and/or want them? OO proponents like Meyer and Booch have done a great disservice to many in the business community.

Booch even had the audacity to say the big Internet companies should have planned more. True, if Ebay had stopped to absorb the inherent "abstraction investment" delay, they might not have had their famous growing pains. But if they had listened to Booch, they would not have been growing fast in the first place. They are big mostly because they were early! You can plan and abstract all you want now, but your chance of taking Ebay's market share away is Booch, um.....I mean Zilch. (The companies most likely to take away market share from Ebay are other big early-birds like Yahoo and Amazon.com.)

I am sure the excuses from Booch and Meyer will resemble something like, "Well, if you had only paid for expensive trainers and consultants like us, then you would have reaped the OO benefits." Shall we call that abstract marketing?

Note that other paradigms can also have well-packaged components/interfaces/APIs etc., and do them well, depending on the language and the skill of the programmers. However, the OO philosophy and/or training materials emphasize these more for some reason. OO's brand of high abstraction is not the only game in town, just the one with the most attention paid to it.

See Also:
Planning Q & A
Goals and Metrics
Procedural/Relational Patterns
Abstraction
The Reuse Dilemma

Note: this section is slated for a re-write one of these days.


Main OOP Criticism
© Copyright 2001 by Findy Services and B. Jacobs