Title: Can Paradigms or Tools be Objectively Evaluated?

!!!!!!!!!!!!!!!!
NOTE: This document has been superseded by: Goals and Metrics.

 

Much of the debate and name-calling here (in forums) seems to orbit around concepts that appear to be either subjective or, so far, non-measurable.

Is it possible to come up with practical metrics for different paradigms and different problem-solving approaches?

If we can find reasonable metrics, then we can evaluate the alternatives without trading personal insults based on fuzzy impressions of our favorite paradigm or tool. I am a bit skeptical that it can be done, but have not entirely ruled out the possibility.

I will toss in a few candidates just to start the discussion.

1. Code brevity

This could be the number of lines of code needed to carry out a given algorithm. However, "lines" can be defined in different ways, and some languages purposely encourage splitting structures across multiple lines for alleged readability. Counting characters instead disfavors long variable names, which are not necessarily bad.

The most objective way I have come up with is to assign points to different "elements" of code. Here is a rough example:

variables: 1
function or method calls: 2
math and concatenation operators: 1
assignment: 1
IFs: 2
Loops: 4
Subroutine defs: 4
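To make the weighting concrete, here is a minimal sketch in Python. It assumes the element counts have already been tallied by hand or by a parser; the weights come from the table above, and the sample tally is made up.

    WEIGHTS = {
        "variables": 1,
        "calls": 2,             # function or method calls
        "operators": 1,         # math and concatenation
        "assignments": 1,
        "ifs": 2,
        "loops": 4,
        "subroutine_defs": 4,
    }

    def brevity_score(counts):
        # Sum weighted element counts; a lower score means briefer code.
        return sum(WEIGHTS[element] * n for element, n in counts.items())

    # Example tally for a small routine (made-up numbers):
    print(brevity_score({"variables": 3, "calls": 2, "ifs": 2, "loops": 1}))  # 3+4+4+4 = 15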

Still, a problem with code brevity is that shorter is not always better. A longer solution may be more changeable or scalable, for example. However, those are separate factors to weigh into the equation.

2. Scalability

This is how scalable or general-purpose the solution is. A bubble sort is simple to write, for example. However, it does not scale well compared to other sorting methods.
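As a rough illustration (a sketch, not a benchmark), bubble sort performs on the order of n^2 comparisons, while a merge-based sort needs only about n log n:

    def bubble_sort(items):
        # Simple to write, but O(n^2): repeatedly compares adjacent pairs.
        items = list(items)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    print(bubble_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]

    # At n = 1,000 the loops above make roughly 500,000 comparisons;
    # Python's built-in sorted(), an O(n log n) merge-based sort,
    # needs on the order of 10,000.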

General-purpose may mean it can be applied (plugged in) to many different types of problems. For example, a general-purpose add-in may be able to accept a wide variety of data types. However, in some cases a wide variety of data types will not be encountered. For example, unique record, node, or item IDs are usually just numeric or string values.

In weighing scalability or generality, one must also consider the likelihood of reuse. If reuse of a component is not likely to happen anytime soon, then it may not be an important issue.

In finance it is common to discount future results. In other words, a dollar today is worth more than a dollar tomorrow. This reflects not just inflation but also risk: the source of tomorrow's dollar may dry up. For example, if new technologies or paradigms come along every few years to replace the existing system, then one should not put much effort into long-term reuse paybacks.
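For instance, the standard present-value formula PV = FV / (1 + r)^n makes the point concrete; here the discount rate is an arbitrary stand-in for technology-churn risk, not a real market rate:

    def present_value(future_payback, rate, years):
        # Discount a future reuse payback to today's terms.
        return future_payback / (1 + rate) ** years

    # A $1,000 reuse payback expected in 3 years, discounted at 20%/year
    # to reflect the risk that the platform is replaced before then:
    print(round(present_value(1000, 0.20, 3), 2))  # 578.7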

Further, it appears that many employers inadequately reward long-term results and overly reward meeting preset deadlines. This is a messy political factor that must be considered.

Then again, some argue that financial principles should not be applied to IT.

Note that it can be argued that scalability and reuse are separate criteria. However, the distinction between reuse for another project and a slowly changing (growing) existing problem can be seen as minor.

Further, reuse is only useful if it actually happens. There is a cost to hunting down, evaluating, and reapplying an existing module, class, or add-in. Sometimes reinventing the wheel is quicker than, or not meaningfully slower than, paying that reuse overhead. A good artist may be able to draw a new butterfly faster than they can search their clip art. For this reason, some RAD packages emphasize fast reinvention of the wheel instead of reuse.

3. Protection

This is a language or system protecting against the accidental misuse of one part by another. Database referential integrity is one example. Strong variable and/or object typing is another.

Protection is a nice feature, but may incur other penalties. For example, database referential integrity slows down processing somewhat because the system has to check every transaction to make sure it follows the rules.
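A small sketch using Python's built-in sqlite3 module (the table and column names are made up) shows both sides of the trade-off: the foreign-key rule blocks a bad insert, but only because the engine checks every transaction against it:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA foreign_keys = ON")  # turn on referential integrity checks
    con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")
    con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY,"
                " dept_id INTEGER REFERENCES dept(id))")
    con.execute("INSERT INTO dept VALUES (1)")
    con.execute("INSERT INTO emp VALUES (10, 1)")       # accepted: dept 1 exists
    try:
        con.execute("INSERT INTO emp VALUES (11, 99)")  # rejected: no dept 99
    except sqlite3.IntegrityError as err:
        print("blocked:", err)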

Some also feel that strong typing results in bloated code because more formal steps are often needed to prepare one item for use in another. These steps often come in the form of conversion functions or methods.
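Even dynamically typed Python shows the pattern in miniature; statically typed languages tend to demand more such steps:

    qty_text = "42"                  # value arrives as a string, say from user input
    total = int(qty_text) + 8        # conversion function needed before arithmetic
    label = "Total: " + str(total)   # and another one to go back to text
    print(label)                     # Total: 50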

4. Manageability

This is the ability to manage the overall structure of code or data structures. For example, I claim that Control Tables offer good manageability in many situations because related methods and categories are visually and logically grouped close together, without the formal need for repetitious "packaging" or digging around in stringy linear code.
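A minimal sketch of the idea in Python (the commands and handlers are hypothetical): each row groups a trigger, a description, and a handler, so related entries sit side by side instead of being scattered through linear code:

    def show_help(): print("help text...")
    def save_file(): print("saving...")
    def quit_app():  print("bye")

    # Control table: one row per command; behavior is laid out as data.
    COMMANDS = [
        # key, description,     handler
        ("h", "show help",      show_help),
        ("s", "save the file",  save_file),
        ("q", "quit",           quit_app),
    ]

    def dispatch(key):
        for k, desc, handler in COMMANDS:
            if k == key:
                return handler()
        print("unknown command")

    dispatch("s")  # saving...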

5. Feature Bloat

Some languages and tools try to make life easier on the programmer by providing built-in data structures, persistence, and/or functions/methods that perform commonly needed operations.

However, with too many of these built-in items, the tool becomes too large, perhaps too expensive, and perhaps served by fewer alternative vendors.

The choice of which goodies to include can also have an influence. A tool with rich math handling is of little use if you deal mostly with text. Here is one attempt to classify the common types of goodies that can be included and/or favored (a brief example follows the list):

- String operations and parsing
- Collection manipulation (sorting, searching, summarizing, persistence, cross-referencing, etc.)
- Project organization (classes, protection levels, scoping options, etc.)
- Access to system level operations (system calls, direct memory access, etc.)
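As a small illustration of the collection-manipulation category, Python's built-ins replace several loops one would otherwise write by hand (the data here is made up):

    orders = [("widget", 3), ("gadget", 1), ("widget", 2)]

    total = sum(qty for _, qty in orders)               # summarizing
    by_name = sorted(orders)                            # sorting
    widgets = [o for o in orders if o[0] == "widget"]   # searching
    print(total, by_name, widgets)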


© Copyright 1999 by Findy Services and B. Jacobs