Thoughts on Software Design--the list of articles used in this paper appears in section B.

When considering the software development process, one of the most overlooked yet most important aspects is the design process. Even with a team of outstanding developers and top-notch testers, the product will run into serious problems if little attention is paid to design beforehand. Along these lines, the intent of this paper is to give a very high-level overview of some of the more important aspects of software design by providing abstracts of eight major papers on the subject, along with some personal insight into how each design topic relates to industrial practice.

Opening the series of papers on design, "An Introduction to Software Architecture" by Garlan and Shaw provides the reader with an introduction to software design while addressing reasons behind good design, with the primary objective being to deal with the increasing complexity of today’s systems.

Starting with a brief history of software architecture, the paper shifts to some of the widely used architectural styles, such as pipes and filters, event-based systems, and layered systems. The authors conclude with numerous case studies describing how these styles were used in various projects.
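To make the first of those styles concrete, here is a minimal pipes-and-filters sketch in Python. The filter names and the toy transformations are my own illustration, not drawn from Garlan and Shaw's paper; the point is only that each filter consumes a stream and yields a transformed stream, so filters compose without knowing about one another.

```python
# Each filter is a generator: it reads from an upstream iterable and
# yields transformed items downstream. The "pipe" is just composition.

def read_lines(text):
    # Source filter: split raw text into lines.
    for line in text.splitlines():
        yield line

def strip_blank(lines):
    # Filter: drop empty lines.
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    # Filter: transform each surviving line.
    for line in lines:
        yield line.upper()

# Compose the pipeline; no filter knows its neighbors.
output_lines = list(to_upper(strip_blank(read_lines("hello\n\nworld"))))
# output_lines == ['HELLO', 'WORLD']
```

Because each stage depends only on the stream protocol, filters can be reordered or replaced independently, which is exactly the property the style is meant to buy.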

When viewing this paper from an industrial standpoint, no new ideas or solutions to problems are offered. That was not the intent of this paper, however; rather, the intent is to make the reader aware of some of the more widely used techniques when considering design. While it offers nothing revolutionary, I would without hesitation recommend this paper to any beginning professional in the field. While the examples presented in this paper are not the only available options for design, they help provide the reader with a base to work from--which is important when you are, like me, relatively new to the industry.

Moving from the general to the more specific, "Beyond the Black Box: Open Implementation" by Kiczales provides a description of black-box abstraction, a hallmark of software design that, while exposing functionality and hiding implementation, can lead to serious performance issues. To address this, Kiczales argues for a more open implementation that allows a client to control the implementation strategy of a module to best suit its needs. The author then goes on to describe an example where this strategy proves useful, ending with the goals such an implementation should meet.
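A small sketch may help show what "letting the client control the implementation strategy" could look like in practice. The module and strategy names below ("hash", "sorted") are my own invention, not from Kiczales's paper: the functional interface stays fixed, while a small meta-level hook lets the client pick the representation that best matches its usage profile.

```python
import bisect

class Table:
    """A lookup table whose functional interface (put/get) is fixed,
    but whose representation the client may select at construction
    time via a strategy hint -- a toy meta-interface."""

    def __init__(self, strategy="hash"):
        self._strategy = strategy
        self._dict = {}          # used when strategy == "hash"
        self._keys = []          # kept sorted when strategy == "sorted"
        self._vals = []

    def put(self, key, value):
        if self._strategy == "hash":
            self._dict[key] = value
        else:
            i = bisect.bisect_left(self._keys, key)
            if i < len(self._keys) and self._keys[i] == key:
                self._vals[i] = value
            else:
                self._keys.insert(i, key)
                self._vals.insert(i, value)

    def get(self, key):
        if self._strategy == "hash":
            return self._dict[key]
        i = bisect.bisect_left(self._keys, key)
        return self._vals[i]

# Same interface, different performance/ordering characteristics:
t = Table(strategy="sorted")
t.put("b", 2)
t.put("a", 1)
```

Note how this sketch also illustrates the complexity concern raised below: every operation now branches on the strategy, and each new strategy multiplies the testing surface.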

From a professional standpoint, I found Kiczales's argument interesting (as my job deals with performance in great detail), but I also feel it could turn around and violate the idea of controlling complexity, a concept he names as a central pillar in engineering. Methods that allow for more client control have, as the author discusses, their strong points. However, modules that follow this approach become increasingly difficult to develop and test. More importantly, while Kiczales argues for an open implementation, he offers no real solutions. What meta-interfaces should be used? What choices should the client be provided versus what they can implement themselves? Kiczales proposes a good idea, but leaves many unanswered questions.

"Design Patterns…" by Erich Gamma et al. proves to be difficult reading the first time, as one can completely miss the intent of the paper. Gamma proposes design patterns "as a new mechanism for expressing object-oriented design experience." At first he appears to be doing nothing more than giving classifications to various design patterns, along with examples of existing systems that apply the ideas laid forth in his paper. Upon closer inspection, however, one realizes that Gamma is doing more than providing an object-oriented dictionary; he is in fact presenting a system of classification for design patterns with the intent of facilitating better communication.

Design patterns consist of three essential parts: an abstract description, an issue in design addressed by the abstract description, and the consequences of applying this abstract idea to a system architecture. The benefit of having design patterns, so long as they adhere to the previously mentioned criteria, is that they remain "sufficiently abstract to avoid specifying implementation details, thereby ensuring wide applicability." Simply put, design patterns can remain abstract enough not to lose their value in some specific detail, all while providing hints about possible implementation problems.

Gamma then gets to the heart of his paper--categorizing design patterns in terms of two orthogonal criteria: jurisdiction and characterization (jurisdiction is the domain over which a pattern applies while characterization reflects what a pattern actually does). He then goes further by breaking down pattern jurisdictions in great detail for each characterization, concluding with examples of how these design patterns have successfully been applied to existing systems.

In my short time in the industry I have already noticed misunderstandings arise due to different interpretations of some concept of software design. While I may state a design principle a certain way, someone else may hear my words differently. To me a common language is needed in any aspect of software design, which is why I place such importance on "the English of object-oriented design" put forth by Gamma. While I may not use every aspect of his paper, the terms and terminology he presents are where I feel the importance lies.

Adding support to the concepts put forth by Gamma, "Experience Assessing an Architectural Approach…" by Sullivan and Knight describes their experiences using design patterns to help implement a large-scale system. Choosing OLE, the authors hope to address the concept of "architectural mismatch," a phrase first coined by Garlan et al. to describe "a variety of difficulties encountered in building an application from a collection of large-scale existing software components." The remainder of the paper is spent discussing various issues they encountered in attempting to design their program to support fault-tree analysis, ranging from abandoning their initial design due to architectural problems to limitations of the development tools themselves. Ultimately OLE was able to overcome many of the problems behind "architectural mismatch," but some difficulties remained. As long as attention is paid early on to component design, the software industry should continue to make advances "toward realizing the promise of rapid software development through integration of large-scale, reusable application components."

Looking at this paper as someone who has been in the industry, it’s nice to see some of these concepts put to use. I believe that is where the real value of this paper lies. Too often the academic side of computer science is not given as much attention as it should be, because industry is results-oriented. While someone may present a beautiful concept on software design, unless it is backed by solid examples of successful implementation it is likely to be given less merit or ignored altogether. Papers like this one help to bridge the gap between industry and academia by showing that some of these ideas about design really do work.

Moving away from design patterns, Ralph Johnson outlines the idea of Frameworks in his paper "Components, Frameworks, Patterns." Simply put, frameworks are another object-oriented reuse technique. The simplicity ends there however, as the reader is introduced to two different, but correct definitions of a framework. While the author concedes that a framework may be difficult to define clearly, this does not take away from their importance.

To sell the reader on the notion of frameworks, Johnson explains frameworks in terms of components (in this case code reuse) and design. Frameworks, Johnson contends, are an intermediate form--part code and part design reuse--thus eliminating the need for a new design notation by substituting an object-oriented programming language. In addition to these benefits, Johnson also lists the standard arguments: faster time to market, uniformity, and maintaining the concept of open systems (an argument mentioned earlier by Kiczales).

As already discussed, the definition of a framework can vary with the situation. While the definition may be in doubt, the benefits provided by frameworks should always be similar. One key benefit is the notion of an abstract class--a class with no instances, used only as a superclass. This allows a framework to become a large-scale design, describing how a program is decomposed into a set of components and objects and the interactions between them. Another benefit, and one of the more interesting notions behind a framework, is the concept of "inversion of control." Traditionally, reused components are called from the code they are being used in; frameworks are the exact opposite. In a framework it is the code being written that is reused by the framework: the framework drives execution and calls the developer's code, while the developer decides what gets plugged into the framework and whether any new components need to be added. While all these benefits of frameworks are important, the big takeaway is design and analysis reuse, as they are likely to provide the largest benefits in the long run.
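Both of those benefits fit in a few lines of code. The following is a toy framework of my own devising (the class and method names are illustrative, not Johnson's): the abstract class is never instantiated itself, and control is inverted because the framework's run() owns the loop and calls back into whatever the developer plugs in.

```python
from abc import ABC, abstractmethod

class Job(ABC):
    """Abstract class: defines the processing skeleton and has no
    instances of its own -- it exists only to be subclassed."""

    def run(self, items):
        # Inversion of control: the framework drives execution...
        out = []
        for item in items:
            out.append(self.process(item))   # ...and calls user code.
        return out

    @abstractmethod
    def process(self, item):
        """The hook the developer fills in."""

class Doubler(Job):
    # The developer's contribution: only the plugged-in step.
    def process(self, item):
        return item * 2

doubled = Doubler().run([1, 2, 3])
# doubled == [2, 4, 6]
```

The developer never writes the loop; the framework does, which is exactly the reversal of the traditional "my code calls the library" relationship described above.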

The rest of Johnson’s paper deals with the concepts behind a framework, from what to learn to what to evaluate to how to design to problems likely to be encountered along the way. The author concludes with some final thoughts on frameworks, which may best be summarized as "frameworks are hard, but worth it."

From a professional standpoint I would agree. The concepts and benefits involved in using a framework can, if applied properly and given enough time, make any development process go more smoothly. The only issue I had with Johnson’s work is that while he provides a substantial defense and description of frameworks, he does not spend enough time describing an effective recipe for creating one. I feel this is due in part to the fact that a framework can vary from project to project, and therefore a general description is hard to ascertain.

Taking a slightly different approach to problems associated with software design, Sullivan and Notkin attempt to address the notion of component independence. When designing a system with separate components, ideally all these components should be independent of each other. Simply put, their execution should not depend on the existence of another component. In practice this is not usually the case, as systems are designed with this idea of independence compromised. Proposing one solution to the problem, Sullivan and Notkin introduce the idea of a mediator.

A mediator can best be thought of as an additional component that sits between two existing ones, helping to maintain relationships between them. The benefit of mediators is that they allow for increased independence for each of the components in the relationship in which they participate. The independence comes from the fact that neither component is aware of its part in the relationship, addressing the notion of component independence described previously.

To support this idea the authors begin by describing three examples of alternative design approaches: encapsulation, hardwiring and events. In each of the examples independence is compromised in some fashion, making changes or additions to the system increasingly difficult. The fourth and final example involves using mediators and events, maintaining the idea of component independence, which the other examples were unable to achieve.
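A brief sketch may help show how a mediator plus events preserves independence. The components and the temperature-conversion relationship below are my own illustration, not an example from Sullivan and Notkin's paper: neither component references the other, each merely announces events, and the relationship lives entirely in the mediator.

```python
class Component:
    """Base component: can announce events to whoever subscribed,
    without knowing who (or whether anyone) is listening."""

    def __init__(self):
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)

    def announce(self, value):
        for fn in self._listeners:
            fn(value)

class Celsius(Component):
    def set(self, degrees):
        self.degrees = degrees
        self.announce(degrees)   # no knowledge of any other component

class Fahrenheit(Component):
    def set(self, degrees):
        self.degrees = degrees

c, f = Celsius(), Fahrenheit()
# The mediator: the ONLY place the c<->f relationship is expressed.
c.subscribe(lambda deg: f.set(deg * 9 / 5 + 32))
c.set(100)   # the mediator keeps f consistent; f.degrees == 212.0
```

Deleting the mediator, or adding a second one (say, to a Kelvin component), requires no change to either existing component, which is the independence property the paper is after.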

From a professional standpoint the concept of mediators has sound basis and should be considered in software projects, but not without a caveat. When adding mediators to maintain relationships between components you add increased complexity into the system through these additional levels of indirection. Before deciding on whether or not mediators are appropriate for a project, one must analyze the benefits they may potentially provide now and in the future. In some instances, the added work and complexity brought about by mediators may not warrant their use, especially in small software projects.

Out of all the papers reviewed, Parnas’ "On the Criteria To Be Used…" proved, in my opinion, to be the most outstanding of the lot. While this paper was written in 1972, the author conveys ideas that still ring true. The main point of the paper--using modularization as a mechanism for improving software and reducing the cost involved in producing it--rings as true today as it did more than twenty-five years ago.

Parnas begins by describing a module, which he defines to be "a responsibility assignment rather than a sub-program." In other words, it is not enough to simply be a separate function in a program; the function must also have a specific task it carries out in the context of the program. The author then goes on to describe a small KWIC index system, giving two descriptions of a design implementation. The first implementation is more traditional, where each major task of the program (i.e., input, circular shift, output, etc.) is broken into a separate module. The second implementation, on the other hand, makes use of information hiding, where each module is characterized by knowledge of a design decision that is hidden from all others. The support that Parnas offers for the latter implementation is quite striking in that he raises the notion of programs being able to cope with change. The second implementation, Parnas argues, is better able to handle changes in various parts of the program, while the first requires updates that spread across more than one module. The remainder of the paper continues to support the second implementation as the preferred method, as Parnas strongly encourages the reader to change their notions of software design to follow the criteria demonstrated in creating the second implementation rather than the more conventional line of thinking used in the first.
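One module from the information-hiding decomposition can be sketched directly. The interface names below are my own, not Parnas's originals: the LineStorage module hides the single design decision of how lines are represented, so a circular-shift module written against its access functions would survive a change of representation untouched.

```python
class LineStorage:
    """Hides the design decision of how lines are stored. Here the
    representation happens to be a list of word lists, but callers
    can only go through the access functions, never the data."""

    def __init__(self):
        self._lines = []   # hidden representation

    def add_line(self, words):
        self._lines.append(list(words))

    def word(self, line_no, word_no):
        # Access by (line, word) position -- the only contract exposed.
        return self._lines[line_no][word_no]

    def words_in_line(self, line_no):
        return len(self._lines[line_no])

store = LineStorage()
store.add_line(["software", "design"])
# store.word(0, 1) == "design"; store.words_in_line(0) == 2
```

Swapping the list-of-lists for, say, a single packed string with an index would change only this class, which is precisely the "coping with change" argument Parnas makes for the second decomposition.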

From my standpoint, the most amazing thing about Parnas’ paper is how he lays out some of the early concepts behind object-oriented programming years before it was even implemented. Some of the notions he lays out, such as information hiding and the minimization of communication across modules, are hallmarks of OOP today. The most striking statement in Parnas’ paper, however, can be found in his conclusion, where he states "we must abandon the assumption that a module is one or more subroutines, and instead allow subroutines and programs to be assembled collections of code from various modules." In that sentence we can see the beginnings of two major pillars of object-oriented design--inheritance and code reuse. When you consider the date of this paper, it makes the work even more astounding. Viewing Parnas’ paper from a professional standpoint, the concepts he describes are the foundations for modern software design.

In support of his original work, Parnas also wrote "Designing Software for Ease of Extension and Contraction" to clarify some of the principles he introduced in his "Criteria…" paper. Parnas argues that the software and software engineers of his day (relative to the date of the paper) were unable to handle the notion of change in a program because the concept of writing a family of programs, rather than a single program, was not recognized early in the design phase. The result is a large program that is difficult to modify, since any change ripples throughout the entire system rather than being confined to one small module--a failure of the property Parnas wants designs to have, namely that "it is always possible to remove code from a program and have a runnable result." To combat this, Parnas introduces the uses relationship.

Beginning with an analogy to a virtual machine to help change the reader’s notions about traditional software design, Parnas considers "a system divided into a set of programs" and defines the uses relationship between them. These programs in turn can be invoked by the normal flow of execution, by interrupts, or by an exception handling routine. As an example, consider two programs, A and B. According to the author, A uses B if correct execution of B is necessary for A to complete its task. The reader is cautioned against the mistake of considering "use" and "invoke" to mean the same thing, when in fact they are different. If A needs to call B only when certain conditions occur, but A can still be correct if B is incorrect or absent, then A has merely invoked B.
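The use/invoke distinction can be made concrete with a toy example of my own (not one from Parnas's paper). Here median() genuinely uses sort(), because its correctness depends on sort() being correct; it merely invokes log(), because its result remains correct even though the logger is deliberately broken.

```python
def sort(xs):
    # median's correctness depends on this being correct: median USES sort.
    return sorted(xs)

def log(msg):
    # Deliberately incorrect component, to show "invoke" tolerates it.
    raise RuntimeError("logger is broken")

def median(xs):
    ordered = sort(xs)             # uses: must be correct
    try:
        log("computed a median")   # invokes: failure is survivable
    except RuntimeError:
        pass
    return ordered[len(ordered) // 2]

med = median([3, 1, 2])
# med == 2 even though log() is broken -- median invokes, but does
# not use, log; it both invokes and uses sort.
```

In Parnas's terms, a subset of the system could drop log() entirely and median() would still be a runnable, correct program; dropping sort() would not leave one.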

Parnas dedicates the rest of his paper to supporting the uses relationship, even laying out a hierarchy to prevent unrestrained usage, which would cause interdependency throughout the entire system and make it difficult to adhere to the earlier notion of removing code but still having a runnable result. Again using modules A and B as an example, Parnas allows A to use B only when: (1) A is essentially simpler because it uses B; (2) B is not substantially more complex because it is not allowed to use A; (3) there is a useful subset of the system containing B but not A; and (4) there is no conceivably useful subset containing A but not B.

Parnas concludes with an example of a system created with the uses criteria in mind, and re-emphasizes the points in his paper and how they make the software design process easier.

The ideas laid out by Parnas in this paper impact my day-to-day work the most. From a testing standpoint, modularization makes perfect sense, especially when dealing with automation. There have been numerous instances when I have taken code from development in order to automate and/or test a certain piece of the product. The less the code I pull out depends on other areas, the easier my job becomes. While these ideas may have been new for their time, I do not see how anyone in the industry could overlook them today, as they are fundamental to design.

Software design is not an easy task. If not enough attention is paid to it, one is likely to have problems in the long run. It has been the intent of this paper to give the reader a brief overview of some of the aspects involved in design, as well as some personal insights into how they affect an industry setting. While each paper presents a different aspect of software design, I think together they address an overall theme: "the ability to code does not a product make." Regardless of how proficient someone is at writing or testing code, it does not make up for a poor design.