The operator ++ (pronounced plus-plus) in the name C++ means that this language is an increment over C. C++ was first thought of as C with classes, then later as an enhanced C. Dr. Stroustrup, the creator of C++, once wrote that he wanted C++ to be as close to C as possible, but no closer.
It is often argued that the obligation of being that compatible with C didn't let Dr. Stroustrup design a clean language. This would explain why C++ is not as elegant and purely object-oriented as languages like Eiffel or Smalltalk, which were designed from a clean slate.
Dr. Stroustrup denies this.
Newcomers to C++ can be awed by the puzzles that even innocent-looking language constructs can pose. An advanced C++ magazine called C++ Report prints a monthly column called Obfuscated C++, where bewildering language entanglements are shown. Sometimes it is sadly funny to see how many errors a poor C++ programmer can inadvertently create in his or her work.
The size of a language is defined by the number of its reserved symbols: words, operators etc. By this measure, C++ is a huge language. That means it is more difficult to create compilers for C++ than for other languages, and that those compilers inspire less confidence.
As we all know, a compiler is a piece of software like any other: a word processor, a spreadsheet, etc. There is still no way to fully prove the correctness of a piece of software, particularly in these days of complex programs and operating systems. One can only test a limited sample of all the conditions a piece of software may face.
So complex software is much more prone to errors than simple software. Thus, a C++ program potentially has a greater chance of being compiled incorrectly than a program written in a simpler language. And even supposing a program is compiled correctly, a complex language is also more likely to induce the programmer into error: the greater the number of choices one is faced with, the correspondingly greater the odds of writing a buggy program.
However, from a programmer's point of view, C++ has some advantages.
It is common sense that C programs are efficiently translated into a processor's machine language, since the structure of C reflects the conceptual structure of a typical processor.
Since the C++ programming language is so close to C, it is valid to expect the same of C++ programs.
When the first implementation of C++ was done (see cfront below), the compiler generated C code. Wise guys, those Bell Labs people.
As C had compilers for virtually every processor, C++ could spread all over the C world: every computer that had a C compiler could have a C++ compiler. Instant portability!
Since Borland C++ 3.1, C++ compilers yield straight machine code. And the situation nowadays is the opposite: with very few exceptions, there are no pure C compilers left, for all C compilers are actually C++ compilers that treat C as a kind of subset.
C++ has lots of useful features: from low-level pointers to multiple inheritance, everything is there.
This is why Dr. Stroustrup says that C++ allows multiparadigm programming. That means a programmer can use it to implement a structured design, without even thinking of OOP, or use it to implement orthodox object-oriented designs.
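As a tiny sketch of that freedom (the task and the names here are mine, purely illustrative, not Dr. Stroustrup's), the very same job can be written in plain structured style or wrapped in a class:

```cpp
#include <cstdio>

// Structured style: plain data plus a free function.
double circle_area(double radius)
{
    return 3.14159265358979 * radius * radius;
}

// Object-oriented style: the same data and behavior bundled in a class.
class Circle {
public:
    explicit Circle(double radius) : radius_(radius) {}
    double area() const { return 3.14159265358979 * radius_ * radius_; }
private:
    double radius_;
};

int main()
{
    std::printf("%f\n", circle_area(2.0));  // procedural call
    Circle c(2.0);
    std::printf("%f\n", c.area());          // message to an object
    return 0;
}
```

Both halves coexist happily in the same program; the language doesn't force either view on us.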
Since C++ has lots of features, programming in it is very comfortable. Usually we don't have to contort the language to do what we want: there's a feature just waiting for us somewhere. All a programmer has to do is use it.
That is not a synonym for bad programming practices. A program with a bad design is a bad program in any language. It only means a programmer isn't automatically forced into the same view of programming as the language's designer.
It is never useless to repeat that C++ is compatible with C. That means there is lots of software written in C that can be easily adapted to run in C++.
For instance, operating systems: several operating systems are written in C, which means most of their features are instantly available to C++ programmers. They don't have to wait till someone creates a binding for their language.
This is just a provocation that will bring the language wars to these pages, but any C++ programmer can only be pleased to see how fast his or her programs run.
As someone said, other languages are fast ways to create slow programs. |
As of this writing, it is no longer available.
Dr. Stroustrup is no longer the sole designer of the language, but he is still a very important voice in every decision of the committees. At least three of his books are very important to every C++ programmer:
About Bjarne Stroustrup's books published by Addison-Wesley.
His peers say his present interests are limited to Mickey Mouse, as he's the principal software engineer at Disney studios. =8-)
Two books by Lippman are generally considered important:
About Andrew Koenig's books published by Addison-Wesley. Andrew Koenig has also published Ruminations on C++: A Decade of Programming Insight and Experience, coauthored with Barbara Moo, who currently heads AT&T's Internet Architecture Division.
A classical example of paradigms and of paradigm shift is given by mechanics. Under the Newtonian paradigm, speed was conceived as having no limits, and the mass of a body was supposed not to change with the body's speed. Then there was a paradigm shift, and that model of thought was abandoned: relativistic, or Einsteinian, mechanics became the new model of thought in mechanics. In relativistic mechanics, the speed of a body can't be higher than the speed of light, and a body's mass grows as its speed increases.
In programming, the term paradigm shift is used to mean a change in the way the programming community conceives and creates programs. It is often said that structured programming (SP) was the paradigm most programmers used till the beginning of the 80's. In SP, the emphasis is on functions --- or fragments of code, subroutines etc. --- and on the top-down decomposition of a program. Then, at the end of the 80's and the beginning of the 90's, there was a paradigm shift, and programmers began to think of programs in terms of objects, aggregates of functions and data (see What's in an object below). The bottom-up approach then became important, as lots of libraries were available and people were interested in using them (see What's reuse below).
Ideally, in a paradigm shift, the new paradigm completely replaces the old one. That was what happened in the transition between Aristotelian mechanics --- all qualitative, numberless --- and Newtonian mechanics --- in certain ways a branch of analytical geometry.
However, that didn't happen in the transition from Newtonian to relativistic mechanics: there are plenty of people using Newtonian mechanics today, and indeed some people will never have to learn relativistic mechanics. Only people who deal with speeds close to the speed of light and with huge distances are forced to learn it.
Since Newtonian mechanics is far simpler and more intuitive than relativistic mechanics, people tend to use relativity only when they're really forced to. So mechanical theories are more like tools: we use them when there's a problem that needs them, much as we use a hammer to drive a nail and a knife to cut something. In the same way, a scientist will use Newtonian mechanics for situations close to day-to-day experience, and relativistic mechanics for speeds close to light's and for astronomical distances.
The same happens with structured programming and object-oriented programming: they must also be seen as tools. Some problems are better solved with SP techniques, while others are better solved by OOP. In When to use Objects the two types of situations are outlined.
In object-oriented programming, an object is more than just data, more than just a couple of functions. Following the example of Bertrand Meyer, let's consider a common radio. It has a state and some means to alter and to consult this state. Its state is, of course, the station it is tuned to, its sound volume and its tuning. It has buttons to alter any of these dimensions of state, and usually a normal radio has some way of displaying the station frequency, and sometimes a display of the sound volume.
Usually, a normal radio also has a way to turn it on and off, often the same button, sometimes the volume button.
Now it is necessary to translate the radio elements into an object-oriented nomenclature. Since OOP isn't exclusive to radios, translating the radio elements into more general ones is a way to see how OOP can be used to solve other problems. Table 1 below shows a mapping between the simple radio elements and the more general OOP terms.
Radio | OOP
---|---
On/Off | Constructor/Destructor
Display elements | Accessors
Volume and tuning dials | Transformers
Station frequency, sound volume and tuning | State
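A minimal C++ sketch of this mapping (the class and member names are mine, chosen for illustration, not part of Meyer's example):

```cpp
#include <cstdio>

class Radio {
public:
    Radio() : frequency_(89.5), volume_(5) {}  // On: constructor
    ~Radio() {}                                // Off: destructor

    // Accessors: consult the state (the display elements).
    double frequency() const { return frequency_; }
    int    volume()    const { return volume_; }

    // Transformers: alter the state (the dials).
    void tune(double frequency) { frequency_ = frequency; }
    void set_volume(int volume) { volume_ = volume; }

private:
    double frequency_;  // State: station frequency
    int    volume_;     // State: sound volume
};

int main()
{
    Radio r;             // turning the radio on
    r.tune(102.1);
    r.set_volume(7);
    std::printf("%.1f MHz at volume %d\n", r.frequency(), r.volume());
    return 0;            // r goes out of scope: the radio is turned off
}
```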
He meant to say that to conceive classes all we should do is mirror the objects in the application domain. That is, if we are going to create an application to deal with accounts payable, a good candidate for a class is simply the account. If we are going to create a GUI (Graphical User Interface), with windows and buttons, then windows and buttons are good candidates for classes. So, in the accounts payable application there would be a class ACCOUNT, and in the application with a GUI there would be classes WINDOW and BUTTON. And as many instances of the classes --- or objects --- as needed to solve the problem.
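In C++, that mirroring might look like the following minimal sketch (the class and member names are hypothetical, chosen only to illustrate the idea):

```cpp
#include <vector>

// Mirror the application domain: an accounts payable
// application gets an ACCOUNT class...
class Account {
public:
    Account(double amount) : amount_(amount) {}
    double amount() const { return amount_; }
private:
    double amount_;
};

int main()
{
    // ...and as many instances (objects) as the problem needs.
    std::vector<Account> payable;
    payable.push_back(Account(120.50));
    payable.push_back(Account(19.90));
    return 0;
}
```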
That may sound like a good idea to some of us, but this is not what the authors Coad and Yourdon think. In their book Object-Oriented Analysis, they say of Meyer's idea: "Nuts!"
Such a strong word is very rare in the literature about object-orientation, and it surprised me then as much as it surprises you now. Coad and Yourdon state that the conception of classes in an application should be the result of a careful process of object-oriented analysis and design.
Ferocious debates like these are typical of the initial phases of all fields of technique and science. As more experience is gathered, people tend to converge on intermediate positions and to forget the extremism of their own initial ideas.
An intermediate position might be that the conception of classes should be neither a faithful mirroring of the application domain, nor a complete neglect of it. Certainly, most of object orientation's appeal lies in the possibility of mimicking some application domain behaviors at the software architecture level. However, an application is an application is an application: its purpose is not to mimic the application domain faithfully, but to reproduce some of the most useful behaviors of the problem. Not all behaviors, not even all useful behaviors. The development of an application is usually a negotiation between user and developer: some features are not implemented due to time or computer limitations, while others are added because their cost is small, because they're closer to the computer's nature or to the software tools used to implement the application.
The fact is that the software development process does not take place in the application domain's world, but in another world: that of the computer and the software tools (programming languages, compilers, libraries etc.) used to develop the application. An accounts payable application will help solve the accounts payable problem; it is not itself a form of accounts payable. Even if such an application is created by an accountant, it is still a form of software development, not a form of accounting practice. The world of software development has its own laws, which must be followed if we want the software being developed to be a good one.
So, going back to the question "how can we conceive classes and objects, given an application domain?", a good idea seems to be to dive deeply into the application domain in the first steps of the software development process, and try to capture as much of it as we can. Then sit back and try to fit this application domain's conceptual architecture into a good software architecture.
An application programmer who creates accounts payable software will receive only this minimal help from it. For this programmer, a piece of software more directly related to the accounts payable problem would be more useful: for instance, a software library with functions for the most common tasks of accounts payable processing.
Since that library would "know" more about the accounts payable problem, some say the library is more intelligent. The extreme limit of this would be a monolithic function, say AccountsPayable(), that would take care of each and every aspect of the problem. So, your whole C++ program would be only:
```cpp
int AccountsPayable(int argc, char** argv);

int main(int argc, char** argv)
{
    return AccountsPayable(argc, argv);
}
```
Very good, isn't it? An application programmer would have only to write a few lines of code and everything would be done.
Very good, but very limited, for no matter how configurable AccountsPayable() may be through command line parameters or setup files, it will have limited flexibility. Even with configuration parameters, only a limited number of ways of doing accounts payable will be possible through AccountsPayable(): there's a strong possibility that some particular form is missing, even if a minor one.
However, when we develop a specific application in source form, it is usually because the existing applications in compiled form don't do what we want, and so we need maximum flexibility; that can't be achieved with a solution as monolithic as this one. The single-function approach is as flexible as a closed application. This is the expression of a programming saying:
In this application, if we need to change something in AccountsPayable() that can't be changed through its parameters, we'll have to change AccountsPayable()'s source code. In that case, of course, we won't be reusing it. A better idea is to have pieces of code not as minimal as printf(), but not as maximal as our AccountsPayable(). That relative size is called granularity: the smaller a piece of code, the lower its granularity; and vice versa, the larger a piece of code, the higher its granularity.
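As a sketch of that intermediate granularity (the function names are hypothetical, and the bodies are stubs just so the example compiles):

```cpp
#include <cstdio>

// Instead of one monolithic AccountsPayable(), the library exposes
// medium-grained pieces that the application programmer composes freely.
void LoadAccounts(const char* filename) { std::printf("loading %s\n", filename); }
void ApplyLateFees(double daily_rate)   { std::printf("fee rate %.2f\n", daily_rate); }
void PrintReport()                      { std::printf("report\n"); }

int main()
{
    LoadAccounts("accounts.dat");
    ApplyLateFees(0.01);  // each step can be reordered, replaced or skipped
    PrintReport();
    return 0;
}
```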
The golden goal of reuse is to maximize reuse without minimizing flexibility. Unfortunately, with any reuse we lose some flexibility: maybe a menu can't be changed, maybe some file format will not contain every piece of information we want, etc. But then we save a great deal of programming time, as some features of application programming come already implemented. This balance is very important, as it limits the usefulness of the reused elements. If the package can't do, or what's worse, won't allow us to do, something we consider very important, or if it does it too slowly, then it will be useless for us, at least for a particular application.
If that's the case, we'll have to find another package to reuse, or maybe develop our own package, using elements like printf(), which have minimal granularity but maximal flexibility.
The idea of encapsulation is that the data (state) of the object we try to reuse is isolated from the other parts of our software. Since the objects to reuse are insulated from the exterior, we can use them as small building blocks of our software, and we can be sure that unexpected side effects will be minimized.
The word module is the programming name for such building blocks. Programming done with modules is called modular programming, and the first forms of modular programming predate OOP. For instance, languages like C can do a good job at modular programming. However, OOP and the object-oriented programming languages (OOPLs) express the module concept more clearly. Sometimes, when using a modular language without object-oriented features, it is not so easy to see, during the creation of a module, what is to be shared and what is to be isolated from other modules.
The hard-to-understand FORTRAN IV programs and their evil COMMON blocks that I had to painfully debug are an eloquent witness of this. (Click here for a brief note on the FORTRAN IV programming language and its COMMON blocks.)
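As a minimal C++ sketch of encapsulation (the class is mine, for illustration): the state is private, so other modules can only touch it through the public interface, and no COMMON-block-style sharing of raw data is possible:

```cpp
class Counter {
public:
    Counter() : count_(0) {}
    void increment() { ++count_; }          // the only way to alter the state
    int  value() const { return count_; }   // the only way to consult it
private:
    int count_;  // encapsulated state, invisible to other modules
};
```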
The idea of an encompassing term is possible and sensible, since these technologies are closely related. In very gross terms, one can say they differ only in the level of detail at which they approach the problem of software development. One can say OOA is done at the macroscopic level, without caring too much about implementation details, and that OOP is done at the microscopic level, not only close to the implementation: it is the implementation technique. OOD is the gateway between the two: we can say it is done at the mesoscopic level.
That similarity between the three levels of object-oriented software development seems obvious, but it is not: actually, it is due to the fact that the notion of object is a unifying concept. Previous methods, like structured analysis (SA), structured design (SD) and structured programming (SP), had no such unifying concept: to know a lot about SP wouldn't help very much in learning SA or SD.
So, the idea of the object is just one more Columbus' egg: it looks very easy, even obvious, once we see it done. However, getting there was difficult and non-obvious. Like the discovery of the American continent by Christopher Columbus.
To present the Object Technologies (OT), we'll use an analogy with the several stages of a building's construction. The idea of a building isn't casual: if all we're going to do is repair broken tiles in the kitchen, only masons are necessary. In the same way, in a very small application, no analysis or design would be strictly necessary.
In terms of software development, this phase tries to arrive at a specification, which states what characteristics the building should have in terms of architectural style and number of floors and, of course, the available budget and the expected deadline.
When this specification phase is done, the architect goes back to the office to detail the project a little more. Then he returns to the client for the approval of the building plans. The project is eventually approved, after some minor modifications.
Then the architect details the project a little more, this time to ask a civil engineer to design the building's physical structure.
The dialogue between architect and engineer will be much more technical than the one between architect and client. And a seasoned architect will actually conceive the building with an eye on its physical structure, to make the engineer's work easier.
When the engineer begins working on the project, it becomes more concrete and detailed. That's a consequence of the engineer's work: not only to conceive in abstract terms, but to determine the physical means of making the project feasible.
On the software side of our metaphor, there is also thinking in general terms, and indeed people talk about architectures: software architectures, of course. The term means how the larger software units will relate to each other. However, the person who conceives a software architecture isn't usually called an architect, but an analyst. The idea here is that a software analyst is a person who understands a system that already works outside the software world. If there is no existing system, people usually try to create in software an ideal system, respecting of course the limits of time and budget available for the software development.
In any case, the systems analyst must try to understand how the real or ideal system works, and this is done by breaking the system into its larger units and understanding the relationships between them. The analysis is then brought to a finer level, and the analyst asks the user more detailed questions. This process is repeated till the software analyst has a clear enough picture of the system. Then begins a synthesis phase, when the analyst proposes to the client his understanding of the system. The analyst will present a new system, equivalent to the first one in general terms, but conceived with a view to software implementation.
For instance, if some routine of the original system presupposes the intelligence or creativity of a human being, then in the new system the software analyst should substitute for it another, simpler routine, one that even a computer is able to execute.
That proposal of a software system will be documented in texts, diagrams, and even program fragments called a prototype, which will have only some screens and no heavy code behind them. It is as if the prototype were a movie set, in which houses are represented by their façades only. Usually, at this step the client has the equivalent of an architect's croquis, and he's able to think more about the project. So the analogy of software development with a building construction becomes stronger, and here, as well as there, the user is able to discuss with the software analyst the details of what he expects the software to be. Here, as well as there, the experience of the software analyst will help the client think about every aspect of the software. The software analyst will hear what the client has to say about the system, as a means to establish the general terms of the software. Even in these general terms, the software analyst will be able to help the client think about the software project, explaining why some ideas must be rejected as unfeasible, suggesting more pragmatic or cheaper alternatives, and changing his or her points of view if the client is able to present better arguments.
When they finally discuss all the details of the software project in these general terms, the software analyst goes back to the office and prepares a formal proposal. This proposal will be very detailed and will contain a formal statement of the software's purposes, called a software specification, or just a specification. Besides the specification, this proposal will state the software's deadlines and, of course, its cost.
That specification is discussed by client and analyst, but mostly in the terms that impact cost and deadline: the technical side should already be stable. Usually, the software analyst will present several alternatives, and the client will choose among them.
The larger units of the system will be translated into classes and objects, and the software analyst will be able to enrich the client's specification with enough information to ease the software designer's work. For instance, when the specification says something like "the dept. X delivers information i to dept. Y", it will be translated into something like "the object X, of class Dept, will write the information i in database d, so that it can be read by object Y, also of class Dept."
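In C++, that translated sentence might look like this rough sketch (the class and member names are hypothetical, and the "database" is reduced to an in-memory stand-in):

```cpp
#include <map>
#include <string>

// A toy stand-in for database d.
typedef std::map<std::string, std::string> Database;

class Dept {
public:
    Dept(const std::string& name) : name_(name) {}

    // "the object X ... will write the information i in database d"
    void deliver(Database& d, const std::string& info) const { d[name_] = info; }

    // "...so that it can be read by object Y"
    std::string read(const Database& d, const Dept& from) const
    {
        Database::const_iterator it = d.find(from.name_);
        return it == d.end() ? std::string() : it->second;
    }

private:
    std::string name_;
};

int main()
{
    Database d;
    Dept x("X"), y("Y");
    x.deliver(d, "information i");
    std::string i = y.read(d, x);   // Y reads what X delivered
    return 0;
}
```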
It is important to note that, while in a building construction the architect can sometimes forget about engineering conveniences when creating the building's form, in software development the systems analyst conducts all of the analysis in the light of a computer implementation. It is not that the analyst will conduct the implementation together with the analysis, and it is not that he will talk to the client in the very detailed terms of a computer program. No, the talks between client and analyst will be held in generic, non-technical terms. However, the analyst shouldn't forget for a moment that the program will eventually run on a computer system that is -- we all know it -- not very bright or creative.
So the analyst's role will be conducted with an eye towards making the software implementation easier, remembering that the software will run on a computer system, as well as towards easing the implementation for the techniques the software designer will use.
His most important book is Object-Oriented Software Construction, a fundamental book. A second edition of it has recently been released.
C Chest, Holub's column, was so influential worldwide that it would not be an extreme exaggeration to state that most old C programmers who didn't come from the Unix world learned their C from it.
Possibly due to his academic links, Holub never forgets the concepts behind most programming techniques. Of course, being also an industry man, he's always interested in the practical aspects of things. So his tips actually work. Maybe another consequence of Holub's academic links is his critical spirit. Differently from other equally proficient programmers who receive media coverage, he's not afraid of criticizing tools by Microsoft and by other big-shot tool makers, while acknowledging their qualities.
After his start as a C programmer, Holub migrated to C++, like everyone else. He documented his C++ apprenticeship in a very interesting book, C + C++: Programming with Objects in C and C++. The book's highlight is an attempt to do OOP in C that is led to its limits, showing that real-world OOP can be done much more easily in an OOP language; and that leads to C++ programming. This approach has the interesting side effect of showing how things are done under the covers in C++: how constructors are called etc. In that sense, this book precedes Lippman's Inside the C++ Object Model, though possibly without the same specialization and a little less up to date.
Holub has also worked with the OO expression of the Win32 GUI elements. The Win32 API is unanimously acknowledged as a very complex piece of software, so the idea is to use object-oriented techniques to organize, structure and simplify Win32 programming. Microsoft, Borland and lots of others have tried to do this. About Microsoft's attempt, called MFC, or "Microsoft Foundation Classes", Holub said that its authors are very good OO programmers, but bad OO designers. So, in a series of papers published in Microsoft Systems Journal, Holub presented a better OO design for the Win32 GUI, and also a way to do persistent storage, OO jargon for the automatic storage and retrieval of objects that knows about the classes of the objects stored.
Also like every C++ programmer and his sister, Holub became interested in Java. His most recent works are the documented creation of a Java Virtual Machine and several courses on Java.
Allen Holub information at Holub & Associates.
If software engineering were engineering in the usual sense, Martin would have the right engineering approach: he tries to find useful techniques and concepts in authors of several different schools. For instance, though he's basically a C++ type, he's not afraid of quoting the French author Bertrand Meyer, the creator of Eiffel. That's even more surprising when you remember that Robert Martin is a heavy user of the Booch method, as documented in his book Designing Object-Oriented C++ Applications Using the Booch Method. It would be even more remarkable if Martin were American, as American programmers tend to prefer American authors.
So, let me tell you: he's American.
Martin's approach to software engineering isn't limited to OO authors either: he's also a reader of Barbara Liskov, the creator of the modular programming language CLU and an author writing on modular programming.
Possibly the reasons for Martin's openness to different authors lie in his programming background: he began his OO development career as a Smalltalk programmer, in close contact with one of the purest OO languages available.
Maybe that proximity to Smalltalk's purity can also explain why Martin's proposals are so coherent, in spite of the different authors he adopts. Martin's books and many papers can be read as one of the most complete and critical introductions widely available. Most of Martin's papers are available for free download at his site, in Object Mentor Publications.
Robert Martin is mostly a C++ instructor and mentor, so most of his works are presentations of someone else's work. Maybe with time the originality of his fusion of other authors' work will grow into original work of his own.
Here is the reference for Martin's book:
Designing Object-Oriented C++ Applications Using the Booch Method
New Jersey: Prentice Hall; xxi + 528 pp.; 1994
ISBN 0-13-203837-4
The design of several applications is presented in detail, from a cross-reference tool to a security management system for buildings. The approach is to introduce theoretical concepts as they are needed to solve practical problems. That's certainly a very good way to present ideas that otherwise would be too abstract.
You can read more about the above book --- and even buy it --- at Amazon.com.
Robert Martin's Object Mentor Associates.
It is free, but it is not public domain software. GNU software carries what is called a copyleft, as opposed to the common copyright. This idea, exposed in quite readable legalese in the file COPYING, briefly states that everyone can freely copy and use GNU software. However, if you change it, you must offer your changes to the public under the same terms.
It is amazing, but even being free, GNU software tends to be faster and more stable than commercial software. The gcc compiler includes extensions to the ANSI language, collectively called GNU C, which allow, for instance, arrays whose size is known only at runtime, as in:
```c
#include <math.h>

void some_func (const int buf_siz)
{
    /* GNU C lets the array size be computed at run time; the size
       expression must still be integral, hence the cast */
    char buff [ (int) exp (3 * log (buf_siz)) ];
    /* ... stuff */
}
```

Besides ANSI C and GNU C, gcc also compiles C++ and an interesting language called Objective-C, an object-oriented, Smalltalk-like extension of C. Objective-C is the native language of the NeXT operating environment. We'll probably hear a lot about it in the near future, as Apple has bought NeXT.
The latest version of gcc is gcc-2.8.1.tar.gz, released in March 1997. It is a huge file, with more than 8 MB of source code. So, unless you have a fast, stable connection, don't try to download it in one piece: look for another site that may have it in slices.
The gcc distribution is so big because it includes the full source for the gcc optimizing compiler and runtime library, and makefiles for several platforms.
It is amazing again, but gcc is easier to install than several commercial packages I've seen.
A computer science student called Linus Torvalds was investigating the multitasking capabilities built into the Intel 386 chip. He showed the interesting code he had developed to one of his teachers, who offered it to the public on the university's ftp server.
Time passed, and Torvalds' notorious programming abilities, as well as his leadership, transformed the program fragment into a whole Unix-like operating system: a free operating system developed wholly over the Internet, by a group of other equally proficient programmers.
One of Linux's best qualities is its stability: the system is very robust, and a crash is a very unusual event in the life of a Linux-powered machine. Another of Linux's best qualities is its low demand on the machine: technical folklore says that a 386 computer ruled by Linux can do things as fast as a Pentium under Windows NT. Add to this the availability of very good to excellent free software, like the X11 packages, the gcc compiler, database management systems, etc.
Linux's good technical qualities have attracted a lot of users, both dilettante and professional. Linux already has several companies offering technical support for it, as well as several commercial software packages.
Several WWW sites, both commercial and academic, offer information and links about the Linux system.