Software Reading List

Table of Contents

  1. Miscellaneous
    1. "High-Pressure Steam Engines and Computer Software" by Dr. Nancy Leveson
  2. Personal Improvement
    1. "How to Defend an Unpopular Schedule" by Steve McConnell
  3. Process Improvement
    1. "Incremental Process Improvement" by Rick Clements
    2. "Management Commitment Vital to Ensure Process Improvement" by Neil Potter
    3. "Writing (SMART) Achievable Goals" by Mary Sakry
    4. "Steamrolling the Organization With Process, or Is There a Better Way?" by Neil Potter and Mary Sakry
    5. "Compelling Improvement" by Neil Potter
    6. "Process Improvement Meets `Peopleware' - People are the key asset of any organization" by Mary Sakry
    7. "Some Bright Ideas for SEPGs" by Neil Potter
    8. "Revising Your Approach to Process Improvement" by Mary Sakry
    9. "Planning Improvement" by Neil Potter
    10. "Process Improvement Is a Contact Sport" by Mary Sakry
    11. "Real Process Improvement-Getting What You Need" by Mary Sakry
    12. "Create Change or Let it Happen to You" by Rick Clements
  4. Project Management
    1. "Haste Makes Waste When You Over-Staff to Achieve Schedule Compression" by Doug Putnam
    2. "Simple Project Tracking Approach Puts You in Control of Your Project" by Doug Putnam
    3. "The (Almost) Perfect Software Project Using the SEI Core Measures" by Jim Greene
    4. "Classic Mistakes" by Steve McConnell
    5. "Software Quality at Top Speed" by Steve McConnell
    6. "Upstream Decisions, Downstream Costs" by Steve McConnell
    7. "Software Development Checklists" by Steve McConnell
    8. "Software Development Plan" by Steve McConnell
    9. "Sample Release Checklist" by Steve McConnell
    10. "Complete List of Survival Checks" by Steve McConnell
    11. "Evolutionary Project Managers Handbook" by Tom Gilb
    12. "Requirements-Driven Management: A Planning Language" by Tom Gilb
  5. Risks and Risk Management
    1. "The Therac-25 Accidents" by Dr. Nancy Leveson
    2. "Coping With Risk" by Mary Sakry
    3. "Taxonomy-Based Risk Identification"
  6. Test Automation
    1. "Improving the Maintainability of Automated Test Suites" by Cem Kaner
    2. "Test Automation Snake Oil" by James Bach
  7. Testing
    1. "Software Testing 101" by Rick Clements
    2. "Impossibility of Complete Testing" by Cem Kaner
    3. "Testing During Rapid Change" by Randall Rice
    4. "Testing Techniques News" (TTN) Archive
    5. "Everything I Know About Testing I Learned From the Bible" by Randy Rice, CQA, CSTE
    6. "What Testers Can Do about Technical Debt" by Johanna Rothman

This list contains articles on software engineering and process improvement that I've found interesting.  (Or material that I am in the process of reading.)  It includes an abstract or long quote for each article.

Miscellaneous

"High-Pressure Steam Engines and Computer Software" by Dr. Nancy Leveson (http://sunnyday.mit.edu/steam.pdf)

As different engineering disciplines have matured, they have become more regulated, largely in response to accidents.  This paper traces the evolution of the design of high-pressure steam engines through this process, then compares it to the current state of software engineering.  Software is being used in more and more safety-critical and financially critical functions.  When will software-related failures drive the same changes in software engineering?

[This is a pdf file.]

Personal Improvement

"How to Defend an Unpopular Schedule" by Steve McConnell (http://www.construx.com/stevemcc/ieeesoftware/bp03.htm)

Current estimation practices are a problem, but Steve explains that current scheduling practices are the more serious problem.  Philip Metzger observed 15 years ago that developers were fairly good at estimating but were poor at defending their estimates (Managing a Programming Project, 2d Ed., 1981).  Steve hasn't seen any evidence that developers have gotten any better at defending their estimates in recent years.  Steve looks at ways of better defending our estimates.

Process Improvement

"Incremental Process Improvement" by Rick Clements (http://www.oocities.org/rick_clements/process.htm)

This paper looks at moving from an ad-hoc testing process to a defined process.  In an ad-hoc process, there isn't time to collect data for metrics.  How can the process be improved without data or metrics?  The good news is that because the problems are big, they are easy to identify.

"Management Commitment Vital to Ensure Process Improvement" by Neil Potter (http://www.processgroup.com/fv2n2.htm#PI)

Getting management commitment requires the following steps:

  1. Determine the manager's needs
  2. Present the process improvement idea in a way that shows how the manager's needs can be met
  3. Determine and address the manager's concerns and fears about process improvement
  4. Verify that process improvement is meeting the manager's needs.

"Writing (SMART) Achievable Goals" by Mary Sakry (http://www.processgroup.com/fv4n2.htm#PI)

For goals to be achievable, they need to be simply stated and specific, measurable, as-if-now, reasonable, and timed toward what you want.

"Steamrolling the Organization With Process, or Is There a Better Way?" by Neil Potter and Mary Sakry (http://www.processgroup.com/february99.html#CS)

This article describes two useful techniques for introducing change.  The first is introducing the change to those most receptive first.  The second is introducing the change in increments.

"Compelling Improvement" by Neil Potter (http://www.processgroup.com/fv3n1.htm#CS)

Determine the compelling purpose for process improvement in your organization.  Use your customer needs, project goals and existing problems to help determine the purpose of your process improvement program.  Make sure that the key opinion leaders agree on the purpose of the process improvement program.  Keep the purpose statement in front of the organization when working on process improvement.

"Process Improvement Meets `Peopleware' - People are the key asset of any organization" by Mary Sakry (http://www.processgroup.com/fv1n1.htm#CS)

When we think of process improvement, we must consider the balance of three components: process, people, and tools.

"Some Bright Ideas for SEPGs" by Neil Potter (http://www.processgroup.com/fv1n1.htm#PI)

A list of ideas for making your SEPG (Software Engineering Process Group) more effective.  (These ideas will work for most groups trying to introduce change.)

"Revising Your Approach to Process Improvement" by Mary Sakry (http://www.processgroup.com/fv5n1.htm#CS)

"Planning Improvement" by Neil Potter (http://www.processgroup.com/fv4n2.htm#CS)

The article walks through the overall flow of the strategic planning process.

"Process Improvement Is a Contact Sport" by Mary Sakry (http://www.processgroup.com/fv4n1.htm#PI)

An important part of process improvement is for the coach to motivate the team to make the change.

"Real Process Improvement-Getting What You Need" by Mary Sakry (http://www.processgroup.com/fv3n1.htm#PI)

A problem is that people often see external standards or models such as the Software Engineering Institute's Capability Maturity Model (CMM) or ISO 9000 as goals themselves.  They become too focused on the activities listed and lose sight of the intent, concentrating on proving they have achieved a standard, rather than ensuring that they are receiving the intended benefits of these practices.

"Create Change or Let it Happen to You" by Rick Clements (http://www.oocities.org/rick_clements/change/change.htm)

Change will happen.  We have the chance to direct that change to improve our processes.  Change isn't easy.  This paper is a presenter's script for a 45-minute to one-hour workshop on creating change.  PowerPoint slides may be downloaded.

Project Management

"Haste Makes Waste When You Over-Staff to Achieve Schedule Compression" by Doug Putnam (http://www.qsm.com/risk_02.html)

Although adding people to a project might seem like a straightforward remedy for schedule compression, this page shows that it is often counterproductive.

"Simple Project Tracking Approach Puts You in Control of Your Project" by Doug Putnam (http://www.qsm.com/risk_01.html)

This paper shows how to track several project measures.  The project is tracked graphically, showing the projected metrics and the acceptable and unacceptable deviations over time.

"The (Almost) Perfect Software Project Using the SEI Core Measures" by Jim Greene (http://www.qsm.com/perfectseixpir.pdf)

This paper looks at managing a software project with the SEI's four core measures.  These measures are software size, time, effort and defects.

[This is a pdf file.]

"Classic Mistakes" by Steve McConnell (http://www.construx.com/stevemcc/ieeesoftware/bp05.htm)

This is part of Steve's best-practices series in IEEE Software.  It lists 10 of the top problems with software projects and explains why each is a problem.  I can remember participating in many examples of the problems on this list.

"Software Quality at Top Speed" by Steve McConnell (http://www.construx.com/stevemcc/articles/art04.htm)

Some project managers try to shorten their schedules by reducing the time spent on quality-assurance practices such as design and code reviews.  Some shortchange the upstream activities of requirements analysis and design.  Others - running late - try to make up time by compressing the testing schedule, which is vulnerable to reduction since it's the critical-path item at the end of the schedule.  These are some of the worst decisions a person who wants to maximize development speed can make.

Barry Boehm reported that 20 percent of the modules in a program are typically responsible for 80 percent of the errors.  On its IMS project, IBM found that 57 percent of the errors clumped into 7 percent of the modules.  Studies have found that reworking defective requirements, design, and code typically consumes 40 to 50 percent of the total cost of software development (Jones 1986).  That 95-percent-removal line - or some point in its neighborhood - is significant because that level of pre-release defect removal appears to be the point at which projects achieve the shortest schedules, least effort, and highest levels of user satisfaction (Jones 1991).

Testing thus becomes the messenger that delivers bad news.  The best way to leverage testing from a rapid-development viewpoint is to plan ahead for bad news - set up testing so that if there's bad news to deliver, testing will deliver it as early as possible.  Reviews vary in level of formality and effectiveness, and they play a more critical role in maximizing development speed than testing does.

"Upstream Decisions, Downstream Costs" by Steve McConnell (http://www.construx.com/stevemcc/articles/art08.htm)

Barry Boehm and Philip Papaccio found that an error created early in the project, for example during requirements specification or architecture, costs 50 to 200 times as much to correct late in the project as it does to correct close to the point where it was originally created.  Why are errors so much more costly to correct downstream?  One sentence in a requirements specification can easily turn into several design diagrams.  Later in the project, those diagrams can turn into hundreds of lines of source code, dozens of test cases, many pages of end-user documentation, help screens, instructions for technical support personnel, and so on.
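The fan-out described above can be turned into a back-of-the-envelope rework model.  The phase multipliers below are illustrative values I chose within the 50-to-200 range Boehm and Papaccio report; they are not figures from the article.

```python
# Illustrative cost-to-fix multipliers for a requirements error,
# by the phase in which it is found (relative to fixing it at the
# point of creation).  The late-phase values are assumptions within
# the 50-200x range reported by Boehm and Papaccio.
MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "post-release": 150,
}

def rework_cost(base_hours, phase_found):
    """Hours to fix a requirements error caught in `phase_found`."""
    return base_hours * MULTIPLIER[phase_found]

# A one-hour requirements fix becomes a 150-hour fix after release.
print(rework_cost(1, "requirements"))  # 1
print(rework_cost(1, "post-release"))  # 150
```

The model is crude, but it makes the argument concrete: the cheapest place to fix a requirements error is in the requirements.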

"Software Development Checklists" by Steve McConnell (http://www.construx.com/chk.htm)

This page contains links to pages with checklists for many software activities.

"Software Development Plan" by Steve McConnell (http://www.construx.com/survivalguide/sdp.htm)

This page contains an example software development plan.

"Sample Release Checklist" by Steve McConnell (http://www.construx.com/survivalguide/releasechecklist.htm)

This page contains an example checklist for releasing software.  It assumes the software is application software distributed on a CD, but it is a good list anyway.

"Complete List of Survival Checks" by Steve McConnell (http://www.construx.com/survivalguide/surchk-all.htm)

This is the complete checklist from the "Software Project Survival Guide."

"The Evolutionary Project Managers Handbook" by Tom Gilb (http://ourworld.compuserve.com/homepages/KaiGilb/EvobookRTF.ZIP)

Evolutionary Project Management ("Evo") is a significant step forward in managing complex projects of all kinds.  It promises and delivers earlier delivery of critical results and on-time delivery of deadlined results.  It can be used for getting better control over quality, performance and costs than conventional project management methods.

The key idea of Evo is "learning" and consequent adaptation.  It is about learning about realities as early as possible and taking the consequences of any project reality, external or internal, and making the most of that information.

[A zipped Word document.]

"Requirements-Driven Management: A Planning Language" by Tom Gilb (http://www.stsc.hill.af.mil/Crosstalk/1997/jun/requirements.html)

The article lists the ideas on which RDM is based.

Risks and Risk Management

"The Therac-25 Accidents" by Dr. Nancy Leveson (http://sunnyday.mit.edu/papers/therac.pdf)

Between June 1985 and January 1987, the Therac-25, a computer-controlled radiation therapy machine, massively overdosed six people.  These accidents have been described as the worst in the 35-year history of medical accelerators.

[This is a pdf file.]

"Coping With Risk" by Mary Sakry (http://www.processgroup.com/1august.html#CS)

This article talks about a risk management meeting.  The meeting addresses the following points:

  1. Risk identification
  2. Risk analysis
  3. Plan to mitigate risks
  4. Review risks

"Taxonomy-Based Risk Identification" (http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr06.93.pdf)

This provides a standard taxonomy of software risks to aid in risk management.  This isn't the best place to start with risk identification, but it is useful in identifying additional risks.  (Starting with this may cause you to miss project specific risks.)

[This is a pdf file.]

Test Automation

"Improving the Maintainability of Automated Test Suites" by Cem Kaner (http://www.kaner.com/lawst1.htm)

Automated black box, GUI-level regression test tools are popular in the industry.  According to the popular mythology, people with little programming experience can use these tools to quickly create extensive test suites.  The tools are (allegedly) easy to use.  Maintenance of the test suites is (allegedly) not a problem.  Therefore, the story goes, a development manager can save lots of money and aggravation, and can ship software sooner, by using one of these tools to replace some (or most) of those pesky testers.

"Test Automation Snake Oil" by James Bach

Make no mistake.  Automation is a great idea.  To make it a good investment, as well, the secret is to think about testing first and automation second.  If testing is a means to the end of understanding the quality of the software, automation is just a means to a means.  You wouldn't know it from the advertisements, but it's only one of many strategies that support effective software testing.

Testing

"Software Testing 101" by Rick Clements (http://www.oocities.org/rick_clements/SWtest/test101.htm)

This paper provides a 45-minute overview of software testing.  It's intended for people who are new to software testing or to managing a software test group, regardless of their experience with software design.  It talks about requirements, configuration management, test plans, test cases, test procedures, bug tracking, and test reports.  PowerPoint slides from the presentation may be downloaded.

"Impossibility of Complete Testing" by Cem Kaner (http://www.kaner.com/imposs.htm)

This paper explores three themes:

  1. I think that I've figured out how to explain the impossibility of complete testing to managers and lawyers, with examples that they can understand.  These are my notes.
  2. A peculiar breed of snake-oil sellers reassure listeners that you achieve complete testing by using their coverage monitors.  Wrong.  Complete line and branch coverage is not complete testing.  It will miss significant classes of bugs.
  3. If we can't do complete testing, what should we do?  It seems to me that at the technical level and at the legal level, we should be thinking about "good enough testing," done as part of a strategy for achieving "good enough software."
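Kaner's second point is easy to demonstrate with a hypothetical example (the function and test suite below are mine, not from the paper): two tests achieve complete line and branch coverage of a small function, yet the input that crashes it is never exercised.

```python
def scale(reading, factor):
    """Normalize a sensor reading by dividing by factor."""
    if reading < 0:
        return 0.0
    return reading / factor

# These two tests execute every line and both branches of scale(),
# so a coverage monitor reports 100% line and branch coverage.
assert scale(-5, 2) == 0.0
assert scale(10, 2) == 5.0

# Yet factor == 0 is never tried; scale(10, 0) raises
# ZeroDivisionError -- a bug that "complete" coverage missed.
```

Coverage measures which code ran, not which inputs, states, or timings were tried, which is why it cannot equal complete testing.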

"Testing During Rapid Change" by Randall Rice (http://www.riceconsulting.com/arttest.htm)

Is It Possible to Completely Test During Rapid Change?

Actually, no.  However, that's a trick question because in most cases it is not possible to completely test software even in stable environments.  The essence of this question might be to ask, "Is it possible to test effectively during rapid change?"  Can we expect to make the best use of people and other resources to test software?  Can we expect to find the expected number of defects?

"Testing Techniques News" (TTN) Archive (http://www.testworks.com/News/TTN-Online/)

This is the archive of a monthly magazine on software testing.  It includes testing techniques, testing issues, and software conferences.

"Everything I Know About Testing I Learned From the Bible" by Randy Rice, CQA, CSTE (http://www.riceconsulting.com/bible.htm)

"What Testers Can Do about Technical Debt" by Johanna Rothman, Part #1 (http://www.stickyminds.com/r.asp?F=W3629) and Part #2 (http://www.stickyminds.com/r.asp?F=W3643)

If you're a tester, how can you recognize technical debt?  The articles list indicators to look for.


Rick's home page

Last updated: 2003/02/23 23:12:08 GMT