Judge Not, Lest Ye be Judged: 
An Examination of Over 25 Years of DCI Judging

Introduction
History and Background
Analysis
Argument
Conclusions
Appendix I
Appendix II
References


Introduction

No subject is more hotly contested among drum and bugle corps fans and members than that of
judging. In the stands, fans voice their displeasure at the judges' decisions in the form of loud
"boos." In the parking lot, corps' staff members scan through recap sheets, pausing from time to
time to lament the "ignorance" of the judges. On corps buses members discuss the judge that
"has it in for us" or "picked on us unfairly." And on the Internet, people from all walks of life
exchange stories of woeful and inadequate judging and conspiracy. If Drum Corps is indeed a
community, then these are our "urban legends":
     
"In 1988 Madison won because they went to Europe." 
"In 1989 SCV beat Regiment because they got shafted the year before." 
"In 1991 Star won because they paid the judges." 

The list of drum corps conspiracies is long and intricate.  Many people tell these stories with
disbelief, passing them along as one would pass a ghost story, yet some believe in these stories
as the gospel truth.  It is sometimes difficult to look at the events of the past few years and try to
believe that there is no conspiracy favoring certain corps, and keeping others down.  Recent
developments, including an incident where a judge was found to have a paper on him that
allegedly had the competing corps' scores pre-determined, have not helped the issue. 

As difficult as it may be to believe, there is no grand conspiracy that keeps certain corps down
and certain corps near the top.  Logic alone dictates this.  A conspiracy of this nature would have
to be kept secret by not just one or two individuals, but many honest and hard-working judges,
some of whom are alumni of some of the corps people believe are "slammed down" by the
conspiracy.  It simply wouldn't work.  What we do have is a judging system that has at times
obscured rather than illuminated.  Donald Angelica, longtime DCI judge and member of the DCI
Hall of Fame once said, "I'd rather have good judges and a bad system, than bad judges and a
good system." 

     This document is prepared as a sort of case study of some of the ills plaguing drum corps
judging, an analysis of past systems, and a suggested solution for the future of the activity.  In
doing research for the preparation of this paper, it became evident that there are some ground
rules that must be established. First, we have to realize that judging, in this art form or in any
other, is essentially a "beauty contest."  There are no set standards or scientific measurements by
which we can judge corps. There are no robotic judges.  There is no computer software that can
easily and reliably count the total number of mistakes in a show, nor would we use one even if
such were available. 

As long as we understand that the process of judging relies on an inherently flawed idea  (that art
can be judged), we can begin to approach the system with a better frame of reference.  A judge,
being a human being, cannot help but be biased and flawed. Judges can be moved by personal
experiences, moods, health, friendship, and/or loyalty to assign numbers that may not be what
the crowd and/or members would hope. It is the same way in every activity of mankind, be it the
Miss America pageant or an election. With this said, the system can still be tweaked so that the
potential for mistakes is minimized, and the results are more reflective of the "any given night"
philosophy DCI would like to establish. It is this researcher's sincere hope that this document
will shed more light on the subject.

History and Background

Since its inception in 1972, Drum Corps International (hereafter referred to as DCI) has used
several judging systems in an attempt to improve its reliability and keep up with the times. From
1972 through the 1983 season, DCI used what is now known as the "Tick System." The Tick
System was used to determine the performance scores for the Brass, Drums, and Marching and
Maneuvering captions. "Ticks" were mistakes, and were worth from one tenth of a point for
individual mistakes, to half a point for unit mistakes. 

The first use of the Tick System in 1972 involved the following: 

-Two judges for "Marching and Maneuvering." 
-Two judges for the "Drums" caption. 
-Two judges for the "Bugles" caption. 
-One judge for Content/Analysis. 
-One judge for M&M General Effect. 
-One judge for Bugles General Effect. 
-One judge for Drums General Effect. 
-One judge for timing and penalties. 
-One judge to tabulate. 

Total: 12 judges 

In the execution captions, two judges per caption stood on opposing sides of the field. The show
would start with a pistol shot from the Timing and Penalties Judge, and the 6 judges in charge of
the execution captions would tally ticks for 11 « minutes. The judges would write on a
clipboard, identifying the section that had caused the tick so that they could correct the problem,
which also slowed the judge down so that corps would not receive lower scores (Mitchell, 1997). 

At the 11-1/2-minute mark, the Timing judge fired another pistol shot. This signified the end of
performance judging. From the 11-1/2 minute mark to the 13-minute mark, corps were judged
on GE sheets only. Each judge's sheet was inspected by the Tabulator, each tick subtracted from
a perfect score in that category, then the two scores for each caption were averaged for the final
caption score, except in the case of the "Bugles," which also had an additional "Content
Analysis" sub-caption. 
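The tabulation just described amounts to simple arithmetic, and can be sketched in code. The following Python fragment is purely illustrative (the function names and example tick counts are this writer's invention), but the tick values (one tenth of a point per individual mistake, half a point per unit mistake) and the two-judge average follow the description above:

```python
# Illustrative sketch of Tick System tabulation. Tick values come from
# the text; names and example numbers are invented for illustration.

INDIVIDUAL_TICK = 0.1  # one tenth of a point per individual mistake
UNIT_TICK = 0.5        # half a point per unit mistake

def judge_score(perfect, individual_ticks, unit_ticks):
    """Deduct one judge's ticks from a perfect caption score."""
    score = perfect - individual_ticks * INDIVIDUAL_TICK - unit_ticks * UNIT_TICK
    return max(score, 0.0)  # a caption score cannot go below zero

def caption_score(perfect, side1, side2):
    """Average the scores of the two opposing-sideline judges."""
    s1 = judge_score(perfect, *side1)
    s2 = judge_score(perfect, *side2)
    return round((s1 + s2) / 2, 2)

# Example: a 25-point caption; judge 1 marks 8 individual and 2 unit
# ticks, judge 2 marks 10 individual and 1 unit tick.
print(caption_score(25.0, (8, 2), (10, 1)))  # 23.35
```

The mechanical simplicity of this arithmetic is exactly why the system felt "objective" to its defenders; the subjectivity, as discussed later, lay in deciding what counted as a tick in the first place.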

The Content Analysis judge analyzed the difficulty of the music, so as not to unfairly penalize
those that were more likely to "tick" because they were playing more difficult music. This
caption was worth five points on the sheets. Content Analysis (C/A, as it appeared on the sheets)
was the first subjective caption, and actually appeared two years before DCI, in 1970. It was in
its day the most difficult and controversial caption, because it asked judges to evaluate difficulty
of program.

The system was slightly revised the following year: 

-The "Marching and Maneuvering" caption dropped from 30 points to 25 points. 
-There were still 2 judges each for M&M and Bugles, but the "Drums" caption was changed to a
"Field" caption worth 15 points and a "Difficulty" caption worth 5 points. 
-Content Analysis became Musical Analysis (M/A), and was broken down as follows: 

Tone Quality and Intonation:   4.0 
Musicianship:                  3.0 
Content (Demand/Exposure):     3.0 
Total:                        10.0 

This system would endure for the next 10 years with a few modifications, but the basic concept
was the same: Judges would start a corps with a full score, then deduct points as they saw or
heard mistakes. Judges in charge of the build-up captions were given scales upon which to
evaluate corps. Before long, the Percussion and Visual captions followed with the addition of their own
build-up analysis captions. Percussion Analysis appeared in 1978; Visual Analysis appeared in
1980, bringing up the total number of judges to 14. This was also a year of major changes for the
activity: the rules were changed to eliminate the starting and finishing lines, grounding
equipment, and color presentations. After years of prohibitive rules, corps were now free to
experiment with varying visual programs. 

The results were immediate and explosive, much to the chagrin of those that bemoaned the loss
of "tradition." The slow demolition of the tick system came to a climax after the 1983 season.
Rising problems with the cost of judges combined with the new mindset in the activity to lead to
a re-thinking of the judging process. When judges started using tape recorders to aid in the
conveyance of their evaluation, corps staffs were able to see at long last the discrepancies
inherent in the system. What was a tick?  What was not a tick?  Were some mistakes ticks? 
Were all ticks mistakes?  (Mitchell, 1997)

The Tick System was finally abandoned in 1984 for a completely new system that involved the
building up of points. "Marching and Maneuvering" was replaced in favor of "Visual," "Bugles"
was replaced by "Brass," "Drums" with "Percussion," and "Color Guard" with "Auxiliary." In this
system, 9 judges were in charge of the following categories: 

GE Brass            15 points
GE Percussion       10 points 
GE Visual           15 points 
Field Brass         10 points 
Ensemble Brass      10 points 
Field Percussion    10 points 
Ensemble Percussion 10 points 
Field Visual        10 points 
Ensemble Visual     10 points 

The change was astounding: The 1st place score jumped from a 94.4 in 1983, to a 98.0 in 1984.
In fact, during the first four years of this nine-man panel (1984-1987), 14 corps scored above a
95.0 at Finals. In the 12 years before that, only one corps (the 1982 Blue Devils, 95.25) had
managed to achieve that remarkable feat.

However, budget concerns reigned supreme, and after the 1987 season DCI was forced to come
up with something that would cut back the number of paid judges. In 1988, DCI instituted a new
system that was to reward General Effect more heavily than before, also eliminating the "field"
and "ensemble" sub-captions in favor of broad "performance" captions. However, there were
no field judges.

GE Brass               20 points
GE Percussion          15 points
GE Visual              20 points 
Performance Brass      15 points 
Performance Percussion 15 points 
Performance Visual     15 points 

Scores fell slightly that year, despite the whopping 55-point General Effect caption. The
Madison Scouts barely eked out a win over the Santa Clara Vanguard, scoring a 97.10 (a .8
drop-off from the previous year's winner) to Santa Clara's 96.90. In an attempt to "shake things
up," DCI instituted a "blind draw" system, which basically kept the corps, fans, and judges in the
dark about the placements following Semifinals. The Top Five knew they were in the Top Five
(for PBS live broadcast purposes), but did not know either their score or their placement. This
"blind-draw" system was ill-fated and did not reappear at the 1989 DCI Finals. What did
reappear was the GE-heavy judging sheets, and along with them came the highest score any
corps has ever posted: 98.80, by the Santa Clara Vanguard. In an ironic twist, the Phantom
Regiment scored a 98.40 to place second, 98.40 being the previous record score. For only the
third time up to that point, the top three corps all scored above a 97.0. 

However, just as quickly as it was implemented the system was gone, and DCI reverted to the
same nine-man panel that was used from 1984 to 1987. The 1990 championship saw a 1.1-point drop in the
champion's score, as the Cadets scored a 97.7. The system remained in place until 1994, when it
was replaced by a seven-person system, again because of budget concerns. 

The seven-person system broke down as follows: 

GE Music:               20 points 
GE Visual:              20 points 
Ensemble Music:         15 points
Ensemble Visual:        15 points
Performance Brass:      10 points
Performance Percussion: 10 points
Performance Visual:     10 points


This year (2000), the judging system has once again been tweaked.  The new system is as
follows: 

GE Music               20 points
GE Visual              20 points
Performance Visual     20 points*
Ensemble Visual        20 points*
Color Guard            20 points*
Performance Brass      20 points*
Performance Percussion 20 points*
Ensemble Music         20 points*

*These captions are divided by two (2) in order to arrive at the appropriate number of points. 

This new system calls for eight judges instead of seven, and it finally rewards the part of the
corps charged with bringing most of the color and emotion into the show. 
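The footnote's halving rule means the eight 20-point captions still total 100. This can be verified with a small Python sketch (the caption names come from the table above; the dictionary layout and function are this writer's illustration, not anything DCI publishes):

```python
# Illustrative model of the 2000 sheet totals. The starred 20-point
# captions are halved before totaling, per the footnote on the sheet.

CAPTIONS = {
    "GE Music":               (20.0, False),
    "GE Visual":              (20.0, False),
    "Performance Visual":     (20.0, True),   # * halved
    "Ensemble Visual":        (20.0, True),   # * halved
    "Color Guard":            (20.0, True),   # * halved
    "Performance Brass":      (20.0, True),   # * halved
    "Performance Percussion": (20.0, True),   # * halved
    "Ensemble Music":         (20.0, True),   # * halved
}

def total_score(raw_scores):
    """Sum caption scores, dividing the starred captions by two."""
    total = 0.0
    for name, raw in raw_scores.items():
        _, halved = CAPTIONS[name]
        total += raw / 2 if halved else raw
    return total

# A perfect sheet totals 20 + 20 + 6 * (20 / 2) = 100 points.
perfect = {name: pts for name, (pts, _) in CAPTIONS.items()}
print(total_score(perfect))  # 100.0
```

Note how the halving effectively makes each starred caption worth 10 points of the final score while letting judges work on a 20-point scale, leaving GE Music and GE Visual as the two heaviest individual captions.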

That is where we stand today. While there have been many different systems used, there have
been only two broad eras in DCI's history: the tick system, and the subjective/build-up system.
Many have argued in favor of both systems, and most agree that something needs to be done to
normalize the system and fine-tune the process. But what is the best way? 


Analysis

Following is a brief breakdown of the seven-judge system used by Drum Corps International
judges from 1994 to 1999. This is not intended to be a complete breakdown by any means, but
rather a brief exploration of the pressing issues involved in the most recently used system, and a
way to orient those unfamiliar with it. Although a new system is now in
place, most of the same concepts still apply.

The first issue that must be dealt with is on the title page of the 1994 Drum Corps International
Judging Manual:  "In addition to the DCI Judge community, all corps managers, instructors, and
members who compete in DCI competitions must also have thorough knowledge of the contents
of this manual, since it describes the process through which their units are evaluated." (DCI,
1994)   In reality, very few corps members really know what rules and regulations they are
judged on. If DCI feels that there is a need for the performer to know the rules, the how and why
of the numbers, then drum corps staffs should make more of an effort to at least make the DCI
Judging Manual available for perusal, something many corps currently do not do. 

Another important issue is that of "General Effect." According to Section 7.1.4: "The primary
premise of Effect judging is that the judges must prepare mentally to allow themselves to be
entertained and engaged." This is important, because in both the Ensemble and Performance
captions, judges are there to judge some sort of execution, whether it is on the field or in the
stands disguised as ensemble. With General Effect, a judge's role switches from one who is
trained to find mistakes to one who is expected to "be a fan." According to one former judge,
this is difficult to do and happens rarely. The end result is a population of DCI judges that do not
allow themselves to be "overcome" by the show because they are too busy being execution
judges, albeit perhaps subconsciously. 

Many fans chide drum corps for ignoring the reaction of the crowd, but one finds this little gem
in Section 7.2.9: "Credit that which evokes an engagement of the audience, whether that
entertainment and engagement result from aesthetic, emotional, or intellectual entertainment and
engagement." In fact, the manual points out that the judge should still reward any sort of reaction
from the crowd, even if he or she is personally unimpressed.  So here we have it: General Effect
judging that somehow involves audience reaction. But how often is this practiced? How much of
an effect does the crowd's reaction have on a judge? 

We cannot measure this, but should we? Should we reward popular corps with high General
Effect scores? Should we reward corps that merely entertain but do not execute? This loaded
question lies at the heart of the GE dilemma. Jeff Mitchell, former DCI Head
Brass Judge, says "It is difficult to judge GE, BECAUSE you have to be a fan of EVERY corps
and react to ALL shows, even division 3. There are not a lot of good GE judges, it ain't easy.
Performance judging is easier, not execution which connotes mistakes, but talking about the
good and bad issues."

The present judging system relies on what is known as the criteria-reference system.  This
system is designed to give uniformity of judging from contest to contest, regardless of site. 
Judges are asked to examine their impressions of the performance and convert them into a
specific score by means of criteria that are present at each level of performance.  Each
sub-caption on the score sheets is subdivided into scoring areas, each area containing a
description of the performance level achieved, and the numerical worth range assigned to that
performance level.

     At the end of the performance, the judge reviews the performance in his or her mind and
assigns a number based on the wording of each scoring area.  Some key words that can be used
as guidelines are never (box 1), rarely (box 2), sometimes (box 3), often (box 4), and
consistently (box 5).  (DCI, 1994, Sec. 5.0)
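The keyword-to-box mapping described above can be sketched in code. In this sketch, the split of a sub-caption's point range into five equal bands is an assumption for illustration only; DCI's manual defines its own numerical ranges per sheet:

```python
# Illustrative sketch of the criteria-reference idea: a judge's verbal
# impression maps to a scoring "box," and each box maps to a band of the
# sub-caption's point range. The equal-fifths split is an assumption.

BOX_KEYWORDS = {
    "never": 1,
    "rarely": 2,
    "sometimes": 3,
    "often": 4,
    "consistently": 5,
}

def box_range(keyword, caption_worth=10.0):
    """Return the (low, high) point band for a descriptor keyword."""
    box = BOX_KEYWORDS[keyword]
    width = caption_worth / len(BOX_KEYWORDS)
    return ((box - 1) * width, box * width)

print(box_range("consistently"))  # (8.0, 10.0)
```

The point of the criteria-reference design is visible even in this toy version: two judges at different shows who agree a corps "often" achieves are forced into the same band of numbers, which is what gives the system its contest-to-contest uniformity.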

The Judging Sheets: 

1. GE Music 

As stated previously, the General Effect category is divided into Music and Visual. Under the
General Effect Music caption are two further sub-captions: Repertoire Effect, and
Showmanship/Performance Effect, each one worth 10 points. 

Under the Repertoire Effect sub-caption, the following areas are evaluated: 

Coordination 
Imagination and Creativity 
Variety 
Artistry 
Continuity 
Climax 

To achieve the highest score (Box 5): "The repertoire consistently produces an optimum effect
maintaining the highest levels of audience intrigue and aesthetic appeal. The superlative
blending of all the audio/visual elements with an absolute command of the principles of artistry,
continuity, and climax are demonstrated." (Sec. 8.5.1.6) 

Under the Showmanship Effect sub-caption, the following areas are evaluated: 

Involvement 
Communication 
Intensity 
Emotional Range 
Spirit 
Professionalism 

To achieve the highest score (Box 5): "Superlative achievement by the performers in the
communication of the emotional involvement and intensity. The highest standards in
communication are established and maintained throughout the program. The audience is
constantly captivated, engaged and intrigued by the ability of the performers to infuse the written
program with the appropriate and desired feeling(s), aesthetic qualities and intensities. The
highest demonstration of professionalism is constantly present." (Section 8.5.2.6.) 

2. GE Visual 

The General Effect Visual category is divided into two sub-captions: Repertoire Effect and
Showmanship Effect. 

The Repertoire Effect sub-caption evaluates: 

Interpretation and Enhancement 
Artistry 
Integration and Pacing 
Continuity 
Creativity and Imagination 
Climaxes 
Variety of Impacts 
Coordination 

To achieve the highest score (Box 5): "The program always displays quality substance and depth.
Concepts are always understood and are successfully and effectively developed. Imagination and
creativity are constantly woven into the program. Integration and pacing are highly successful in
generating effect..." (Sec. 10.7.1.6). 

The Showmanship Effect sub-caption evaluates: 

Involvement and Communication 
Intensity 
Emotional Range 
Spirit 
Professionalism

To achieve the highest score (Box 5): "Performers demonstrate a superior level of
communication. Their involvement is effortlessly displayed throughout the program. 
Emotional efforts are genuinely displayed by the performers while they are being asked to
develop a wide range of responsibilities and roles. A superior level of performance is
demonstrated, while creating a dynamic and consistent engagement with the audience." (Sec.
10.7.2.6). 


3. Performance Brass 

The performance brass sheet is divided into two sub-captions: Quality of Technique and
Musicianship. 

The Quality of Technique sub-caption evaluates: 

Uniformity of Method and Enunciation 
Technical Proficiency 
Timing and Rhythmic Accuracy 
Quality of Sound 
Pitch Control and Accuracy 

To achieve the highest score (Box 5), players must "exhibit brass training of the highest level.
Rhythmic lapses are rare and minor. Tonal focus is rarely lost and timbre is uniform throughout.
Concentration is superior. Maximum demands are consistently present and are almost always
met." (Section 8.4.2.6). 

The Musicianship sub-caption evaluates: 

Phrasing 
Expression 
Style/Idiomatic Interpretation 
Communication and Involvement 

To achieve the highest score (Box 5): "Clear, meaningful and expressive shaping of musical
passages; proper and uniform stress; natural, well-defined, and sensitive playing throughout;
valid, tasteful and idiomatically correct interpretation; tempo, rhythm, dynamics, phrasing,
accents, timbre all combine to interpret stylistically and communicate emotionally; involved;
musical. Maximum musical demands are consistently present throughout the entire
performance." (Sec. 8.4.5.6). 


4. Performance Percussion 

The performance percussion caption is broken down into two sub-captions: Quality of
Technique, and Musicianship, worth 5 points each. 

The Quality of Technique sub-caption evaluates: 

Clarity of Articulation 
Implement Control 
Uniformity of Technique 
Technical Proficiency 
Timing and Rhythmic Accuracy 
Quality of Sound 

To achieve the highest score (Box 5): "No weaknesses are evident. There is superlative
achievement of timing and rhythmic control. The highest degrees of concentration, commitment,
musical purpose, and professionalism are constantly displayed by all performers. All technical
aspects are characteristic of the finest playing. The highest quality of musical and physical
demands, requiring maximum ability and skill are present throughout the entire performance"
(Sec. 9.4.2.6). 

The Musicianship sub-caption evaluates: 

Phrasing 
Expression 
Style and Idiomatic Interpretation 
Communication 
Involvement 
Pitch Accuracy 

To achieve the highest score (Box 5), a percussion section should have "no weaknesses. There is
a constant demonstration of the finest qualities of musicality and subtleties of expression and
interpretation...Tuning is superlative...maximum ability and skill is constantly presented
throughout the entire performance." (Sec. 9.5.2.6). 


5. Performance Visual 

The performance visual caption is broken down into two sub-captions: Excellence of Form, Body,
and Equipment; and Movement and Equipment Technique. 

The Excellence of Form, Body, and Equipment sub-caption evaluates: 

Alignment 
Spacing 
Breaks and Turns 
Equipment/Tempo/Pulse Control 
Timing 

To achieve the highest score (Box 5) a corps must display: "...superior achievement with
movement and equipment in the areas of space, time, and line. Flaws, when noted, are minor in
nature even when performers are challenged by responsibilities of a greater magnitude and these
flaws are usually caused by individual lapses." 

The Movement and Equipment Technique sub-caption evaluates: 

Effort Changes 
Principles of Movement 
Equipment Technique 
Articulation in Body and Equipment 
Style 

To achieve the highest score (Box 5): "Performers display a superior level of achievement. Style
is completely refined...The principles of movement are consistent and refined." (Sec. 10.4.5.6). 


6. Ensemble Music 

Ensemble music is broken down into three sub-captions: Musicality, Sound and Tuning, and
Ensemble Technique. 

The Musicality sub-caption evaluates: 

Phrasing and Expression 
Style & Idiomatic Interpretation 
Communication and Involvement 

To achieve the highest score (Box 5): "Clear, meaningful, and expressive shaping of musical
passages; proper and uniform stress; natural, well-defined and sensitive playing throughout;
valid, tasteful, and idiomatically correct interpretation; tempo, rhythm, dynamics, phrasing,
accents, timbre all combine to interpret stylistically and communicate emotionally; maximum
musical demands." (Sec. 11.1.2.6).

The Sound and Tuning sub-caption evaluates: 

Tuning and Pitch Control 
Focus 
Consistency 
Appropriateness of Timbre 

To achieve the highest score (Box 5): "Players exhibit the best possible control and most highly
developed concept of sound production. Tuning is exemplary, and flaws, if any, are most often
caused by environmental difficulties. Concentration is superior. Exceptional demands, present
throughout the program, are placed on the performer." (Sec. 11.2.2.6). 

The Ensemble Technique sub-caption evaluates: 

Ensemble Cohesiveness 
Tempo/Pulse Control 
Rhythmic Accuracy 
Balance and Blend: Brass to Percussion, within Brass, within Percussion. 

To achieve the highest score (Box 5): "Superlative achievement of proper balance techniques.
Flaws, if any, are spontaneous, minute, short-lived. Solid, complete control of rhythm, tempo,
and pulse. Formations have no impact on pulse, mature players, confident in tempo
sub-divisions. Sound arrives at focal point with solidity and control. Maximum skills required
throughout the performance." (Sec. 11.3.2.6). 


7. Ensemble Visual 

Ensemble visual is broken down into three sub-captions: Composition, Achievement of Overall
Accuracy of Form, Body, and Equipment, and Quality of Technique of Form, Body, and
Equipment. 

The Composition sub-caption evaluates: 

Quality of Orchestration 
Visual Musicality 
Variety 
Creativity 
Artistic Expression 
Unity 

To achieve the highest score (Box 5): "Compositional intent is always apparent. It displays
qualitative use of the elements of design. Innovation is present in FBE. Visual musicality ranges
from phrasing to congruence. Variety assists the depth of the design. Depth and scope is present
in opportunities for artistic expression. Unity is constant. Orchestration and organization are
superior." (Sec 10.5.1.6). 

The Achievement sub-caption evaluates: 

Excellence 
Definition 
Uniformity 
Adherence to Roles and Styles 
Clarity 
Control of Tempo and Pulse 

To achieve the highest score (Box 5): "Performers display a superior achievement in ensemble
FBE. Flaws are infrequent, generally minor in nature, and are the result of momentary lapses by
individuals. Superior demonstration of skills observed in response to changes in responsibilities.
Style is well-refined throughout FBE." (Sec 10.5.2.6).

The Quality sub-caption evaluates: 

Principles of Movement 
Variety of Technique 
Equipment Technique 
Articulation in Body and Equipment Technique(s) 
Recovery 

To achieve the highest score (Box 5): "Performers demonstrate a superior awareness and
achievement of ensemble techniques. Role technique effortlessly maintained. Equipment, body,
and form technique(s) are always displayed in a well-developed manner. Recovery, if needed, is
swift and accurate." (Sec 10.5.3.6). 

     This ends the analysis of the seven-person judging system.  The most obvious concern to
this researcher is the issue of "General Effect" that has already been mentioned.  A new system
of judging would have to reduce the most subjective category of all (GE) and still leave room for
the auxiliary, while putting equal weight on the musical and visual aspects of a drum corps
show.
 
Argument

There is a legion of drum corps fans that clamors for the return of the "tick system." The main
claim is that it is a return to objective judging. However, even in the tick system there is room
for subjectivity and opinion. Take, for example, this passage from a 1982 paper on percussion
execution: 

"There is no such thing as perfection in execution, as long as humans are the performers. The
judge, therefore, must consider what is the tolerance level, or acceptability limit, or acceptable
norm-deviation, or standard." (DCI, 1982) 

Even if this were not the case during the era of the tick, what would have stopped a biased judge
from ignoring certain corps' ticks, or adding a few more ticks to corps they "had it in for"? The
tick system, in the wrong hands, could be every bit as subjective and biased as the current
system. 

There are those that believe that the tick system was infallible because every error was counted.
This was not the case. If judges used the tick system and counted every error, many corps would
have been "zeroed-out" in certain captions (DCI, 1982). Clearly, the tick system had problems of
its own.  The task that faces the drum corps community is to come up with a system that leaves
little room for error. One of the more commonly voiced concerns reflected in a survey conducted
by this researcher (see Appendix I) was the General Effect caption. One participant commented
that "GE judges are too clinical (in their) evaluation of performance." Another said, "GE should
be less of the overall score." 

The very real problem that faces DCI in regard to the GE caption is a semantic one: "General
Effect," as practiced today, is a misnomer. It was even worse during the days of the nine-man
panel, because GE had been broken down into three different sub-captions: Brass, Percussion, and Visual.
Today it is Music and Visual, but this is still by no means a "general" or "overall" effect. Too
many judges become caught up in the other captions when judging General Effect...there must
be independence established between captions. A GE judge should not be judging whether
the sopranos are in tune or if someone is out of step; these are mistakes covered in the
performance categories. A GE judge must "stick to the script" and not allow other captions to
creep in when judging. To be truly effective, the GE caption should be just that: a General Effect,
an overall impression of how the parts of the show fit together, with one judge scoring the impact
of both the visual and musical programs. 

At the same time, consider this: The fewer judges there are, the more we are asking each
individual judge to be in charge of, and the more error we are likely to encounter. Under the
system used from 1984-1987 and 1990-1993 (the nine-person panel), a judge assigned to
General Effect Visual had control of 15 points of the total score. In the present system, the same
judge wields power over 20 points, an increase of five. If that judge made a mistake, the impact
would be greater under today's judging system. However, if we make General Effect a one-judge
caption, we are still left with one person in charge of 20 points. This person would obviously
have to be of the highest caliber; both a musician and one who can judge the visual aspects of a
show. This should not be an impossible task; after all, many good drill designers are musicians
as well. Having one judge in charge of 20 points may seem a bit daunting to some, but this is
already going on, if you will recall, within the current system.

Budgetary concerns being a motivating factor, a way must be found to change the system while
not eliminating too many judges, since cutting judges places more responsibility in the hands of
a single judge, an unfair situation for both judges and corps. Indeed, many survey respondents proposed
their own systems of judging in the "free comment" section of the survey, all but one calling for
more judges instead of fewer. 

Another concern of survey respondents was that DCI judges should have previous corps
experience. Many felt that judges with more than 10 years of experienced are biased, and called
for "new blood" in the judges' pool. A startling comment from one respondent was "judges will
do anything to change the face of drum corps." This response seems contradicted by
the fact that the same corps tend to occupy the top five spots at DCI Finals year after year, with the
exception of the Toledo Glassmen, who made the top five for the first time in 1998, and the
Boston Crusaders, who made the Top 5 in 2000.

However, of the 42 respondents to the survey, almost half (19) made some mention
of the need for judges who have been an active part of drum corps. One responded: "(the)
requirements for a visual judge should be a minimum of three years of experience in a division II
or I corps as a member. WGI types are not acceptable, their focus is too narrow." While this may
be an impractical solution, enough people share the concern that DCI should examine its
judging policy and give more serious consideration to those who have marched over those who
have not. 

A concern that came up more than a few times was this: Are judges judging Finals on the basis
of what happens during that performance (and only that one performance) or do they reflect the
season's events? (Note this is not the fault of the system, if it does occur. It is the fault of the
judge involved). Can Corps A, who is undefeated, lose to Corps B if Corps B has a great show
that night? One respondent commented, "It is extremely rare that a corps' position changes from
prelims to finals or even from show to show. Something is wrong with a system that does not
account for corps having extremely strong shows one night and very poor shows the next." 

If judges are giving special consideration to results prior to the contest being judged, this needs
to be addressed. Too many respondents felt that judges do not judge a contest on its own merits,
but rather combine the show's effect with previous scores to arrive at a manufactured number. This
seems to be a big concern, especially during the early season, when judges tend to go by the
results of the previous year's Finals placement. Some very sloppy and/or incomplete shows are
sometimes scored higher, simply because the judges know (or feel) that the show will eventually
improve. 

Of course, it has been brought up numerous times in the recent past that this particular problem
is a product of the corps' staffs; what they want to see judged and how.  Most corps staffs are
intolerant of score fluctuations, and indeed some have chastised judges for dropping them a
point in any given category from one night to the next, even if the performance itself was not as
solid as the previous one.  If this is the case, and judges do as staff members want, then it
becomes almost impossible for corps to move up or down any legitimate amount of points or
positions because they are "locked in" by consistency (the same sort of consistency that may lead
a judge to be careful in assigning his number, and review the last scores a corps has received in
order to "get it right").  This, of course, is what we refer to as "slotting," and it is just as
disagreeable to some staffs as is the idea of a 1 or 2 point swing in scores...so a judge ends up
trapped between a rock and a hard place.

In summary, survey respondents were largely critical of the system, but split on their opinion of
the judges. Of the 42 participants, 23 felt judges are competent, 17 felt they are not, one had no
opinion, and one did not answer the question. On the issues
of how judges arrive at scores, six felt they "judge shows on their merits," 31 felt they "slotted"
them according to previous scores, and four felt they slotted them according to their
bias/preference. One person did not answer the question. Fully half (21) of the participants felt
that bias is shown toward the established corps, and only 14 felt that judges are fair and call
them as they see them. Six felt that judges favour the underdogs. For the full results, please see
Appendix I. 


In a paper on interpretive judging, Poole and Maniscalco (1980) offered tips on how to best
prepare for the judging of a show. Among the more thought provoking were: 

-Review Percussion Analysis rules prior to each contest. 
-Maintain a positive attitude by approaching the caption with the express purpose of appreciating
musical composition and performance. 
-Become personally involved in the performance presented before you--allow yourself to be
moved emotionally by not falling into the pitfall of "execution-itis." 

This from a paper written in the waning days of the tick system! One can only hope that all
judges follow the same advice.  Judges are asked to find mistakes, and sometimes this can
engender an attitude of antagonism toward corps.  "I have to find the mistakes" can be the
driving force behind many of DCI's judges, whether a conscious attitude or not.

In analyzing today's system, we run into a question that can never be answered: What is more
important, General Effect or Performance? To some, the enjoyment of the crowd is the most
important facet of drum corps. Yet it is difficult to think of many well-performed shows that are
not entertaining. To be sure, drum corps has always had corps capable of entertaining. The old
line is indeed true: You can find any sort of entertainment in drum corps,
from big band to jazz to classics to Broadway. Some people love one style, some another; apples
and oranges. 

Does the nameless, faceless crowd want to see an enjoyable show that is not performed well? Is
that even possible? Wouldn't we have to consider any show that entertains the crowd well
performed? What if it is dirty? At what point does entertainment factor out cleanliness? At what
point does cleanliness factor out a lack of entertainment value?   Another important question that
needs to be addressed: What should bear the most weight in any judging system? Should it be the
brass? The visual program? A "Music" caption that includes brass and percussion? Or should it
be General Effect? Asking any 5 people might yield 5 different responses. Who decides what is
effective? It is easy enough to see what is clean and what is dirty, but 12 different judges may
have 12 different interpretations of effect. 

Looking at the issue objectively, one can make a case for the brass. After all, in most corps in
DCI, brass comprises between 50% and 60% of the entire corps. However, would a judging
system that focuses too heavily on brass hurt those corps with smaller horn lines, such as
Division II and III corps?

Another large component is marching, as that is the root of our activity, and everyone in the
corps does it save the pit/front ensemble. Marching should dominate the score sheets; after all, it
is what differentiates drum and bugle corps from a concert band. Yet no corps has tried putting
together a show that consists of merely marching without music, although the 1997 Cavaliers
included a full minute of silent drill at the beginning of their show which was greeted with
mixed reactions.  If you turn off the volume on your DCI videotape, you will see why music
must also be considered at least equally important. 

Are music and visual equally weighted in our judging system?  Before this summer, the answer
would have been "no."  In the old system, the Visual category (in all of its incarnations) was
responsible for 35 points. Music received 55 points. We can see that there is not an equal
distribution of the points. However, in the new system, visual gets 50 points, and music gets 50
points, thanks to the addition of the color guard score. 
 
What factors have affected the way judging systems have evolved? What are the specifications
of the current system?   In perusing the DCI Judging Manual, the following statement
leaps out: 

"The emphasis shall be on the performance of the performer." 

A little further down, we come across this: 

The scoring system [developed by DCI] is a means to encourage and reward new standards of
achievement in the competitive arena, while providing a means for the participants to be
educated in such a way that they will grow to understand and evolve toward their highest
potential (DCI, 1994). 

But while the majority of the scoring is predicated on the performance of the members on the
field, at least a certain portion of the score derives from the "product." In other words, not only
are the corps members judged, but the designers are as well. Section 4.1.6 of the DCI Judging
Manual (1994) states that an interaction exists among the performers, instructors, and designers,
but that the "emphasis shall be placed on the performance of the performer." But to what degree?
Where do judges distinguish between a faulty product and the performance of the corps
members? 


Conclusions

The problems with the current system are evident through analysis and through response at the
level of the corps member, the corps staff, the fan, and even some judges.  However, the system
of scoring the corps is only part of the equation.  Such problems as critique, slotting of corps,
going by previous contests' scores, and peer collusion are important to look at as well.  There are
those that criticize the judging of the designers in some categories (notably General Effect).  It is
a necessary evil that we judge the design as well as the performance, in order to establish
difficulty and vocabulary.  The end result of a drum corps program is a product of both members
and staff.  Consider this analogy: A movie can have actors of the highest quality but still be a bad
movie because of design (the script).  Design standards are important, and keep the activity
growing.  Without some standard of design, corps would still be marching symmetrical drill. 

Any new system, of course, would still rely heavily on the integrity of the judges, and it is
understood that no system can truly be devoid of the potential for human error. One important
subtraction from the present system would be critique. It is the opinion of this researcher that
anything that needs to be said about improving a show can be said on tapes and/or sheets.
What exactly is the need for critique? A corps' staff should not have to "sell" their shows
to the judges. If they need to explain something to a judge, they have already failed to produce a
show that can be understood by most paying fans that come to a show to be entertained, not
"educated." 

In reality, critique has turned into a sort of free-for-all, where anything from petitioning of the
judges to actual assault, verbal and physical, can occur. With all the pressure already on them,
judges should not have to deal with this. In a similar vein, corps staffs are going to have to
realize that if we want a better system, there are going to be fluctuations...scores cannot keep
steadily rising and rising without taking into account performance variances from night to
night...not every corps can be "up" every night. Even the best corps have off-nights: Is this being
dealt with and rewarded/punished consistently? Or are certain corps being rewarded night after
night even though their performances may not be consistent? 

One solution is to limit the critique.  Perhaps a system can be developed so that critique is
eliminated or phased out by the middle of July.  An important addendum to the suggestion
would include having those judges who are scheduled to judge Finals Week adjudicate corps in
the early part of the season.  That way there would be no "cold reads" as there have been in the
past, particularly in Divisions 2 and 3.  Perhaps critique could be terminated after the first focus
weekend, so as to allow all the corps to be in one area for these judges to view and sample.  Or
perhaps critique can be entirely eliminated and concerns can be brought up through the Judge
Executor, a concept introduced a little further down.

Judge independence is also a serious concern. Judges should not discuss scores among
themselves, ever, for any reason, at any time during the season. Vincent Ferrera, Assistant
Professor of Neuroscience at Columbia University, says "when one makes a set of
measurements, one strives to ensure that those measurements are independent and unbiased
estimators of the thing being measured. If you are taking a survey or conducting a psychological
experiment on naive subjects and you want the results to reflect characteristics of the population
at large, then you really don't want your participants to discuss their answers with one another
because then each individual is more likely to conform with the majority." Yet this is what
judges do all season long. For the most unbiased and "true" score sampling, it is necessary to
prevent judges from discussing scores with each other. Implementation of this policy would be
difficult but not impossible. 

Perhaps a system much like the one used in 1988 could be viable.  Since most judges do take
into account previous placement and points, perhaps the judging panels for Quarterfinals,
Semifinals, and Finals could be sequestered (much like a jury) from each other and from other
fans and judges, so as not to be biased by previous decisions.  As in 1988, judges would have to
judge shows on the merits of what happened on that particular night, and not on what has
happened over the course of the season.  Corps staffs would most likely veto any such change;
however, it is in their best interest.  In the current system, many people feel that a corps' first
few performances are its most important ones, because they determine where it will be slotted
for the rest of the season.

The idea of a Judge Executor in charge of reviewing the performance of the judges is an
interesting one.  This person would ideally be a newly retired adjudicator, someone with
knowledge of the system and the experience to understand it, yet someone who is not bogged
down by having to judge any longer.  The Judge Executor would be in charge of reviewing the
numbers assigned by the judges.  Any wide variances in scoring would be the province of the
Executor, and would have to be accounted for by the judge in question.  Any problem a corps'
staff had with any judge would be channeled through the Judge Executor.  Under no condition
would a corps staff be allowed to talk face-to-face with a judge.  Any such encounter would
result in a penalty for the corps, and a brief suspension for the judge.


Appendix I: 
The Survey and Results

The survey was designed to be as broad as possible, in order to eliminate any shaping or
streaming of answers. Because of this, some of the questions (particularly about the judges) were
painted with as large a stroke as possible. These questions were not intended to offend or insult
the judging community. There were 42 respondents. Results are shown in parentheses. 


This survey is being conducted in order to discern the feelings toward judging in the DCI era.
The results will be used in a treatise on the subject of judging, evaluation, and adjudication in
the DCI era. The treatise will cover the various systems that have been used in the past 27 years,
as well as possible solutions to future problems in the DCI judging systems.   Please email your
responses to me (pilato_n@xxx.xxx.edu). 

Please answer the following: 
 
1. Have you ever marched in a Division I, II, or III corps, or in a Senior Corps in the DCI era
(1972-present)? 

(35 Yes, 4 No, 3 Abstained) 

2. How many years did you march? 

(Average ~3 years) 

3. In your opinion, judges today are: 

a. Competent. (23) 
b. Not competent. (17) 
c. No opinion. (1) 
(1 abstained) 

4. In your opinion, judges today: 

a. Judge shows on their merits alone. (6) 
b. Slot them according to previous scores. (31) 
c. Slot them according to personal preference. (4) 

5. In your opinion, judges today: 

a. Show a bias toward the established corps. (36) 
b. Are fair and call them as they see them. (3) 
c. Favour the "underdogs." (3) 

6. The current system breaks down as follows: 

GE Music (20) 
GE Visual (20) 
Ensemble Music (15) 
Ensemble Visual (15) 
Performance Brass (10) 
Performance Percussion (10) 
Performance Visual (10) 
How do you feel about this system? 

a. It is adequate. (11) 
b. No opinion. (1) 
c. It needs changing/modification. (6) 
d. It is bad. (23) 
(1 abstained) 

7. Would you like to see the color guard represented on the score sheet (under the current
system, color guard falls under the Visual categories)? 

(33 Yes, 7 no, 2 abstained) 

8. Would you like to see the tick system re-instated on some basis (for example, only in the
performance categories)? 

(14 Yes, 23 No, 5 abstained) 

9. In 1988 and 1989 the GE caption was worth 55 points. Should DCI revert to this system? 

(39 No, 2 Yes, 1 abstained) 

10. Should DCI Finals be judged by: 

a. Judges with more than 10 years experience. (13) 
b. Judges with more than 5 years experience. (15) 
c. Judges with more than 3 years experience. (11) 
d. Any judges should be able to judge Finals. (3) 

11. Free response. Write any comments you want about the judging  system and your thoughts on
it, at any time between 1972 and 1999. Thank you for participating in this survey. Results will be
included in a treatise that will be presented here on RAMD sometime after the corps season is
over. 


Appendix II:
An Alternate System of Adjudication

The following document is presented as an alternative form of adjudication.  It encompasses
some major concerns expressed by the at-large drum corps community, including fans and
members.  The document is by no means complete or perfect, being a labor of a human being,
and therefore prone to error and bias, as is any judging on the scale of human existence.  

Although this document is focused on the judging system, it must be understood that without the
highest level of dignity, compassion, and trustworthiness from those that adjudicate, no system
can ever truly work.  As Donald Angelica used to say, "Judging is judges."  

Judging (no pun intended) from responses to the survey and examination of the current and
former guidelines for judging, the following system has been prepared in response to the issues
detailed above. The main concerns were the incorporation of the color guard as a score on the
sheets, and the reduction of the "General Effect" category. It is respectfully submitted to
whomever may be interested for dissemination and perusal. 

General Effect (20 points). General effect should have no sub-captions as it does now, for that is
no longer "general" effect, but rather "specific" effect. "General Effect" should take into
consideration all the aspects of the show and how they contribute to the finished product. The
reason for reducing the caption by 20 points is simple: It is by far the most subjective of any
caption. Who does it affect? The judges? The fans? The single biggest problem with the GE
caption is that it puts the job of finding some sort of emotional impact in the hands of people
who are trained to find mistakes...GE should be about everything: Brass, Percussion, Visual, and
Auxiliary. GE is the total package, and should not be split up in three different ways. The GE
judge would sit in the stands, somewhere in the middle between the top level and the sideline. 

Visual (20 points). This category would deal with the formations themselves: Are they straight?
Are they crooked? Are they supposed to be that way? Are the members in the form, or are they
falling apart and collapsing a form? The ensemble visual judge would sit at the highest vantage
point in the stadium. His duty is not to look for people out of step, or poor posture, but simply to
focus on the "big picture." 

Music (20 points). The judge would be in the stands, and he would be responsible for scoring
according to the corps' intonation, their balance and blend, and their overall cohesion with each
other, as well as their interaction with the percussion section, including the front ensemble. This
is basically the old "ensemble" caption given more weight. 

Brass (10 points). This caption would focus on individual performances on the field. Intonation,
tone colour, attacks and releases, and overall execution of the music...all of these are subjects for
the brass judge. This caption could really be done either of two ways, as a build-up caption, or as
a "tick" caption, where the judge would tick away one-tenth of a point for each mistake he heard.
The brass judge would be on the field. 
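The tick mechanic described here is simple arithmetic, and can be sketched in a few lines. The sketch below is illustrative only: the function name is invented for this example, and the assumption that a score cannot fall below zero is mine, not DCI's.

```python
def tick_score(caption_max: float, errors: int, tick: float = 0.1) -> float:
    """Score a 'tick' caption: start at the caption maximum and deduct
    one tick (a tenth of a point) for each error the judge records,
    never letting the score fall below zero."""
    return max(0.0, round(caption_max - errors * tick, 1))

# A brass judge working a 10-point caption who hears 13 errors:
print(tick_score(10.0, 13))  # 8.7
```

A build-up version of the same caption would instead award tenths for demonstrated achievement, so the two approaches differ in direction, not in bookkeeping.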

Percussion (10 points). This caption would focus on individual and ensemble playing, both in the
battery and in the front ensemble. This too could be either a build-up caption or a "ticked" one.
The judge would also be on the field. 

Marching & Maneuvering (10 points). This caption would deal with the "mechanics" of
marching, such as proper posture, marching in step, phasing, and intervals. The judge would
need to be on the field. Could also be done as a "tick" caption. 
 
Auxiliary (10 points). This judge would be on the field and be looking for correct technique in
rifles and flags, as well as number of caught tosses versus dropped ones. Performance is the
mainstay of this caption. This caption would probably best be done as a ticked caption. 

This system may be effective because there is room for the Auxiliary, and because it puts the
brunt of the scoring in the two largest areas in drum corps, brass and visual. It reduces the most
questionable category from 40 points to 20 points, removing much of the opinion in judging, and
forcing us to focus on performance.
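Since the proposal reallocates points across seven captions, it is worth checking the arithmetic. The sketch below simply encodes the allocations named above and confirms they total 100 points; the dictionary and function are an illustration of this proposal, not any official DCI sheet.

```python
# Caption weights from the alternate system proposed above.
PROPOSED_CAPTIONS = {
    "General Effect": 20,
    "Visual": 20,
    "Music": 20,
    "Brass": 10,
    "Percussion": 10,
    "Marching & Maneuvering": 10,
    "Auxiliary": 10,
}

def total_score(caption_scores):
    """Sum a corps' per-caption scores, rejecting any score that
    exceeds its caption's allotted points."""
    for caption, score in caption_scores.items():
        if score > PROPOSED_CAPTIONS[caption]:
            raise ValueError(f"{caption} score {score} exceeds its cap")
    return sum(caption_scores.values())

assert sum(PROPOSED_CAPTIONS.values()) == 100  # a full sheet is worth 100 points
```

Half the sheet (General Effect, Visual, Music) is judged from the stands on the whole ensemble, while the other half (Brass, Percussion, M&M, Auxiliary) is judged on the field, performer by performer.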

The system, of course, still relies heavily on the integrity of the judges, and it is understood that
no system can truly be devoid of the potential for human error.  Following is a breakdown of the
suggested captions in the style of the DCI Judging Manual, with some influence from the system
used by the Florida Bandmasters Association, adopted for the 2000-2001 school year.

1.  General Effect.  (20 Points)

General (or Overall) Effect is the total package.  Brass, percussion, visual, auxiliary...do they
combine effectively to give the greatest performance possible?  Or do some elements detract
from others, thereby lowering the effect level?  All aspects of a drum corps show must be judged
in this caption, with no aspect receiving any more weight than another.

It is important for adjudicators to realize that entertainment comes in all forms, and each corps
must be rewarded for their communication of that form or style, regardless of the adjudicators'
personal preference or background.  The entire range of emotion must be considered in this
caption, from elation and joy to sadness and pathos.  A show that reaches the audience
successfully on the "darker" side of the scale (sadness, anger, pathos) should receive just as
much credit as one that reaches the audience on the lighter side (comedy, joy, elation).

Adjudicators must strive to stay "in caption."  A GE judge is there to judge only effect, not
performance!  Trust your colleague to deal with matters of intonation, attacks, marching &
maneuvering, etc.  Be focused only on THE BIG PICTURE!  The audience should play a large
part in what the judge assigns.  However, the influence of the audience's reaction should be
tempered by the interpretation of it being a genuine response to excellent performances, rather
than a vocal minority and/or hometown supporters.  An audience reaction that leaves the judge
personally unimpressed DESERVES CREDIT, for it has achieved effect.  On the other hand, a
judge should also credit a production that he or she finds worthwhile, even if the reaction of the
audience is subdued. 

Each element of the total production must be weighed equally; do not spend an inordinate
amount of time on any one element.  Entertainment and the effect it generates can come from
three sources: the aesthetic, the emotional, and the intellectual.  Credit that which evokes an
engagement of the audience, whether that entertainment and engagement result from aesthetic,
emotional, or intellectual productions.

Only the highest quality judges would be able to judge General Effect, therefore only judges
with a set number of years of experience should be assigned to this caption.  The caption shall be
divided into three sub-captions.  Each sub-caption is further divided into five boxes, each with a set of
criteria that must be reached before a score can be awarded in that box.

Program Effect (5 points)

Was the total presentation effective?  Did it display creativity and originality throughout?  Was
the visual appeal effective?  Was the musical appeal effective?  Was continuity and flow
effective?  Was the demand placed on individuals and the ensemble effective?

The following criteria are used to award points in this caption:

1-10 tenths.  There is a lack of clarity; the program is immature and lacks understanding of design. 
Concepts are fragmented or underdeveloped.  There is little or no audience intrigue or appeal. 
Program effect is minimal.

11-20 tenths.  The program shows little understanding of design.  Attempts at pacing and flow
are minimal.  Concepts show little imagination and development.  Flow of ideas and continuity
are sporadic and do not engage the audience.  A team effort in developing the program is
lacking.

21-34 tenths.  The coordination of the music and the visual elements occasionally demonstrates
imagination and creativity.  Program shows some understanding of design.  Occasional periods
of audience intrigue and appeal generate some effect.  Drill design/Music arrangement levels,
while sometimes weak, can still generate some effect; however, effects are not maximized.

35-44 tenths.  The program contains knowledge of proper fundamentals of design and pacing. 
There are moments of unique design, and audience appeal/intrigue is evident.  Mood is
established and there are a variety of ideas producing effect.  The drill is closely coordinated
with the music and shows the aspects of phrasing, tempo, and dynamics.  The program shows an
advanced blending of all the elements.  New concepts or new variations on old concepts are
displayed throughout the program.

45-50 tenths.  The program contains a high degree of imagination and creativity within the style
of the particular competing unit.  Continuity and pacing are evident.  Audience engagement is
high.  Concepts are well developed and creative, generating a high level of effect.  Mood is
constantly sustained.  Drill and staging concepts are well developed and show phrasing, multiple
lines/ensembles, meter, tempo, and dynamics.  Music shows interest and variety. 
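The five boxes above amount to a lookup from a tenths total to a criteria band. A minimal sketch, with the box boundaries copied from the Program Effect criteria (the function name and list are my own illustration):

```python
# Program Effect boxes, as (low, high) ranges in tenths.
PROGRAM_EFFECT_BOXES = [
    (1, 10),   # Box 1: lack of clarity, minimal effect
    (11, 20),  # Box 2: little understanding of design
    (21, 34),  # Box 3: occasional imagination and creativity
    (35, 44),  # Box 4: sound fundamentals, moments of unique design
    (45, 50),  # Box 5: high imagination, sustained mood
]

def box_for(tenths: int) -> int:
    """Return the 1-based box number that a tenths score falls into."""
    for box, (low, high) in enumerate(PROGRAM_EFFECT_BOXES, start=1):
        if low <= tenths <= high:
            return box
    raise ValueError(f"tenths score {tenths} is out of range")

print(box_for(38))  # 4
```

In practice a judge works in the other direction: first choosing the box whose criteria the program meets, then placing a number within that box's range.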

Performance Effect (5 points)

Was there emotional and aesthetic appeal in the total program?  Did the performers "bring the
show to life?"  Did the performers communicate the program to the audience?  Was expression
communicated both in the drill and in the music?

The following criteria are used to award points in this caption:

1-10 tenths.  Insufficient training and/or lack of maturity brings down the overall performance
effect.  The show does not communicate to the audience through either music or visual aspects. 
The performers do not exhibit an understanding of their role in the program.

11-20 tenths.  Members display little or no awareness of the skills needed to connect with the
audience and communicate the music and/or the visual aspects of the show.  Concentration is
minimal.  Inconsistencies are evident throughout the performance.  The performance is
unproductive.  Audience involvement is minimal.

21-34 tenths.  Members are aware of their roles; however, there are inconsistencies in
communication.  The audience is engaged but becomes distracted by lapses in concentration,
lack of intensity, communication, or professionalism.  Performance wavers and fluctuates.  The
performance is mechanical and is not brought to life.

35-44 tenths.  Members are aware of the skills involved in communicating with the audience. 
Lapses in concentration can cause varying results, detracting from the effect.  At times the
performance can seem lifeless.  The audience is often engaged and provides feedback.  The
performers often display emotion and professionalism in the music and in the visual aspects.  

45-50 tenths.  Members consistently display an awareness of communication, and are sensitive
to its use and effect in involving the audience.  The level of emotion and expression is high.  The
audience is constantly engaged, captivated, and offers strong levels of feedback.  Lapses are rare,
and style is well developed.  The performers communicate with the audience and the
adjudicator.

Integration (10 points)

Was there a coordination of all elements in the show?  Was the visual package coordinated to
the music?  Was there a presentation of various styles and moods?  Was staging effective?  Were
the concepts clearly defined and communicated?  Is emotion displayed through all facets of the
program?  Does it reach the audience?

The following criteria are used to award points in this caption:

1-20 tenths.  There is an obvious lack of involvement between the design team/individual and
the finished product.  There is no coordination between the music and the visual elements.  The
overall program does not work together.  There is a consistent lack of clarity.

21-44 tenths.  The design team/individual seldom displays awareness of blending elements to
raise emotional impact.  Impact points are seldom effective, and resolutions and visual ideas are
seldom coordinated.  There is little coordination between the music and visual elements. 
Percussion is seldom used to enhance musical effect.  Auxiliary is seldom used to enhance visual
effect.  Drill moves are not reflective of the music or mood.  Staging is weak and ineffective.

45-74 tenths.  The design team/individual shows understanding of element blending to induce
greatest effect.  Some impact points and/or resolutions are coordinated.  Staging of musical and
visual elements is evident.  Percussion is sometimes used to enhance the musical effect. 
Auxiliary is sometimes used to enhance the visual effect.  

75-90 tenths.  The design team/individual often displays a high level of attention to detail in
creating a blend of the musical and visual elements.  Impact points and resolutions are
coordinated effectively.  Staging strengthens the impact of the musical and visual elements. 
Percussion and auxiliary are often used to enhance the program.  There is a high degree of
coordination.

91-100 tenths.   The design team/individual consistently displays a full understanding of the
intricacies of blend within all the elements of the show.  Coordination is evident between all the
sections of the corps.  Impact points, resolutions, and staging consistently enhance the overall
product.


2. Visual (20 points)

The visual caption deals with the look of the visual program from the stands or top of the
stadium.  It does not deal with foot phasing, out-of-step marchers, or posture; those are areas
tackled by the M&M adjudicator.  The Visual caption deals with the flow of forms, cleanliness
of the program, and construction of the visual show.  The caption shall be divided into two
sub-captions.  Each sub-caption is further divided into five boxes, each with a set of criteria that
must be reached before a score can be awarded in that box.

Accuracy (10 points)

Are the lines straight?  Are the intervals consistent?  Are forms balanced?  Are there any
breakdowns in flow from set to set?  Is the style consistent?  Accuracy does not take into account
difficulty of drill or composition.

The following criteria are used to award points in this caption:

1-20 tenths.  The presentation contains few or no readable forms.  The drill detracts from the
other aspects of the program.  There are no recoveries.  Intervals are non-existent.  Forms are
unbalanced.

21-44 tenths.  The presentation generally lacks readable forms.  There are many moments of
uncertainty and hesitation.  Style is not consistently displayed.  Recovery of mistakes is rare. 
Intervals are inconsistent.  

45-74 tenths.  The marching program displays some readable forms.  Expressive movement
techniques are sometimes included and sporadically enhance the visual aspects of the program. 
Flaws tend to be apparent and recovery time is long, exposing the mistakes for longer periods of
time.  Style is evident but not well defined.  Understanding of intervals is sometimes displayed.

75-90 tenths.  The marching often displays readable forms.  Forms are balanced, and
understanding of intervals is often displayed.  Angular/linear forms are often straight.  Lapses are
less frequent, and recovery time is minimal.  Style is well defined.

91-100 tenths.  The marching program is consistently readable.  Intervals are consistently
displayed.  Forms are balanced and focused.  Errors are rare, and recovery is instantaneous. 
Style is well defined.


Composition  (10 points)

Are the forms staged successfully?  Does the drill design show concepts of phrasing, tempo, and
dynamics in relation to the music?  Is there contrary motion?  Is there use of positive and
negative space?  Does the drill "lead the eye" to certain features?  Composition takes into
account difficulty of drill.

The following criteria are used to award points in this caption:

1-20 tenths.  Lack of readability throughout most of the program.  No staging is evident.  There
is little or no contrary motion.  The drill does not enhance the visual aspects of the show.

21-44 tenths.  The intent of the drill design is not readily apparent.  There is sporadic use of
staging.  Contrary motion is brief and uninteresting.  The drill does little to enhance the visual
aspects of the show.

45-74 tenths.  The intent of the drill is sometimes apparent.  Staging and contrary motion are
sometimes evident.  The drill sometimes leads the eye to featured sections.  Phrasing, tempo,
and dynamics are sometimes evident through the drill.  There is sometimes a variety of drill
style.  Orchestration and organization are good.

75-90 tenths.  The intent of the drill is often apparent.  Staging and contrary motion are evident. 
There is variety of drill style.  The designer understands the use of positive/negative space.   The
designer understands the concept of "leading the eye."  Orchestration and organization are
excellent. 

91-100 tenths.  The intent of the drill is apparent at all times.  Staging and contrary motion are
frequent.  The marching program is broad and varied.  There is a significant level of versatility,
resulting in more complex forms.  The designer understands all the key elements of drill design. 
Innovation is evident.  Orchestration and organization are superior.
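The box structure described above is mechanical enough to sketch in code. The following Python snippet is a hypothetical illustration only (the names `VISUAL_BOXES` and `box_for_score` are this author's inventions, not DCI tooling); it maps a tenths score for a 100-tenth Visual sub-caption to the box it falls in, using the ranges listed above:

```python
# Box boundaries for the 100-tenth Visual sub-captions (Accuracy, Composition),
# taken directly from the ranges above: 1-20, 21-44, 45-74, 75-90, 91-100 tenths.
VISUAL_BOXES = [
    (1, 20, "Box 1"),
    (21, 44, "Box 2"),
    (45, 74, "Box 3"),
    (75, 90, "Box 4"),
    (91, 100, "Box 5"),
]

def box_for_score(tenths: int) -> str:
    """Return the box a tenths score falls into, or raise if out of range."""
    for low, high, name in VISUAL_BOXES:
        if low <= tenths <= high:
            return name
    raise ValueError(f"score {tenths} is outside the 1-100 tenth range")

# A score of 87 tenths (8.7 points) falls in the fourth box.
print(box_for_score(87))  # Box 4
```

The point of the sketch is the constraint it makes visible: a judge must first decide which box's criteria the performance satisfies, and only then place a number within that box's range.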


3. Music (20 points)

The Music caption deals with all the musical aspects of a drum corps program combined.  It
judges both brass and percussion, and how the two combine to produce the highest quality of
music.  The caption shall be divided into two sub-captions. Each sub-caption is further divided
into 5 boxes; each with a set of criteria that must be reached before a score can be awarded in
that box.

Musicality (10 points)

Is there good, musical interpretation of the music?  Do the brass and percussion achieve balance
and blend?  Is intonation superb?  Is phrasing evident?

The following criteria are used to award points in this caption:

1-20 tenths.  There is a lack of clarity that prevents any display of musicality.  There is no
response to direction.  There is a general inability to play together.

21-44 tenths.  There is little attempt at expression or interpretation.  Poor tone production
produces timbre differences.  Players exhibit little training or control.  Little evidence of tuning
exists.  Pitch and tone quality are below average.  Phrases are not developed. 

45-74 tenths.  There is lack of proper balance between the brass and the percussion. 
Achievement of balance is limited due to timbre differences created by poor tone production. 
An occasionally mechanical approach to expression exists, and lapses in concentration are
evident.  Phrases are sometimes developed.

75-90 tenths.  Brass and Percussion are often successful in achieving proper balance.  Lapses
may occasionally occur due to design problems.  The ensemble often achieves a musical
rendition of important passages with uniform and subtle gradations.  Musicianship skills of an
extraordinary nature are often required, and maximum demands are often met.  Good control of most
aspects of tone production and intonation.  Timbre may be affected in extremes of range and
volume.  Pitch accuracy is high.  Phrases are often developed.

91-100 tenths.  Ensemble consistently achieves proper balance.  Lapses are infrequent and
minor.  There is excellent control of timbre and intonation.  Concentration rarely falters. 
Members exhibit the utmost control.  There is clear, tasteful, and idiomatically correct
interpretation.  There are exceptional demands placed on the performers throughout the
program.  Phrases are fully developed with little or no breaks.


Technique (10 points)

Is the rhythmic interpretation correct?  Is the tempo steady?  Are attacks and releases together?

The following criteria are used to award points in this caption:

1-20 tenths.  There is a lack of clarity which prevents a display of technique.

21-44 tenths.  Tempo and pulse control are lacking.  Recovery is non-existent, and concentration
is poor.  Average ensemble technique demands are present.  Phasing is an ongoing problem
throughout the performance.  Technique is inconsistent section to section.  Most attacks/releases
are not together.

45-74 tenths.  While there is a sense of tempo and pulse control, phasing is still a concern. 
There are individual problems in rhythmic interpretations and rapid passages, which often lack
togetherness.  Some attack/releases are not together.  Recovery is sometimes difficult.  High
demands exist. 

75-90 tenths.  Members display a good awareness of pulse and tempo. Rhythmic interpretation
and accuracy in fast passages is excellent.  Attacks and releases are often together.  Spread
formations can cause some phasing.  Recovery is good.  Extraordinary demands are displayed
often.  

91-100 tenths.  There is a superior control of tempo and pulse.  There may be occasional
anticipation at beginnings of phrases, but little or no phasing.  Spread formations may still
challenge the members, but recovery is instant.  Maximum demands placed on members
throughout performance.


4. Brass (10 points)

This caption focuses on the brass section of the corps, and is judged on the field.  It can be done
as a build-up or tear-down caption.  If done as a "tick" caption, the criteria below are ignored in
assigning points, but is still valid for "sampling" of mistakes.  The caption shall be divided into
two sub-captions. Each sub-caption is further divided into 5 boxes; each with a set of criteria
that must be reached before a score can be awarded in that box.

Musicianship (50 tenths)

Is phrasing evident?  Is tone quality high?  Is intonation consistent within sections and from
section to section?  Are attacks and releases together?  Is individual technique consistent?

The following criteria are used to award points in this caption:

1-10 tenths.  Performance is lacking in clarity.  

11-20 tenths.  An occasional attempt is made to express the melodic line.  Dynamics and
phrasing are inconsistent.  Musical demands of an average nature are present.  Intonation is
questionable from section to section.  The sound is often rigid or uncomfortable.

21-34 tenths.  The melodic line is generally successful in presentation.  Dynamics and phrasing
are generally successful, although a rigid, mechanical approach to expression is sometimes
evident.  Demands requiring above average musical understanding are present.  Intonation is
generally consistent section to section.

35-44 tenths.  Players often achieve a musical rendition of important passages with uniform and
subtle gradations.  Phrasing is mostly uniform and sensitive.  Musicianship skills of an
extraordinary nature are required, and maximum demands are often displayed.  Intonation is good.

45-50 tenths.  There is clear, meaningful, and expressive shaping of musical passages. 
Well-defined and sensitive playing throughout.  Maximum musical demands are consistently
present throughout the entire performance.  Intonation is superior.  

Technique (50 tenths)

Is there correct and uniform technique?  Are phrases developed by individual players?  Are
individuals performing within the group, or are there those that "stick out" of the ensemble?  Is
stylistic articulation present?  Do musicians know the music?

The following criteria are used to award points in this caption:

1-10 tenths.  Performance is lacking in clarity.

11-20 tenths.  Immature players exhibit little training or control.  Rhythmic interpretation is
poor.  Breath support is seldom present.  Articulation is inconsistent.  Little evidence of
instrument tuning exists.  Demands are average and are only met sporadically.

21-34 tenths.  A generally good approach to proper brass playing is evident.  Flaws occur due to
extremes of range or volume.  Rhythmic interpretation is generally good.  Good approach to
proper tone exists; however, players are often taxed beyond their ability at extremes.  High level
of technical demands is sometimes present.

35-44 tenths.  Members exhibit good to excellent training.  Players sometimes taxed at upper
extremes of range and volume.  Rhythmic lapses are infrequent and minor. Excellent control of
most aspects of tone production.  Extraordinary demands are present and often met.

45-50 tenths.  Players exhibit brass training of the highest level.  Rhythmic lapses are rare and
minor.  Maximum demands are consistently present, and consistently met.  Concentration is
superior.  Tonal focus is superior.  


5. Percussion (10 points)

This caption focuses on the percussion section of the corps, and is judged on the field.  It can be
done as a build-up or tear-down caption.  If done as a "tick" caption, the criteria below are ignored
in assigning points, but is still valid for "sampling" of mistakes.  The caption shall be divided
into two sub-captions. Each sub-caption is further divided into 5 boxes; each with a set of
criteria that must be reached before a score can be awarded in that box.

Technique (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  There is a lack of clarity which prevents a display of technique.  Basic training is
not evident.  Rudimental drumming skills are not evident.

11-20 tenths.  Some control is present and patterns are discernible; however, the performance is
bogged down by lapses in concentration.  Rudimental drumming is present but poor.  There is
little basic training evident.  Players seem to be concerned only with playing the notes. 
Segmental clarity is inconsistent.
 
21-34 tenths.  Occasional display of good technical and timing accuracy.  Rudimental drumming
is present but not fully developed.  Patterns are readable, though clarity is not consistent.  A
consistent use of average skill demands is present throughout and there are occasional displays
of above average technical demands.  Segmental clarity is often flawed.

35-44 tenths.  Excellent control of technique by all performers resulting in consistent clarity and
tempo.  Rudimental drumming is present and excellent.  All segments are continually aware of
their responsibilities.  Flaws only occur during passages requiring a high level of technique and
concentration.  There are often maximum demand skills displayed by all performers.

45-50 tenths.  No weaknesses are evident.  The ensemble plays with solid tempo control.  The
rudiments of drumming are present and are high quality.  The highest degrees of concentration,
musical purpose, and technique are constantly displayed by the performers.  The highest quality
of musical demands requiring maximum ability and skill, are present throughout the entire
performance. 


Musicianship (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  There is a lack of clarity which prevents display of musicianship.  

11-20 tenths.  There are few attempts at expression or musical interpretation.  Performance is
mechanical.  A consistent use of average skill demands is present.  Some attempt has been made
to tune the equipment.

21-34 tenths.  Segments generally work as a unified musical entity.  Musical intent is always
recognizable.  Quality of sound is consistent.  Equipment is mostly tuned, with few lapses. 
Passages of above average demand skills are present and achieved consistently.

35-44 tenths.  Style and idiom are tastefully and accurately communicated.  Communication of
musical passages is clear, meaningful, and expressive.  Excellent tuning is often displayed. 
Maximum demand skills are being presented and displayed by all performers.

45-50 tenths.  There are no weaknesses.  There is a constant demonstration of the finest qualities
of musicality and subtleties of expression and interpretation.  All players utilize the best possible
techniques to communicate in the style and idiom chosen.  Tuning is superior.  The highest
quality of musical and physical demands are present throughout the performance.
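Because each field caption awards 10 points from two 50-tenth sub-captions, the final arithmetic is simple addition and division. A minimal sketch follows; the function name and the assumption that sub-caption tenths are summed and converted directly to points are this author's, not a documented DCI procedure:

```python
def caption_points(technique_tenths: int, musicianship_tenths: int) -> float:
    """Combine two 50-tenth sub-caption scores into a 10-point caption score.

    Each sub-caption is worth at most 50 tenths, so the caption total is
    at most 100 tenths, i.e. 10.0 points.
    """
    for score in (technique_tenths, musicianship_tenths):
        if not 1 <= score <= 50:
            raise ValueError(f"sub-caption score {score} outside 1-50 tenths")
    return (technique_tenths + musicianship_tenths) / 10.0

# Example: 42 tenths in Technique and 39 in Musicianship yield 8.1 points.
print(caption_points(42, 39))  # 8.1
```

The same arithmetic applies to the Brass, Marching & Maneuvering, and Auxiliary captions, each of which is likewise split into two 50-tenth sub-captions.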


6. Marching & Maneuvering (10 points)
 
This caption focuses on the mechanics of marching: Foot technique, roll step, posture, horn
carriage, phasing, etc. are all subjects for the M&M judge.  This judge will be on the field,
sampling each section of the corps as best as he or she can.  No judge should spend an inordinate
amount of time on any one section of the corps. The caption shall be divided into two
sub-captions. Each sub-caption is further divided into 5 boxes; each with a set of criteria that
must be reached before a score can be awarded in that box.


Accuracy & Definition (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  Individuals show no training in marching and maneuvering principles. 
Concentration is lacking.  Readability is lacking.

11-20 tenths.  Individuals show little training in marching and maneuvering principles.  Breaks
and flaws are frequent.  There is little adherence to a single marching style.  Performers are
unaware of even the most basic responsibilities of spacing, interval, and alignment.

21-34 tenths.  Individuals show some sense of alignment in upper and lower body.  Marching
and maneuvering principles are sometimes displayed.  There is some uniformity of style.  Breaks
and flaws are still frequent, but recovery is attempted.  The training process is in a development
stage.  Performers are challenged with average responsibilities.

35-44 tenths.  Individuals often demonstrate marching and maneuvering principles.  They may
still vary from individual to individual in instances of high demand.  Uniformity of style exists. 
Flaws are present, but are minor in nature, and recovery is fast.

45-50 tenths.  Individuals consistently demonstrate marching and maneuvering principles.  There
is a high degree of uniformity at all times.  Flaws are infrequent, and recovery is evident and
quick.  Performers display superior achievement and are challenged often.

Fundamentals (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  There is no consistency of marching fundamentals.  Unit is unprepared.  A lack of
clarity exists.

11-20 tenths.  The ensemble/individual displays little understanding of the fundamental concepts
of marching and maneuvering.  Proper horn angles are often lacking.  There is often phasing and
members out of step with the ensemble.  Foot technique is inconsistent.

21-34 tenths.  The ensemble/individual shows some understanding of the fundamental concepts
of marching and maneuvering.  Horn angles are sometimes correct.  There is sometimes phasing
and members out of step.  Foot technique is mostly uniform with few lapses.  Recovery of
mistakes is attempted.  

35-44 tenths.  The ensemble/individual often achieves the fundamental concepts of marching
and maneuvering.  Breaks and flaws still occur, but recovery is quick.  Concentration and
stamina are high.  Horn angles are often correct, and phasing is rare.  Foot technique is excellent.

45-50 tenths.  Highest demands placed on the individual marcher are met consistently.  Horn
carriage is superb.  Intervals, alignment, and overall posture are superior.  Lapses are rare, and
recovery time is minimal.  There are few or no weaknesses.

7. Auxiliary (10 points)

The auxiliary judge evaluates the members of the color guard units.  The caption
shall be divided into two sub-captions. Each sub-caption is further divided into 5 boxes; each
with a set of criteria that must be reached before a score can be awarded in that box.

Repertoire (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  The program is immature and displays a lack of understanding of unity in design.  It
does not reflect or enhance the show music or visual package.  Clarity is lacking.

11-20 tenths.  The equipment work and/or movement fundamentals are written at a basic level
and seldom reflect or enhance the show music or visual package.  There is a lack of continuity. 
Staging is not present.

21-34 tenths.  There is an occasional demonstration of equipment work and/or movement which
enhances the show music and/or visual package.  Equipment work may not be compatible with the
auxiliary's level of development.  Continuity and staging are sometimes present.

35-44 tenths.  Variety and creativity are demonstrated in the equipment work and/or movement. 
Work is compatible with the level of the auxiliary's development.  There is continuity and
creative staging evident.

45-50 tenths.  Variety and creativity in equipment and/or movement are imaginative, strong, and
consistent.  There is excellent unity of design elements, continuity, and creativity of staging. 
Work is challenging to the auxiliary and compatible with the auxiliary's level of achievement.

Technique (50 tenths)

The following criteria are used to award points in this caption:

1-10 tenths.  Basic training is lacking.  No apparent attempt at accuracy and definition has been
made.  There is a lack of clarity.

11-20 tenths.  There is little basic training.  Concentration and stamina are not evident.  There is
seldom any adherence to style in equipment and/or movement.  Equipment work is not
consistent from member to member.

21-34 tenths.  Equipment and movement technique shows some training, and there is sometimes
an adherence to style.  Some attempt at accuracy has been made.  Equipment work is sometimes
consistent member to member.

35-44 tenths.  Equipment and movement technique often shows training in all areas.  Adherence
to style occurs often.  Accuracy is high and developing.  Equipment work is often consistent
member to member.

45-50 tenths.  No weaknesses are evident.  Equipment and movement technique show a high
degree of training.  Accuracy is superb.  Equipment work is consistent member to member. 
Adherence to style is superior.


In summary, the judging system outlined above closely resembles the system currently employed
by Drum Corps International, and was created to follow the same definitions and formats found in
the DCI Judging Manual.  Some tweaking of the above system would most likely result in a fair
system that can be incorporated into the existing system without too much change.  The most
important factor in any suggestion, however, is the judges themselves, and what they do with
that suggestion.  Judges must continually seek to educate themselves for their own betterment and
the betterment of the activity they serve.  Judging accountability is another topic that needs to be
examined in depth by the drum corps community to ensure that we are developing a system
where judges can do their job without harassment or recrimination for their honest appraisals of
a show's performance.

While it is highly unlikely that Drum Corps International would consider the above system at the
next Rules Congress, it is the sincere hope of this researcher to keep making changes,
modifications, and improvements to this text so that one day the best possible system, within our
flawed frame of reference, can be arrived at and implemented.  On that far-off day, perhaps we
will be able to stop focusing on the men and women in green shirts, and start focusing on the
men and women performing on the field.

In closing I would like to thank a few people.  First and foremost, my significant other, Shelby,
she of the infinite patience and loving disposition.  Thanks must also go out to Jeff Mitchell, a
former DCI judge who was invaluable in giving shape to some of my thinking processes, and
who was great to bounce ideas off of.  Thanks are also in order to Neil Jenkins, Ivan Wansley,
and Dr. Michael Dressman, whose seminar on adjudication in the state of Florida was a great
help in solidifying my ideas.


REFERENCES

Anonymous. (1992). DCI Judge Administration Report. Lombard, IL: DCI Press. 

Anonymous (DCI). (1982). Percussion Execution White Paper, 1982 Guidelines. Lombard, IL:
DCI Press. 

Collier, D. (1993). Percussion Caption Philosophy. Lombard, IL: DCI Press. 

Mitchell, J. (1997). The Evolution of Judging Performance. 

Oliviero, G. (19xx). The Principle and Process of Achievement. Lombard, IL: DCI Press. 

Oliviero, G. (1993). Visual Caption Philosophy. Lombard, IL: DCI Press. 

Poole, C.A., & Maniscalo, M.J. (1982). Degree of Excellence: Resolving the Interpretive Dilemma.
Lombard, IL: DCI Press. 

Sorensen, R. (1977). Exposure to Error: DCI Guidelines and Interpretations. Lombard, IL: DCI
Press. 

Various. (1994). Contest Rules and Judging Manual. Lombard, IL: DCI Press.

    Source: geocities.com/marchingresearch