THE SAIGON PAPERS

DEREK DILLON'S UNPUBLISHED ARTICLES



VirFut Q-Pro©
Public Policy Implications of Quantum Computing

Not United States, but critical states;
Not a borderless world, a world of fractal borders;
Not one currency, an m-currency;
Not global monoculture, a diversely-identical metaculture;
Not English as the imposed onelanguage, Musculpt as universal metalanguage.

time a dynamic factor of forces
In the early 1960s, I became aware of the degree to which American corporate management practices have been modeled, since at least the Civil War era, on the prevailing conventions of American military staffing. Having grown up in a military family, and having been exposed early and often to discussion of the theory of strategy, tactics, contingency and command, I could not help fretting over the linear reductionism imposed on these military disciplines when translated into civil application, or when civilian administrators unschooled in strategic thought dictate decision processes. Witnessing consequences of this late in the decade -- standing at 03:05 on the 31st of January, 1968, inside MACV Headquarters before an expanse of black glass and looking at rocket round after rocket round impact upon the tarmac of Tan Son Nhut Air Force Base -- I could not escape memory of Abdul Said’s lectures on strategic contingency theory at the School of International Service (SIS), The American University. Many times since have I wondered what the world would now be like, had SIS and the Johns Hopkins School of Advanced International Studies, where the emphasis is on field experience, produced the model of the international system informing the American policy imagination, rather than Harvard, Tufts and Stanford; had George Washington University, rather than the Wilson and Fletcher Schools, formalized prevailing notions of civil law and public administration; had the Wharton School of Finance, rather than Harvard Business School, transformed military contingencies projection into multiple scenarios strategic planning for the corporate world.

The theories of escalation control options, of “mixed strategy” and the “minimax”, of Everett algorithms, even of force-structure pyramids and triangle variants drawn on four-color maps can be regarded, in many respects, as civil parody of authentic strategic thought. In the same way that prevailing practice of multiple scenarios planning is abject reductionism relative to contingencies projection, so escalation control theory, for instance, is utmost fogged-in and grounded linearity compared to soaring Clausewitz. At the end of ON WAR, in “Ends in War More Precisely Defined”, Clausewitz uses the following phrase: “…time by a dynamic analogy as a factor of forces”. The great strategist seems confused on this issue. Initially, he says that to arrive at this concept of time in military strategy is “completely wrong”. Just a page later, however, he hedges somewhat and says that time itself can have some effect upon a military situation, but only if the almost defeated foe can no longer hope to bring forth another battle, only if, in effect, the war has already been won. But earlier, back toward the beginning of the book, in “End and Means in War”, he acknowledges, by using the term “duration”, that the enemy can be overcome by wearing him out: mere duration can eventually cause the opponent to lose force. Clausewitz again qualifies this by saying that duration cannot be used to achieve a positive offensive object, for it is a “principle of the pure defensive”. Yet again, in “Assembly of Forces in Time”, he says that, if “time on its own account” can have significant effects tactically, in the realm of strategy these effects diminish to the vanishing point. This confusion was a mark of Clausewitz’s genius! The strategist ran up against the brick wall established by his assumptive framework, and made probes in an attempt to find a way to the other side. Who knows what would have happened to the art of war had Clausewitz not fallen prey to a Polish cholera and died upon his return from the frontier to Breslau?

Clearly, the Clausewitzian view of war and power politics does not admit of the rigidity of physical law that Newtonian physics describes, but, nonetheless, the conceptualization of the whole system of nation-states and their interactional dynamics, in peace and in war, is so governed by analogy to those physical laws that he could not codify a framework radically departing from the constraints thereof. Clausewitz, obviously, was bound by Newton’s concept of absolute space and time. But just think of it! Some three-quarters of a century before Einstein’s Special Relativity, at the very moment Lobatchewsky at Kazan was producing the fundamental ideas for his “pangeometry”, Clausewitz in Berlin brought forth the idea of time as a dynamic factor of forces -- if only to struggle with the concept and reject it in the end. Even General Relativity does not go so far as to make time active, except insofar as it is made space-like. Operator-time plays no part in contemporary consensus physics.

world picture and pure conception
Not only did Clausewitz use Newton-derived concepts like friction, equilibrium, summation of forces, the extremum principle, but he directly borrowed the meta-structure of Newtonian science: the distinction between the physical world-picture and the physical world. In his very first chapter, “What is War?”, he establishes definitions distinguishing between the “pure conception of war” and “war adapted to the real world”. The world-picture and the pure conception are idealized constructs or models cultivated for their heuristic value. If we remove complicating factors like friction, for instance, in creating a model of a physical process, it is much easier to create that model. We are left, however, with the problem of determining the relation of the idealized model to the real world. This latter problem, in the early years of science, was not a matter of much preoccupation. It was only well after the time of Clausewitz, when men like Max Planck came upon the scene, that this became a problem in physics of almost insurmountable proportions. And later yet, with the entry of Heisenberg, the problem became intractable: not only a question of the relation of the physical world to the world-picture, but also a question of the picture of our relation to both. Planck personally addressed a lengthy essay specifically to the subject. But Clausewitz, in the domain of his chosen application, anticipated these difficulties -- somewhat.

rotational logic
Not only did this problem -- the relation between the pure conception and the real world -- present itself in the issue of the relation of time to force, but also in the question of how rigidly one should adapt the logic of Francis Bacon’s NOVUM ORGANUM to the assessment of a military situation. Clausewitz deals with this issue most directly in his discussions of the principle of polarity: simply another way of talking about binary logical relations. He points out that the conflict of interests between opposing commanders is a firm binary exclusion, a true polarity, while the modulation of force between the offensive and the defensive allows no such rigid dichotomy: the polarity, here, exists only in the decision, not in the process. This was a hole -- having to do with the real world -- in the brick wall of Newtonian assumptions, through which Clausewitz, had he looked without reservation, might have seen Sun Tzu, and foreseen Mao Tse-tung.

The logical method of ancient China, in contrast to that of Francis Bacon, posits a complete transparency of opposites: the rotation of yin and yang in the tai chi symbol. This is a pure conception, an idealized model of the modulation of force between the offensive and the defensive which goes so far, in Sun Tzu’s formulation, as to break down even the “true polarity” of interests between opposing commanders. There is a certain “relativity” involved. The Chinese, of course, never felt themselves constrained to play “zero-sum games”. Is it any wonder that Vinegar Joe was so confused by the situation in China? Is it any wonder that so many others were later confused by the situation in Viet Nam?

a hyperspace governing the combat
The non-orientability, the automorphism, the mutual convertibility of the cheng and ch’i forces -- pick any set of opposites you wish: fixing-flanking, diversion-decision -- in any tactical or strategic situation, not only destroys binary logical relations, but paints a new face for war. This “rotational” element in the logic of the meta-structure of the pure conception of war generates, in the full technical mathematical sense of that word, an abstract hyperspace governing the outcome of the combat. The origin of this logical rotation, this propositional spin component, is the animism of the object-made-sacred -- be it talisman, landscape feature, sword, instrument, point-of-origin, cultural seat, whatsoever. Identity of subject and object: their thermodynamic reversibility. A logic of identity-relation, rather than truth-value! Superficially, it appears the opponents in war are forced to adopt what amounts to one or another form of Borel’s “mixed strategy”, which, in the long run, always reduces to the “minimax” touted by von Neumann. But only if viewed superficially through the lens of a non-rotational binary logic. It is not simply the fact that the distinction between combatant and non-combatant has been completely destroyed; it is not merely that we must WHAM’em: win the hearts and minds by grabbing their balls -- be those balls military, economic, or cultural. The fight is no longer for control of physical ground; it is for control of the hyperspace. That’s the hyperspace, not the conscious minds of the people! We have warfare taken, not only to a new threshold of technology or totality or intensity (higher or lower), but to a new level of abstraction in the very idea of war. Now, the elements in tactical and strategic equations are not simply concrete variables (numerical signifiers of the power series of participant force structures), nor are these elements merely first or second order derivatives of the concrete variables integrated over time. Calculus has become obsolete as a tool of strategic planning. One must “tensor” the conflict, ascertain the boundary-values being imposed, determine likely escapements and intrusions of extra-combat free-energy. There are extraordinary reservoirs of free-energy available, not identified in the summation of forces. The hyperspace maps the field of free-energy flux. One must consider, for instance, how constant the entropy surface is upon which the army marshals its force structures. On this basis alone can maximizing escalation control options become self-defeating.

Sun Tzu says: “Those skilled at making the enemy move do so by creating a situation to which he must conform.” He says, moreover, that “ …a skilled commander seeks victory from the situation and does not demand it from his subordinates.” In other words, victory should occur with the same ease that a boulder rolls down a hill. And for the same reason! Because the hyperspace in the cosmological neighborhood of the boulder is curved in such a way that it rolls down the hill of its own accord: gravity -- not as Newton saw it, but as Einstein did! It is not a matter of application of force to the physical body of the boulder; it is a matter of the event-gradients established in the hyperspace.

event gradients, attractors, and the action of protraction
In developing civil application of Sun Tzu’s thought, Mao, the Taoist practitioner, adapted strategic contingencies projection for tactical improvisation in an extraordinary fashion by concerning himself primarily with the total configuration of the environment of the combat. If the event-gradients in the environment were right, then the combat would take care of itself. How did Mao bend the hyperspace of national liberation war? He used time as a topological handle on the space! What else is protraction for in war? Time is a dynamic factor of forces -- on the hyperspace -- as Clausewitz almost knew. The “del operators” that have always tacitly controlled the field of the combat have now been directly mapped on the hyperspace which determines the outcome of the action. A “sorting demon” of a new type has arrived on the field of battle: the action of protraction on the hyperspace. Call it “temporal curl”.

This twisting by Mao of the civil affairs handle constitutes an application of the strategic contingency theory of organizational adaptation, and an interesting expansion thereof: not only does a changing environment modulate organizational behavior through autopoietic resource exchange across boundaries, but the altered sub-unit power relations also impress themselves on the very same environment. All in animistic violation of thermodynamic irreversibility! Which acts first? Active time as a factor of forces is entropically transparent, like Maxwell’s demon! Absent passive linear time, entropy is undefinable. Two fields of contingency coupled in mutual inductance by temporal operations on the hyperspace. We surely have here a path to field theory in sociology.

What does this mean in practical terms? It means that when Clausewitz drew a distinction between “preparations for war” and “war itself”, he was wrong. It is the business of the commander to destroy this distinction. If he can destroy the distinction -- thus changing initial conditions and boundary-values of the conflict -- he can win the war. The political infrastructure that controls those individuals who will never carry arms, whose business it is to manage “the whole litany of subsistence and administration”, in Clausewitz’s phrase, is the primary vehicle available to the commander to bend the hyperspace of the combat so that victory will roll down the hill like a boulder. The idealized, pure conception of this -- in the limit -- is to win the combat without even having to fight. Like the World Wars, much of the Korean War belonged to Newton: in spite of destiny being tick-tocked by a second hand, as MacArthur imagined, spatial modes of thought determined the actions of both sides -- the logic of the Inchon Landing being the most obvious case in point. Compression, envelopment, isolating component, pincer, flank, vise, geographical potential, end run, salient. The dilemmas of the Yalu, however, were points of entry into Mao’s world -- and America wisely hung back. Viet Nam, in contrast, fully belonged to Einstein: much more abstract, non-spatial modes of thought are required to comprehend what actually transpired there, because an unfamiliar concept of time was masterfully invoked. It was not Giap who slew Westmoreland at Tet; it was Einstein who slew Newton. And this, of course, is to sidestep the question of how Sun Tzu knew something of what Einstein came to know.

In adapting strategic contingency theory for civil affairs, Mao applied the rotational logic of Taoism to the problem. At Harvard Business School, and later Stanford Research Institute International, the logic used was that of Francis Bacon. Mao came up with operator-time as a means to twist the hyperspace so as to form event-gradients (or chaotic “basins of attraction”), which “pull” action into conformal patterns. Harvard and Stanford came up with multiple scenarios lineally conceived, which, in strategic planning practice -- and in conformity with the Anglo-Saxon-Protestant linear view of history as converging on an omega point: Kairos -- involves choosing the most likely or most desirable scenario, relative to which comprehensive planning techniques are then applied. The difference between these is so stark as to be staggering. The Chinese-Einstein-Chaos-Complexity way is to look “inside” of external events (“basins of attraction”), while the Harvard-SRI-Newton model focuses on interactions of external forces, environmental trends, patterns of events. It is easy enough to see why Royal Dutch Shell would embrace the latter model; and very difficult, indeed, to understand implications of the fact that Singapore embraces the same model, while simultaneously espousing “traditional Asian (i.e., Chinese) values”. The Singapore case is far beyond being merely staggering.

the linearity of multiple scenarios
A scenario is something like the story line of a popular-fiction novel. It is a future history, a linear sequence of happenings conceived as being logically and causally connected. As SRI developed the method, scenario logics are sets of binary exclusions: the sort of thing Francis Bacon just loved and Clausewitz struggled to get free of. Hence, scenario dimensions or formats (the story’s themes) generally are limited to two possible states (she marries him; she doesn’t marry him); and the number of formats depends upon how many external societal driving forces are chosen as instrumental in a given scenario or planning exercise.

A societal driving force could be a long-developing drought, a simmering conflict over an oil field, a population trend driven by a one-child policy, the waxing and waning of a moral convention (her “decision” to marry him came in the context of a shot-gun wedding). Fixing the choices of formats and states according to identified properties of the driving forces generates logic sets, which are often graphically represented as axial arrays of arrows. The concept of history implied here, as determined by vector sums (the arrows) with Boolean-only properties, is so consonant with the action-based sociologies produced at Harvard and Berkeley in the 1950s (which assume wholly distinct actors), it is not at all surprising that there should be a strong great-man-theory-of-history dimension to scenarios planning. This comes in with stakeholder identification and analysis. Stakeholders are the great men of scenario future history whose discretionary acts relative to the key decision factors (opportunities versus threats) coalesce into the societal driving forces responsible for the logic sets. Pretty neat circle.
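
To make the mechanics concrete -- a minimal sketch in Python of the logic-set generation just described, with hypothetical driving forces and states standing in for a real planning exercise:

    from itertools import product

    # Hypothetical scenario formats (themes), each limited to two states,
    # in the manner of the method described above.
    formats = {
        "drought":           ("prolonged", "breaks"),
        "oilfield_conflict": ("simmers", "erupts"),
        "moral_convention":  ("waxes", "wanes"),
    }

    # The logic set: every Boolean combination of format states.
    # Three two-state formats yield 2**3 = 8 candidate scenarios.
    logic_set = list(product(*formats.values()))
    for states in logic_set:
        print(dict(zip(formats, states)))

Eight branches from three binary exclusions: the arithmetic of Bacon’s logic, nothing more.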

Stakeholders are those with a stake, those whose interests will be affected by scenario outcomes: leaders, managers, specialists, institutions, stockholders, customers, PACs, public interest groups, and other such identifiable discrete entities which are inherently in conflict (as action-based sociologies and Newton’s atomistic billiard-ball vector sums describe). Key decision factors are positive and negative environmental conditions affecting the scenario: positive factors are opportunities; negative factors, threats. Opportunities and threats, of course, are true polarities -- as Clausewitz described. They are not inclined to turn into their opposites, as Sun Tzu imagined. There is no “relativity” between them, no tai chi rotation in their logical exclusion properties. On practical considerations, three or four scenarios only are generated by weighting differently the chosen societal driving forces (if traditional morality is given a strong weight in Scenario “C”, a shot-gun wedding is more likely than in Scenario “A” where the effects of TV have undermined tradition).

People often get confused by having more than one vision of the future, so, on practical considerations, late in the strategic planning process the most likely or the most desirable scenario is often chosen, within which prescriptive comprehensive planning techniques are applied. Relative to the future history outlined by the chosen scenario, plans are made to take advantage of the projected opportunities and to counter the projected threats by supporting the right stakeholders, trends, environmental factors, driving forces -- and opposing the others. Depending upon sophistication of the persons involved, some non-standard practices can come into play here in the implementation stage: disjointed incrementalism, tactical improvisation (two terms for much the same thing: being creative in the moment in response to changing circumstances, and not being compulsive about sticking to the long-range plan).
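
A sketch of that choosing step, continuing the toy example above; the weights and the additive scoring rule are invented for illustration:

    # Weight the chosen driving forces differently per scenario, score
    # each, then plan against the single most likely future -- the
    # practice described here. All numbers are hypothetical.
    weights = {
        "scenario_A": {"drought": 0.2, "oilfield_conflict": 0.3, "moral_convention": 0.1},
        "scenario_C": {"drought": 0.2, "oilfield_conflict": 0.3, "moral_convention": 0.9},
    }
    likelihood = {name: sum(w.values()) for name, w in weights.items()}
    chosen = max(likelihood, key=likelihood.get)
    print(chosen)   # the one future against which comprehensive plans are laid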

In multiple scenarios strategic planning, time, of course, is not an instrument used by the planner. It is a passive backdrop to the linear sequence of happenings constituting the story line of the given scenario or future history; it is the kind of passive backdrop which is a sort-of-almost ruler, against which the flow of events can be measured. For the planner from Harvard and SRI, time is not a “dynamic factor of forces” in the Clausewitzian sense. This civil planning process is very far from military strategic contingencies planning, and even farther from the Einsteinian innovations introduced into military science by Mao.

the linearity of multiple worlds
Physicists early in the century decided to interpret quantum mechanics in terms of probabilities for the same reason multiple scenarios planners presently choose the most likely scenario: people (including physicists) often get confused by having more than one vision of the future. This confusion, of course, is a people problem, not a descriptor of physical reality or historical dynamics -- whatever the magnitude of efforts made to justify the contrary. It is relative to the practicalities of implementation that planners choose to choose the most likely scenario. The same situation prevails in quantum physics, where the practicalities of the act of measurement are understood as collapsing all the simultaneous states of the system down to the one which turns out to have been most likely. Before this collapse, all possible elementary particle scenarios were equally real. But how can you do anything with a system where all possibilities are real? How can you control it? How can you plan experiments? How can you hold everything equal while varying one thing so you can find out how it works? If all scenarios are real, time cannot be what we know it is. How can you think about something when time isn’t what it is? It wouldn’t make any sense; and even if it did, the scientific method wouldn’t work. Such considerations weighed heavily in the collective decision to adopt probabilities, rather than the new logics-of-all-possibilities created/discovered at Columbia University by Emil Post in the teens of the 20th century.

Eighty years have passed and still Post’s uTm logics have not been adopted by the physicists. These logics are not likely to be taken into the family of physicists in the next few years, but the time is approaching when they will be: the advent of quantum computers guarantees it. If all states of a system are simultaneous and equally real, then even the notion of virtual reality is inadequate insofar as a “real world” serves as the basis of a holographic reference beam: all “real worlds” would be equally unreal. But the reason why people are confused by the vision of more than one future is the clear implication that somehow they must be in each of those futures: all incarnations would be simultaneous, so to speak. Such a thing would confuse most people, while sufferers of hysteria with their multiple personalities seem best equipped to not be confused -- if it were not for the fact that their personalities generally are not aware of eachotherself. What quantum computers soon will be demonstrating is: While all “real worlds” are equally unreal, the world of the “unreal” is the truly real world. Post’s uTm logics-of-all-possibilities describe the TrulyRealReallyTrue, and after much human resistance quantum computers will insure acceptance of these logics.

superposition in contingency planning
Military contingency planners are interested in a plan that must succeed in all possible futures. A contingency is a situation where “them” has been allowed a choice. This is different from a branch point in a decision or logic tree, which is a situation where we must make a choice. If we make our decisions well enough, their decisions will be made for them. Contingencies arise when something has been overlooked. Contingency planners try to discover all the things they are overlooking, and how to insure that once discovered those things are never again overlooked. A contingency, once incorporated into the plan, is no longer a contingency; it is a branch point in the decision tree. Multiple contingencies planning is more like simulating the projection visualizations of a chess Grand Master than the story line production of scenarios generation. While any experience with simulacra is useful training, I suppose I was particularly fortunate in that my father was stationed at Wright-Patterson AFB Systems Command in the early 1960s: as a teenager, I was able to climb in and out of the experimental flight simulators and gaze at the early autopilot displays developed by AFSC, which were the precursors of virtual reality technology -- and have Sunday dinner with the scientists developing them. In military planning there is only one battle plan, that is, one scenario: We win! But the “logic tree” of how that win is to be accomplished can become so branched by possible contingency-forced decisions that even the simplest of prospective engagements can begin to look like neural network diagrams. This is where a little bit of chess and all the flight simulation you can get are useful aids to visualization. Each decision branching creates a new unreal “real world” simultaneous with all the other decision-created unreal “real worlds”: every branch keeps on branching and branching and branching. And they may branch back into one another! Clear acetate sheets are useful here, with the decision trees boldly etched thereon, as all unreal “real worlds” should be regarded as transparent -- and since they are equally “real” and simultaneous, they are stacked on top of one another in overlay. The plan that must succeed in all possible futures is that implicit in the whole stack of acetate sheets, not any one of them, or any subset thereof. So, how do you make sense of what you see looking down through all the sheets in the whole stack? Visualization, imagination, intuition, gut feeling, and inner voices traditionally have been guiding lights used to illuminate that confusing stack of more than one vision of the future. What quantum computers will make available for wringing sense out of the whole stack is Post’s uTm logics-of-all-possibilities.
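
The quantifier structure hidden in that distinction fits in a few lines -- a toy game tree in Python, with an invented encoding: at our branch points one good branch suffices; at their contingencies every branch must be covered:

    # "ours" nodes are decision points (we pick a branch); "theirs" nodes
    # are contingencies (they pick). A plan succeeds in all possible
    # futures iff the any/all alternation comes out true at the root.
    def succeeds(node):
        kind, children = node[0], node[1:]
        if kind == "leaf":
            return children[0] == "win"
        if kind == "ours":                    # branch point: we choose
            return any(succeeds(c) for c in children)
        if kind == "theirs":                  # contingency: they choose
            return all(succeeds(c) for c in children)

    plan = ("ours",
            ("theirs", ("leaf", "win"), ("leaf", "win")),
            ("theirs", ("leaf", "win"), ("leaf", "lose")))
    print(succeeds(plan))   # True: the first branch covers every contingency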

m-valued logics and identity transparency
What is uTm? The m is the number of possible logical-values; u the number of such values permitted to any given proposition. In traditional Aristotelean-Baconian logic, a proposition is defined as any statement that can be determined to be true or false. There are two possible logical-values (the m factor, that is). But the statement must be either one or the other; it cannot be both true and false at the same time. So only one value is permitted per proposition. This traditional logic, then, is single-valued (the u factor, that is). Two-valued binary logic is single-valued. Confusing? Post’s system is better. Traditional logic is a 1T2 logic. Why Post chose “T” I don’t know. Maybe for logic tree. In Post’s logics, the factors u and m can take on any number. “This statement is false” is a proposition in a 2T2 logic, but such interpretation does not lead very far, so long as the values are regarded as “truth-values” -- as Post appears to have regarded them. This is why he maintained that 1T2 logic is the fundamental order of value, and why he called uTm logics m-valued “truth systems”. But what does it mean for a given proposition to be both true and false simultaneously, as in the 2T2 case? Even in ancient Chinese logic, Yin becomes Yang as Yang becomes Yin as the tai chi wheel spins out time cycles (the wheel being operator-time). Much more complexly, what does it mean for a proposition to have u simultaneous valid contradictory answers? What could, say, a 3T4 logic possibly refer to? Once the values of u and m are understood as relating to degrees of identity transparency -- not truth-value -- it’s a different matter altogether.
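
Read this way, u and m admit a bare-bones rendering -- a sketch of my reading of Post’s factors, with the m values reduced to anonymous integers:

    from itertools import combinations

    # m possible logical values in the system; each proposition carries
    # u of them simultaneously. Value names are illustrative only.
    def propositions(u, m):
        values = range(m)                # the m logical values
        return list(combinations(values, u))

    print(propositions(1, 2))   # 1T2: classical logic -> [(0,), (1,)]
    print(propositions(2, 2))   # 2T2: the both-true-and-false case -> [(0, 1)]
    print(propositions(3, 4))   # 3T4: four values, three held at once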

Identity transparency is animism: unity of the subject and object. The object, too, is animated; and the subject can experience an identity with it. And maybe the object can experience identity with the subject -- which in certain times and places has been referred to as possession. Empathy, identification, indulgence, sympathy, compassion, fellow-feeling, obsession, resonance, possession, participation, contagion, invasion: each of these is a different degree and tone of identity transparency. We are all familiar with the meaning of the term “identity crisis”. A person cannot make up his mind who he is, what he wants to be: doctor, dentist, soldier, sailor, sales manager…drunk. The crisis is the inability to make a choice; one must be either this or that, not both/and. Identity is singular, not plural; it is a result of drawing distinctions, not removing them. This is 1T2 identity. Individualism is its consummate expression. But we know that in other cultures and in other times there have been other sorts of identity: the against-distinction identity of the Japanese salaryman; the fused identity of the rhythm-driven African field worker team; the ecstatic identity of the evangelical holy-roller; the wearing-many-hats identity of the smooth operator official; the group identity of the dolphin pod; the identity implied by collective insect behaviors, collective bacterial behaviors, collective behaviors of electrons and bubbles of the breaking wave. uTm logics interpreted relative to identity transparency would involve m representing how many possible identity states are available in the given domain, while u would represent how many of these are simultaneously permitted. A binary information unit, a bit, exists in the Boolean information domain of digital computers, and possesses 1T2 identity. A q-bit, a quantum bit, exists in the information domain of quantum computing, and possesses uTm identity by virtue of its quantum superposition capabilities. Each u is an identity tag relative to one of the m multiworlds of the quantum superposition. No matter how fast -- billions of times faster than presently -- 1T2 computers get, no matter how massively parallel their architectures become, they will always be the lowest order of uTm devices.
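
The bit/q-bit contrast can be miniaturized as follows -- a sketch only, with an equal superposition standing in for the general case:

    import random

    # A classical bit is 1T2: one value held, out of two possible.
    bit = 0

    # A q-bit assigns a complex amplitude to each of its m = 2 basis
    # identities; both are held at once until measurement collapses them.
    amp0, amp1 = 1 / 2**0.5, 1 / 2**0.5          # equal superposition
    assert abs(abs(amp0)**2 + abs(amp1)**2 - 1) < 1e-12

    def measure():
        """Collapse to a single 1T2 identity, with Born-rule probability."""
        return 0 if random.random() < abs(amp0)**2 else 1

    print([measure() for _ in range(10)])        # e.g. [0, 1, 1, 0, ...]

Everything before the call to measure() lives in the superposed register; everything after is 1T2 again.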

Given that most massed warfare in the human past (and a good case can be made for this) has been between those cultures based on 1T2 logic and those cultures based on rotational forms of 1T2 logic (i.e., cultures rooted in some type of animism), we can be relatively certain that one class of future warfare will be between 1T2 androids and full-blown uTm androids. And the root cause of those wars will have been Turing’s decision to abide by the decision of British Intelligence not to consider transfinite numbers of operations -- even though everything in his pre-Bletchley Park orientation disposed him to do so. This decision -- based on acquiescing to cultural blinders, rather than adhering to the realities of mathematics -- tied computer science from the beginning to only 1T2 systematics. The world of the transfinite, of the Cantor universe, which Turing was fully immersed in by the early 1930s, is the domain of uTm systematics. At historical cusp points, on such issues of moral courage does human fate hang. A future shift to uTm systematics will not be made smoothly, in absence of turmoil. There will be enormous conflict.

operator-time and the Dedekind cut
Clausewitz’s notion of time as a “dynamic factor of forces” is one way of stating that the time-reference of partial derivatives and integrals in the differential and integral calculus is epiphenomenal of the nesting properties of infinite sets implicit in the notion of a limit as a Dedekind “cut”. If, then, the very definition of operator-time is “that which decomposes (“cuts”) transfinite set-nests into limit-cycle infinite sequences (tai chi spinors),” we have very interesting implications about how Mao Tse-tung’s rotational-logic hyperspace is generated. I’ve begun to realize that the process of mathematically reconstructing the Hilbert space of quantum mechanics as an m-valued-logic hyperspace can be understood as a “multiplexing” problem. Boolean logic is 1T2. Post’s logics are uTm. In binary logic trees, there are two branches leaving every switch (decision) and only one path can be taken. As 1 goes to u and 2 goes to m, the simultaneous paths and multiple branching arrays become a hellish multiplexing conundrum, but this is treating the logic problem with simply-connected notions of space (i.e., in this case, with one-dimensional, selfsame, not skew-parallel threads or strings), when m-valued-logic Hilbert space would be peppered with shadow points, u cross m, upon each decision -- generated by a process analogous to Everett’s notion of the universe splitting into multiworlds upon each measurement, as given in the “relative-state formulation of quantum mechanics” (the very thing Turing disallowed from consideration by ruling out transfinite numbers of operations!). Solving the laser multiplexing problem required for a non-reference-beam, non-photographically-based holography is at the same time to construct an m-valued-logic Hilbert space (and to gain considerable insight into the holographic model of brain function).
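
For reference, the two standard constructions this paragraph leans on, in conventional notation; operator-time itself is left informal here, as in the text:

    % A Dedekind cut: a partition (A, B) of the rationals with every
    % element of A below every element of B, A having no greatest element.
    (A, B):\quad A \cup B = \mathbb{Q},\quad A \cap B = \varnothing,\quad
    \forall a \in A\ \forall b \in B:\ a < b
    % The limit beneath every derivative and integral -- an infinite
    % nest of conditions, the set-nesting referred to above:
    \lim_{n \to \infty} x_n = L \iff
    \forall \varepsilon > 0\ \exists N\ \forall n > N:\ |x_n - L| < \varepsilon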

Musculpt as mathematical notation
One problem encountered in undertaking this task of constructing m-valued-logic Hilbert space, which involves learning how to think in uTm logics, is that all existing forms of mathematical notation work against the process of learning how to think in uTm logics. It is too complex to do without notation, but as soon as you try to notate it, the notation becomes the primary obstacle. The very fact that notation is written across the page -- even if you simply “register” the aesthetic properties of the patterns displayed, rather than scanning it lineally -- beats against the stacked simultaneity uTm logics most essentially deal in. Even if the notation is apprehended simply as a trigger to already codified chains of nested stacks of associations, still, the very fact of using a series of static self-identical symbols precludes uTm type explication. The problems involved are similar to those encountered in some avant-garde art music composition -- composition of the type which uses the act of composition as a means to gain greater insight into the nature of space and time. The act of translating the inner aural imagination into written notation, thence into sound production, works against, rather than facilitates, the process of gaining greater insight, which is the primary purpose of the enterprise of such composition. What needs to be notated is not notes, but the empty space between the notes! At any given instant in the duration of the piece, the total structure of the present sound configures a spatial correlate. Here we seek guidance from harmony theorist Heinrich Schenker, musicologist Victor Zuckerkandl, and composer Karlheinz Stockhausen: tonal space as place; tonal time-gestalt; interval as particle of sound, not tone; musical color as direction; point-field density; interval fields; groups; shapes as opposed to motives. Hear not the tones themselves, but the space between the tones! Hear the interaction channel resonance invoked by the interval connecting the tones; it is through this interaction, this relative-state, that musical spacetime is. This is well past Newton, well past the need for musical form based on the motion of tones (bodies) in space. This idealized motion of empty space itself (the interval) is the geometrodynamics by which configuration of no-thing generates form of everything. Zuckerkandl says: “Chordal steps lead from one sound-state to all tonal space…they seem steps of tonal space rather than in tonal space.” Sounds like a bootstrapped stack of interaction channel resonances to me! Chord as effect of topological operation on tonal space. Chords as the domain structures created by operator-time. The chord is to tonal space as the 3-geometry is to twistorized superspace. We are no longer interested in the dynamic quality of tone, but in the motion patterns of the empty interval between the tones. The time for meta-music of the Musculpt manifold has arrived! The sculpture in music-sculpture (Musculpt) is there because the presence of the art object is the notation problem in the plastic arts.

Mallarmé’s concern with the “anterior sky” seen through his “Windowpane” was the same concern Marcel Duchamp later displayed in fretting for two decades over his “Large Glass”. But the person who most penetrated this problem of the art object as notation and obstacle to insight was the painter Rice Pereira in her brilliant essay on the “layered transparent”, entitled “THE TRANSCENDENTAL FORMAL LOGIC OF THE INFINITE -- Evolution of Cultural Forms or the Transformation of Nothing and the Paradox of Space.” (For a beautiful recent description of certain perceptual aspects of m-valued-logic Hilbert space written by a student of both mathematics and film, see the Mapping Awareness piece by Lawrence Au: laau@erols.com). Think about sonic holograms in relation to light holograms as the way to bring order to the acoustico-optical Musculpt continuum, the layered transparent. Aesthetic form should be the notational equivalent of the seen sound, the heard image. There should be no distinction between notation and form! If one had a sound hologram with identical interference patterns to that of a light hologram, would not there be a relationship between the projected light structure and the transmitted sound-complex? Consider the elements of the class of such relationships to be the set of operational and functional symbols -- constant signs; numerical, sentential, and predicate variables -- of the formalized language, Musculpt. Any such given acoustico-optical phoneme-morpheme would have a multiplicity of variable properties, or “facets”, to which u cross m factors could be associated in representing propositions in uTm logics. In this autopoietic case, there would be no notation problem, because the notated statement would be the equivalent of the seen sound, the heard image. The notation would work with the efforts of the mathematician, composer, artist, not against those efforts. In the theory of autopoiesis, by contrast to that of chaotic percolation modeling, the operators have multiple, individually alterable, indices -- they are not simply on-or-off binary signifiers.

m-valued-logic Hilbert space and nonlinear perspective
But, even so, this is still a bit too simple-minded to provide a path to m-valued-logic Hilbert space. The string theory underlying the multiplexing for the holographic generation of sounded-forms still assumes self-identical strings, which is not the case under uTm logics. This is a BIG problem because any string in any 3-space has simple identity, when what needs to be represented is uTm classes of identity transparency bridging multiple sheets in the layered transparent. Just as the theory of linear perspective in painting was a necessary prerequisite to the mathematically codified space and time which set the stage for the advent of modern science, so a theory of “nonlinear perspective” in Musculpt is a prerequisite to temporal operations on m-valued-logic Hilbert space which will set the stage for the post-scientific era. Higher dimensional geometrical figures, also known as polytopes, are not enough. Any n-polytope in n-dimensional Hilbert space still has a 1T2 logic configuration -- meaning, for instance, that it has simple identity. When you wear 3-space-colored glasses, so to speak, to view uTm identity transparency, you see this transparency, for instance, as quantum tunneling and relativistic wormholes. Is this a hint about how to approach the multiplexing problem?

The notations of the formalized language called Musculpt must be written within a 3-space representation that simulates features of m-valued-logic Hilbert space with a significant degree of verisimilitude, if the form of notation is to further the purposes of the user. Wishing to represent features of m-valued-logic Hilbert space in 3-space, we must look for “shadows” of these features in the “cave” of 3-space, for these “shadows” are the basis of what we are calling a “nonlinear perspective”. During the Renaissance, Dürer, da Vinci, and Alberti developed a string theory of perspective by literally tying strings in a square meshwork across a frame, and then viewing architecture and landscape through this mesh. In linear perspective, on the 2-space picture plane of post-Renaissance painting, parallel lines converge to a vanishing point on the horizon, whereas in 3-space they remain parallel. In representing 3-space on a surface, a feature characteristic of 3-space is transformed: parallel lines converge. In nonlinear perspective, instead of this convergence, parallel lines have something like an “overtone series”: fiber-bundles (number pair arithmetics representing skew products) of sister parallels relative to the uTm-generated laminates of Hilbert space. uTm logics, mapped across n-dimensional Hilbert space, laminate it into orders of logical domains by degree of identity transparency, that is, make it a “layered transparent” like Rice Pereira imagined. This is another way of arriving at a multi-sheet model of the universe, a way corresponding to Andrei Sakharov’s description of the lamination of Novikov dust under influence of self-organizing gravitational fields. A 3-space “geometrical shadow” of this uTm-generated lamination is the scale-level of multiscale dynamic systems, the nestedness, that is, characterized by fractals. In the mid-1950s, R. L. Brown and J. G. Bennett, in developing a geometry where entities are not assumed to be identical to themselves, where, that is, they can simultaneously be both other and same, arrived at the notion of “pencils of skew-parallels” mapped in an n-dimensional manifold with null-vectors. This is the geometry required by fiber-bundle arithmetics and uTm logics (Grassmann algebras, Riemann surfaces, and hypernumbers beyond square-root-of-minus-one are also required); this is the geometry we need in order to arrive at a better understanding of the diversity inherent in any truly non-coercive unity, of the distincte unum prerequisite to authentic identity transparency. But, when Brown and Bennett attempted to describe higher dimensional “diversely identical skew-cubes”, they were limited to schematic diagrams with dotted lines or to matrix algebra notation, in the same way that Coxeter was limited in giving his account of regular higher dimensional polytopes.
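
The linear half of the contrast is easily simulated; a pinhole-projection sketch with illustrative coordinates (the nonlinear, skew-parallel half is precisely what resists such a sketch):

    # Linear perspective as described above: 3-space parallels converge
    # on the picture plane. Pinhole at the origin, picture plane at z = 1.
    def project(x, y, z):
        return (x / z, y / z)            # the perspective divide

    # Two parallel lines in 3-space, direction (0, 0, 1), offset +/-1 in x.
    for z in (1, 10, 100, 1000):
        print(project(-1.0, 0.0, z), project(1.0, 0.0, z))
    # Both tracks close on (0.0, 0.0), the vanishing point of that
    # direction: in 3-space the lines never meet; on the plane they do.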

notation for the theory, the theory itself
Since, in the present case, the notation for the theory is the theory itself, the theory cannot be written before it is used; it cannot, that is, be read before it evolves into a natural language. Renaissance artists could work with a string-mesh frame and make significant progress in developing linear perspective. The process of creating nonlinear perspective is more techno-based, and therefore relatively difficult for individuals to approach. Nonetheless, significant contributions have been made, and not only by composers of avant-garde art music. The central problem is to identify “shadows” of laminated m-valued-logic Hilbert space, and find ways to represent those “shadows” in the “cave” of 3-space with a significant degree of verisimilitude. Kandinsky once taught that a given geometrical shape should, can or does take on only a certain color. What does it mean to maintain that a triangle is or can be, say, only yellow? Was this just a crazy idea, or was it an attempt to represent a higher dimensional “shadow” in 3-space? Mondrian believed that, in painting, there should or can be only vertical and horizontal lines. Is this just another crazy idea produced by a man who assiduously studied esoteric spiritual geometries? Or did he somehow realize that multivalued skew-parallel relations are apprehended by the everyday mind as if rectilinear? Is color -- that is, wave length -- a way the brain uses to encode skew-parallel geometries so as to make them at least minimally meaningful for the limited capacities of the everyday mind? Was this what Kandinsky started to understand? Ellsworth Kelly frees shape from its background. This is the 2-dimensional case of Gauss’ concern as he created metric geometry. Kelly’s liberated shapes, once saturated with color, define their own spaces. This was part of Goethe’s concern as he developed his theory of color. It is by such means as these, in the evolution of Musculpt as a formalized language, that the multiple “facets” of a given acoustico-optical phoneme-morpheme can be associated with the u cross m factors necessary to represent a proposition in a given uTm logic. What would it look like, this Musculpt? It might look like most anything. The abstract films of Jordan Belson surely provide an impression of one possible development. The visual phenomena -- dynamic polymorph cinerama -- associated with autogenic brain discharges and abreaction are also suggestive. What would it sound like? Again, most anything. Not only Stockhausen. What about, say, Lucia Hwong?

Another contribution to identification of “shadows” in the “cave” of 3-space comes from the optical explorations of artist James Turrell. Homogeneous low illumination dissolves the visual object into idioretinal light and brings the “Novikov dust” of visual space to the forefront as Ganzfeld. These particles of “dust” are hard to resolve, and there is a good reason why. Optical physicist Rudolf K. Luneburg demonstrated that there is no absolute localization in binocular visual space. The vanishing point to which parallel lines converge in linear perspective actually vanishes! Why? Why is there no absolute localization? Luneburg discovered that visual space is a non-Euclidean Gaussian metrical space, exhibiting properties consistent with Einstein’s Special Relativity theory, namely Lorentz contraction. Objects get shorter the faster our eyes see them! Because of the variable properties of the metric, and because of Lorentz contraction, no point, no particle of idioretinal “dust”, can be absolutely localized in visual space. This is a deep, dark “shadow” in the “cave” of 3-space. The shortening of the object is an indication that ponderable space is being foreshortened, topologically warped, by operator-time. Operator-time changes the order of uTm logics at limiting values of dynamic variables (like velocity). The point under consideration, which cannot be absolutely localized, has a different location relative to each of the laminations composing m-valued-logic Hilbert space. Every point, then, has its complement of shadow points, just as every line has its “overtone series” of skew-parallels. These shadow points provide a certain perspective on the notion that the center is everywhere: given transfinite sets of laminations under uTm logics, any point exists at all points. These points are also suggestive concerning possible quantum-dot architectures of q-bitic processors using uTm logics for quantum computing. A way to get some feeling for the world thus constituted is to climb into one of artist Yayoi Kusama’s polka-dot-universe tactile environments. While there, ask yourself, What is the relation between skew-parallels and shadow points?
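
The contraction invoked here is the standard special-relativistic one, which for the record reads:

    % Lorentz contraction: a length L_0 at rest measures, at velocity v,
    L = L_0 \sqrt{1 - v^2/c^2}
    % so L -> 0 as v -> c: localization fails toward the limiting velocity.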

pilot fixation, operator-time, and quantum tunneling with skew-parallels
Fighter pilots climb into flight simulators for many reasons, one of which is to get some hands-on sensory experience of pilot fixation syndrome (which Yayoi Kusama, by one means or another, appears to have much explored). As the pilot increases his velocity flying at low altitudes, the quantity of percepts he receives accelerates proportionately. If this continues to a critical point, cognitive overload sets in and there is the onset of pilot fixation syndrome. With this onset comes a progressive diminishment of time-rate perception and distance recognition. Implied in the phenomenology of pilot fixation syndrome is a limiting velocity of cognition of percepts, a baud rate of consciousness. As that limiting baud rate is approached all the laws of Special Relativity apply, as the work of Luneburg in part demonstrates and explains. This syndrome is another “shadow” in the “cave” of 3-space. Topological changes in perceived time and space indicate the approach to a lamination boundary in m-valued-logic Hilbert space, which, in the realm of logical cognition, envelops any unreal “real world” 3-space. Topological transformations of Musculpt acoustico-optical phoneme-morpheme forms, then, would represent uTm propositions in the layered transparent of m-valued-logic Hilbert space. This is to say that a static n-dimensional form with uTm-logic properties maps (via Grassmann algebras and pencils of skew-parallels) as time, movement, and transformation in 3-space (of course, with further elaboration under General Relativity, the more correct statement would be of 3-space). Consequently, one feature of our nonlinear perspective is form in process (more generally, form of process). In the realm of hypernumber arithmetics it would, thus, be appropriate to speak of the acceleration of both cardinal and ordinal quantities. In the realm of uTm logics it would be appropriate to ask what is the relationship of topological genus to u. All of this is of the essence of operator-time. Operator-time, simply put, is any operation in m-valued-logic Hilbert space which gives rise to time-like properties in 3-space.

Let us again return to the binary logic tree. Every switch (decision) has two branches (one leg coming in and two branches going out in the shape of a “Y”), and only one branch can be taken at a time: this is a 1T2 logic switch. As 1 goes to u and 2 goes to m, not only does the number of branches sitting on top of the leg increase without bound, but the number of those branches that can be simultaneously taken also increases without bound. If we now put on our 3-space-colored glasses and view such switches as quantum tunneling and relativistic wormholes, does the view we get provide a hint about how to approach the multiplexing problem? This problem is a problem in holography: all the information of the whole is contained in each part. Non-simple identity is a given, as is the case with skew-parallels and shadow points. The shadow points can be regarded as the multiple tips (Hydra’s heads) of a branched microtubule or optical fiber which has split into a pencil of skew-parallels. When the (1T2 to uTm) logic “split” into the pencil of skew-parallels takes place, the microtubule, by definition, is no longer in 3-space, because the properties of 3-space exist in conformity with 1T2 logic, with which, following the “split”, the microtubule no longer abides. To thus “split” is to engage in quantum tunneling. To enter the tunnel is to disappear from the purview of those wearing 3-space-colored glasses. Is there a way this process can be simulated in 3-space with sufficient verisimilitude as to make our nonlinear perspective practical of realization?
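
Counting the switch states makes the explosion visible -- a sketch, with u and m chosen arbitrarily:

    from math import comb

    # A 1T2 switch: m = 2 branches, u = 1 taken -> 2 possible states.
    # A uTm switch: u of m branches taken simultaneously -> C(m, u)
    # co-present path combinations per switch.
    def switch_states(u, m):
        return comb(m, u)

    print(switch_states(1, 2))      # 2    -- the binary tree above
    print(switch_states(2, 4))      # 6    -- a 2T4 switch
    print(switch_states(3, 4)**5)   # 1024 -- a depth-5 tree of 3T4 switches

The 1T2 tree grows as 2 to the depth; the uTm tree grows as C(m, u) to the depth, with the taken branches co-present rather than exclusive -- which is what defeats acetate and demands the multiplexing discussed above.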

If a laser beam x number of angstroms below the visible spectrum intersects a laser beam x number of angstroms above the visible spectrum, a point of light appears at the intersection with no visible source or attachment. Enough such points and we have a shape floating in 3-space. Set the points in motion by turning laser beams on and off in coordinated fashion and the shape becomes a moving flowing form that can be viewed from positions at all 360 degrees around it. Change the wave lengths of the lasers and the form is a colored form. Add musical point-field densities, interval fields, tonal time-gestalts and suddenly we have an autopoietic Musculpt phoneme-morpheme -- if, that is, the multiple acoustico-optical variables are associated (metareferenced) with factors such as those discussed above. The invisibility of the projecting laser beams is a simulation of the tunneling effect which occurs with the 1T2 to uTm logic “split”, thus making it possible for us to begin to “paint” using nonlinear perspective. The micro-physics transpiring within the laser device and its projected beam (e.g., the collective behaviors of the involved electrons, electron parcels, and electron clouds) is an expression of the uTm logics, fiber-bundle arithmetics, and Grassmann algebras associated with the pencils of skew-parallels involved. It is by such means as this that the coherent radiation generated by superconductant DNA communicates genetic information via microtubules. It is by such means as this that intraneuronal-DNA-generated coherent radiation projects neural holograms. It is by such means as this that the reference level of biological clock-CNS-immune system interaction is maintained. Start working with projected Musculpt phoneme-morphemes as topological notation of propositions in uTm logics and a language will evolve through use, a language one dialect of which may even be the generative metalanguage of the brain.
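
Whatever one makes of the frequency-mixing optics (the visible point at the crossing is conjecture here, not established physics), the addressing geometry is straightforward -- a sketch of where such a point would sit, with beam origins and directions invented:

    import numpy as np

    # A display point is addressed by two beams; their crossing (here,
    # the midpoint of the closest approach of two lines p + t*d) is
    # where the visible point would appear.
    def crossing(p1, d1, p2, d2):
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        n = np.cross(d1, d2)
        t1 = np.dot(np.cross(p2 - p1, d2), n) / np.dot(n, n)
        t2 = np.dot(np.cross(p2 - p1, d1), n) / np.dot(n, n)
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

    p = crossing(np.array([0., 0., 0.]), np.array([1., 1., 0.]),
                 np.array([2., 0., 0.]), np.array([-1., 1., 0.]))
    print(p)   # [1. 1. 0.] -- steer many such crossings to trace a form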

symorphonic planning, fractal entrapment, and m-valued exchange
In the realm of public policy, however, other uses will be found. Instead of the city planning office submitting to the mayor its new land-use prescriptions, changed zoning ordinance regime, and associated tax structure alterations under the revised version of the comprehensive mega-urban-region development master plan, it will submit a Musculpt “symorphony” recorded in VirFut Q-Pro (the Musculpt planning application software which utilizes uTm logics: Virtual Futures Q-bitic Projection). Given a “symorphonic” language called Musculpt, which allows notation of propositions in uTm logics, a far more sophisticated approach to formulating the strategic plan that will succeed in all possible futures becomes practical. Each scenario is a 1T2 future history. Already, in the standard SRI methodology, the scenario logics, the key decision factors, the time-lines depicting flow of events, and so on are frequently represented graphically. These graphic techniques could be generalized with Musculpt held in mind as a guiding inspiration. Using these continuously evolving graphic techniques, each scenario then could be boldly etched on an acetate sheet or computer screen. Similarly, the strategic plan developed relative to each scenario could be graphically explicated. If the scenario plan sheets were then stacked via computer simulation (by adapting already available multiscale modeling and CAD software), techniques analogous to those of projective geometry could be utilized to discover invariants among plans of the whole stack, which, when projected to a reference sheet, would constitute the plan that will succeed in all possible futures. Riemann long ago developed the techniques to accomplish this. The Riemann surface is a multi-sheeted hyperspace based on m-valued functions. M-valued functions are not m-valued propositions, but they were the foundation from which uTm logics emerged.
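
In miniature, the stacking step looks like this -- each plan reduced, for the sketch’s sake, to a flat set of committed actions (exactly the 1T2 flattening the uTm machinery is meant to overcome), with action names invented:

    # Each scenario plan is a sheet; the "reference sheet" keeps the
    # invariants: actions present in every sheet of the stack.
    plans = {
        "scenario_A": {"protect_aquifer", "rail_corridor", "flood_zoning"},
        "scenario_B": {"protect_aquifer", "rail_corridor", "port_expansion"},
        "scenario_C": {"protect_aquifer", "flood_zoning", "rail_corridor"},
    }
    reference_sheet = set.intersection(*plans.values())
    print(reference_sheet)   # {'protect_aquifer', 'rail_corridor'}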

Implementation of public policy initiatives relative to the projective all-possible-futures plan on the reference sheet cannot be undertaken with linear, Cartesian-Newtonian, 1T2-logic type mechanisms of public administration. This is impossible because policy indications expressed in uTm propositions cannot, in principle, be translated into any possible set of 1T2 propositions. There is fundamental incommensurability between orders of logical-value. Mechanisms of public administration possessing inherent uTm properties must be found if the all-possible-futures plan is to be implemented. One discovery algorithm for finding uTm mechanisms of public administration involves looking closely at categories of 1T2 administrative behavior which demonstrate persistent failure. One such category is “jurisdictional disputes”. As mega-urban regions, for instance, have evolved over time -- due to such forcing factors as population growth, inter-regional migration, mechanization and scale enhancement of agricultural production and resultant rural-urban migration -- the jurisdictions from earlier stages of urban evolution have persisted, giving rise to frequent jurisdictional disputes during formulation and implementation of development plans. Councils of government, ad hoc superordinate commissions, and by-issue referenda have been among the 1T2 approaches to this jurisdictional problem -- most of these approaches involving addition of another level of hierarchy to the administrative apparatus. The jurisdictions in dispute generally are defined, at least in part, geographically: a given jurisdiction has a physical border which is extremely difficult to change, often requiring a referendum to do so. Compare this circumstance to one of the practical applications of the strategic contingency theory of organizational adaptation which grew out of the rotational logic of Sun Tzu: the case of the Viet Cong political infrastructure.

fractal boundaries and public administration
The underground Viet Cong bureaucracy which administered the war effort against American forces in South Viet Nam did not use the static hamlet, village, city, district, and province borders of the Government of South Viet Nam. The administrative areas involved were often called by the same names as those of the GVN, but the frequently changed VC borders were constituted according to the tasks requiring execution within them. When the required tasks changed, the borders changed accordingly. Such borders have some properties in common with fractal boundaries, in that the configuration of the border is correlated (in just the manner a mathematical function is defined) with x number of dynamic bureaucratic variables associated with functioning of the involved multiscale system (in the VC case: the nested administrative jurisdictions of hamlet, village, inter-village, city, district, inter-district, province, inter-province and military region). When the time evolution of the boundaries defining partitions of an administrative hierarchy is a function of many variables, it is an m-valued function in just the sense understood by Riemann. When a whole society with an animistic tradition has a several-thousand-year history of thinking with rotational logic, computers are not required to develop applications of m-valued functions. By the end of the war, however, the rigid communist hierarchy had been imposed on the Southern animists by Northern cadres using the non-rotational Western 1T2 logic of Marxism-Leninism. (The logical form of dialectical thought is not rotational; it is syllogistic.)
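The principle that the borders changed as the tasks changed can be caricatured in code. In the sketch below, every name and the circular geometry are hypothetical: the border is never stored as a fixed line but is recomputed, on demand, as a function of the tasks currently requiring execution, so completing or adding a task moves the border. (In Riemann’s sense, the simplest m-valued function is something like the square root, which assigns two values w to each z via w² = z; a border depending on many task variables at once is m-valued in this way, many sheets over one terrain.)

```python
# Hypothetical sketch: a border computed as a function of current tasks
# rather than stored as a fixed line. Each task projects a circular
# footprint; the jurisdiction is the union of footprints, so the border
# reconfigures whenever the task list changes.

from dataclasses import dataclass

@dataclass
class Task:
    x: float         # task location
    y: float
    workload: float  # footprint radius grows with workload

def inside(tasks, px, py):
    """The border is implicit: a point lies in the jurisdiction iff it
    falls within the footprint of some task requiring execution."""
    return any((px - t.x) ** 2 + (py - t.y) ** 2 <= t.workload ** 2
               for t in tasks)

tasks = [Task(0.0, 0.0, 2.0), Task(5.0, 0.0, 1.0)]
print(inside(tasks, 4.5, 0.0))  # True: within the second task's footprint
tasks.pop()                     # that task is completed...
print(inside(tasks, 4.5, 0.0))  # False: ...and the border has moved
```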

fractal drums are quantal drums
Consider that the “x number of dynamic bureaucratic variables” to which the configuration of VC borders was tied is directly analogous to the “invariants among plans of the whole stack”, discussed above, which, when projected to a reference sheet, constitute the plan that will succeed in all possible futures. A long and detailed argument could be made that Viet Cong strategic contingency planning was in large measure implemented via modulation of those x dynamic bureaucratic variables, through means such as configurational modulations of borders. But for that to have been the case, the borders defining partitions of the administrative hierarchy would have had to have only a virtual on-the-ground presence, as the m-valued functions determining the configurational changes would have absorbed most of the conscious attention of the strategic planner. Such apparent borderlessness had a long tradition in Southeast Asia, going back at least to notions of Kingship at Angkor, and more likely to the Bronze Age jurisdictions of the tribal Kings of Fire and Water, in whose honor bronze drums with fractal nesting geometries etched upon their tympana were sited throughout the region, and possibly as far away as Japan. Beyond serving as metaphor, and as trance-inducing invitation (via rhythm entrainment across scale levels of the organism: quantum clocks within clocks within clocks) to perceptions paranormal for us, the fractal drum was an element of cosmological metareference by which the whole Bicameral World was politically organized. Borders between cultural and language spheres were not lines on the ground; they were in constant flux, drummed into the atmosphere as acoustic waves. One can reasonably speculate that the bronze drums -- which tradition holds to have been sited, along with stone xylophones, beneath energy-cascade events, waterfalls, in Southeast Asia -- were placed at the frontier, and that their acoustic output served as a kind of band-pass filter. The infrasound acoustic-wave signatures of the local landforms (a big part of Songlines -- Nature’s Musculpt! -- and used as navigational aids by homing pigeons, as demonstrated by Cornell University’s Neurobiology Labs) were likely part of a given bronze drum’s design criteria: such a drum would have been acoustically designed to establish resonance uniquely with the infrasound signatures of its specific intended site. In a very real way, the sound output of the drum was a map of the space it inhabited. Multiple drum sitings maintained the boundaries. Exchange took place near these sites, near the maintained band-pass fractal boundaries: economic exchange, identity exchange, trade in ritual artifacts infused with cosmic metareference -- artifacts like “soul cloths” blazoned with nested triangular meshworks. Every activity, every artifact, was m-valued: it served multiple purposes and helped establish a synergistic superintegration into the natural surround.
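The band-pass speculation, too, admits a toy form. In the sketch below, all frequencies and the bandwidth are invented: a drum is modeled as a set of resonant modes, a site’s infrasound signature as a set of spectral peaks, and the fit of drum to site is scored by how much of the signature falls within reach of the drum’s modes.

```python
# Toy form of the band-pass speculation: score how well a drum's resonant
# modes match a site's infrasound signature. All numbers are invented.

def resonance_score(drum_modes, site_peaks, bandwidth=0.5):
    """Fraction of the site's infrasound peaks lying within `bandwidth`
    Hz of some resonant mode of the drum."""
    matched = sum(1 for peak in site_peaks
                  if any(abs(peak - mode) <= bandwidth for mode in drum_modes))
    return matched / len(site_peaks)

waterfall_site = [7.2, 11.0, 14.5]  # hypothetical infrasound peaks (Hz)
drum_a = [7.0, 11.3, 14.6]          # a drum tuned to this site
drum_b = [3.0, 5.5, 19.0]           # a drum tuned to some other site

print(resonance_score(drum_a, waterfall_site))  # 1.0 -- maps its space
print(resonance_score(drum_b, waterfall_site))  # 0.0 -- out of place
```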

currencies on fractal boundaries
In seeking modern applications of m-valued Abelian functions (Abel was the initial discoverer of what Riemann later developed), it is instructive to note that another factor is, in part, defined in relation to borders: money, currencies, monetary exchange units. To configure a boundary according to variations of an m-valued function is to imbue that border with attributes relative to the area bounded: its configuration changes according to what transpires within the area it determines. The border is a correlation, not a mere severance. Koch curves, with their fractal dimensions, also have this property. Such curves are scale-level nests within scale-level nests, one-dimensional “sheets” within one-dimensional “sheets”. A Koch curve is a one-dimensional Riemann surface: the Abelian functions behind the Riemann surface are also behind the Koch curve. If that is the case, there is no problem in having currencies on Koch curves, instead of on the 1T2 boundaries we have today. Multiple indices can be stacked on a currency base; financial derivatives do that already. So there is no problem with stacking m-values on a currency base, each value being relative to one of the “invariants among plans of the whole stack”. The possibility exists to use m-valued currencies on Koch curves as uTm mechanisms of public administration, such that the additional incentives and sanctions associated with the m-values stacked on the currency bases become free-market feedback devices predisposing aggregate behaviors in conformity with indications of the plan that will succeed in all possible futures. Quantum computing will facilitate such approaches and bring into being Virtual Futures Q-bitic Projection.
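What stacking m-values on a currency base might mean operationally can be hinted at in code. The construction below is hypothetical, not a design for VirFut Q-Pro: a currency unit carries, beside its base value, a stack of values keyed to invariants of the all-possible-futures plan, and the value realized in an exchange depends on which invariant the exchange serves, so that incentive and sanction ride directly on the money.

```python
# Hypothetical m-valued currency: a base value plus a stack of values,
# each keyed to one invariant of the all-possible-futures plan. The
# invariants and multipliers are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class MValuedUnit:
    base: float                                # ordinary 1T2 face value
    stack: dict = field(default_factory=dict)  # invariant -> multiplier

    def value_in(self, invariant=None):
        """Value realized when an exchange serves a given plan invariant:
        a bonus multiplier (incentive) or a discount (sanction)."""
        return self.base * self.stack.get(invariant, 1.0)

unit = MValuedUnit(base=1.00, stack={
    "transit corridor A": 1.10,    # incentive: spending that serves the plan
    "floodplain build-out": 0.85,  # sanction: spending that works against it
})

print(unit.value_in("transit corridor A"))    # 1.1
print(unit.value_in("floodplain build-out"))  # 0.85
print(unit.value_in())                        # 1.0 -- plain 1T2 exchange
```

The analogy to financial derivatives is direct: the stack is a bundle of contingent claims riding on a single base instrument, one claim per invariant of the plan.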

These ideas began to develop
Thirty years ago, exactly
Moving over these same streets,
Sitting in the same parks,
Smelling the same smells,
Hearing the same tonal discourse
And popular song. The concepts could
Not truly congeal, without the act
Of once again
Struggling with them
Under the same sensorial drive.

-- January-June, 1998
Ho Chi Minh City.

