Library: Why I Must Attack Albert Einstein

By Lyndon H. LaRouche, Jr.

Fusion Energy Foundation
July-August 1984


EDITOR'S NOTE: An "Einstein debate" emerged after the 100th birthday celebration of Einstein in 1979, in which the name Einstein has been repeatedly misused solely to strengthen cultural pessimism and the antiwar movement. This article contrasts sharply with the so-called Einstein debate, and treats with scientific precision the question of morality in scientific work.
The origins of the ugly Einstein debate are as follows: In November 1980, an essay by the West German physicist Unsöld pinned on Einstein the blame for atomic weapons, and therefore all modern warfare. Writing in the official publication of the German Science Association, Unsöld vented his anger over the just-concluded "Einstein Year" celebration: "[H]ardly anyone dared to remind us," he wrote, "that Einstein's name is also closely associated with the atomic bomb. . . . Much has been said in 1979 about the 'responsibility of scientists.' But of Haber's poison gas and Einstein's atomic bomb . . . we hardly heard a single word." (The year 1979, it should be noted, was the year NATO made the dual-track decision to station intermediate-range missiles in Western Europe.)
The green and peace movements immediately put Unsöld on their lecture circuit, giving him the opportunity to spread his ideas on "scientists' responsibility" throughout West Germany. His speeches against Einstein soon provoked an international outcry, including charges that Unsöld's attack was motivated by antisemitism. The British magazine Nature, for example, reported April 16, 1981 that "Unsöld, a theoretician, now 75, appears to have begun his attack on Einstein at a [university] symposium . . . when he said that Einstein was guilty of crimes, no less serious than those of Hitler. . . . Unsöld has been charged with antisemitism." Why the peace movement would so zealously rely on Unsöld's view of "scientists' responsibility" becomes clear from the following citation from the above-mentioned Unsöld article against Einstein:

Man's psyche apparently possesses two regions, which operate according to quite different rules. The deliberative, goal-oriented and critical activity of our "I" resides . . . in the neocortex. . . . Opposed to this is the genetically more ancient limbic area, or brain stem, which is where the instincts, the "Id," reside. This latter portion has remained unchanged since the stone age. Thus we understand that, even in humans with extremely differentiated thinking and investigative capacities—people with "exceedingly high intelligence"—the old stone-age instincts are lurking in the background, instincts primarily aimed at acquiring power, acclaim, etc., and which would not hesitate to destroy anything which could get in the way.

In Unsöld's twisted and antihuman view of man—as in that of the green and peace movements—there is no place for moral or statesmanlike activity, and no notion of how science and technology enable mankind not only to survive, but to develop. The Einstein debate spurred by Unsöld promoted nothing useful scientifically; it served only to promote the European peace movement's simple-minded formula that "Science = Weapons."

With great personal reluctance, I find myself obliged to attack certain features of the work of the late Albert Einstein publicly. This reluctance bears upon Einstein as a man, not as a physicist. Although I am not a physicist or mathematician as such—chiefly because I early abhorred, morally, certain leading features of contemporary textbook and classroom mathematics—I am not awed by Einstein's reputation in science. I know enough of the absolute fundamentals of scientific work to know with certainty that important aspects of Einstein's work depend upon childishly outrageous blunders of assumption and method. What I like about Einstein is that, although he permitted himself to be used and corrupted to a certain degree, he drew the line beyond which he would not permit himself to be used for corrupt purposes. For that latter reason, and for reason of certain important issues on which Einstein was morally on the right side, I would prefer to defend him rather than be obliged to attack his memory.
My motives for attacking Einstein's memory are eminently, urgently practical ones, reasons he would admit are of an obligatory moral as well as practical character. Briefly, the threat of a new general war, this time probably a thermonuclear war, and the threatened collapse of the world's economy—unless a technological revolution intervenes—require a very special kind of "crash-program" effort in development of three interrelated areas of scientific investigation and technological applications. These three areas are: (1) controlled thermonuclear fusion and related aspects of relativistic physics; (2) a general, radical revision in the theory of quantum electrodynamics, with emphasis on the need for a comprehensive and coherent doctrine of coherent radiation—new, rigorous distinctions between energy and work; and (3) revolutionary breakthroughs in biophysics, centered upon control of aging of tissues within the whole processes of human bodies, a fundamental breakthrough in the physical definition of the word life. These three breakthroughs cannot be accomplished without throwing overboard the axiomatic notions of a statistical theory of heat, axiomatic notions embedded in much of Einstein's work, and the root of every major error in his work.
In these matters, my own special variety of competence lies both in my mastery of empirical principles of economic science, and a life dedicated chiefly to mastery of what is best described as "the third level of scientific hypothesis," what Plato's writings define as the notion of an hypothesis of the higher hypothesis. In my own case, my susceptibility to the Platonic (or, Neoplatonic) viewpoint was an outgrowth of a childhood and youthful saturation with matters of theology, most emphatically that of the Gospel of St. John. It was consistent that during my 13th and 14th years, I should have been won totally to the methodological outlook of Gottfried Leibniz, most emphatically the Leibniz of the Leibniz-Clarke correspondence and the Monadology. This theological point of entry into scientific work has been no defect, as the instances of St. Augustine, and the founding of modern science by the 15th-century Cardinal Nicholas of Cusa, best indicate.
What turned me away from mathematics, as I encountered taught mathematics in the textbooks and classrooms of my youth, was the recognition that the lattice structures of a logically consistent mathematical edifice depend upon the validity of the axiomatic and postulational assumptions which underlie all mathematical systems. It has always appeared morally indefensible to me to assert that anything is true mathematically merely because of plausible empirical consistency with mathematical schemas. If the underlying assumptions are in error, then the entire edifice of existing mathematics collapses. Perhaps, at any given point in progress of knowledge, it may not be possible to settle these problems respecting underlying assumptions, and scientific work must not be halted merely because we know some more or less pervasive defect to exist in given mathematical physics. Yet, at the same time, it is morally wrong, and ultimately destructive of scientific work, to pretend that the existing mathematics is self-evidently right as to principles when it is demonstrable that some underlying assumptions are of a dubious character.
This doubt proved most fruitful. The Wiener-Shannon doctrine of "information theory," derived from the statistical theory of heat, expresses the most immoral features of existing scientific opinion, depending most directly upon assumptions which are provably absurd, assumptions conclusively proven absurd long before the work of Boltzmann, Gibbs, et al. Negentropy, it appeared to me during late 1947 and early 1948, when I first encountered the Wiener-Shannon dogma, is characteristically the quality of living processes. Life, as an active, efficient principle, must be adduced directly, empirically, from living processes. It was my preferred argument then, and still today, that the professor who undertakes to discover whether or not life is possible, from the standpoint of the statistical theory of heat, or the mechanistic standpoint otherwise expressed, is actually posing the question whether he himself exists to have the power to express an opinion on any matter of inquiry. Therefore, I was led through the work of Nicholas Rashevsky on mathematical biophysics to challenge Rashevsky's methodological assumptions. This led ultimately to a year of wrestling with Georg Cantor's notion of transfinite orderings, a vantage point which made the essential, underlying features of the work of Bernhard Riemann directly accessible. My own fundamental discoveries in economic science, dating from 1952, were the result of that.
Energy unquestionably exists, to the effect that increase of measurable energy-flux density of processes is the proper first-approximation measure of work accomplished by thermodynamic processes. Yet, "energy" and "work" are not the same thing. Work produces energy, and the conversion of energy into work is the crux of the matter. It is the comparison of the work gained with use of energy against the work required to produce energy in the form required, which is the essential definition. This recognition, and its bearing upon the measurability of technology as such, was the basis for my original discoveries of 1952, a discovery which has undergone a radical improvement in depth and scope during the recent five years—chiefly due to my collaboration with Uwe Parpart Henke, Dr. Jonathan Tennenbaum, and others, who have enabled me to locate my earlier conceptions within the broader range of fundamentals of mathematics and mathematical physics.
The most recent developments in my own work began during 1980. The LaRouche-Riemann method of economic forecasting has proven itself the only competent forecasting method in existence today, but there are shortcomings within the present form of the forecasting practice, such that the method is of the highest accuracy presently available for short-term general forecasting, but not satisfactory to the same degree for short-term forecasting of subsectors of the same general economic process. Therefore, a constant improvement, refinement, in the program has been characteristic of the work since it was launched in December 1978.
The direction of these continuing refinements took a wrong turn during mid-1980, a wrong turn I recognized to be disregard for the deeper implications of the "delta" in Leibniz's formulation of the differential calculus. This, I warned my associates then, obliges us to emphasize the fact that the notion of a quantum-value in physical processes is nothing but Leibniz's notion of the fallacy of "infinite divisibility," one of the points upon which he based his (accurate) argument, that Newton's form of the calculus was useless and false to physical reality. To solve certain tasks of refinement in economic analysis, I concluded, it is indispensable to brush aside prevailing, accepted interpretations of the quantum-notion and to derive the necessity of this notion from the same basis as Leibniz's approach, rejecting the assumptions underlying what is called quantum mechanics. When my associates failed to effect quickly enough the breakthrough of the form I saw necessary, I mobilized myself to set the required solution into motion, demanding that we examine the matter from the standpoint of a rigorously synthetic-geometrical approach to construction and interpretation of conical (complex) functions.
This program began, during 1981, with an attack on the simplest phenomenon of all conical functions: the determination of the correct, well-tempered values for the musical scale as an elementary exercise in differential geometry, as completed by Jonathan Tennenbaum and Ralf Schauerhammer during autumn 1981. This led, further, to Tennenbaum's reconstruction of Minkowski's doctrine of special relativity through use of paired cylindrical functions, in respect to which I insisted this must be corrected by an additional, crucial step, of substituting conical functions for the cylindrical. This led to Tennenbaum's discovering a fresh view of Gauss's arithmetic-geometric mean. Through the collaboration with Tennenbaum, I pointed out that the view of generalized elliptic functions, as subsumed by Gauss's derivation of the arithmetic-geometric mean, was the basis for both Riemann's famous 1854 habilitation dissertation, "On The Hypotheses Which Underlie Geometry," and the proper basis for defining both the principle of the quantum and Leibniz's "delta."
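The scale exercise can be sketched numerically. The Tennenbaum-Schauerhammer construction itself is not reproduced in the source, so the following shows only the standard property such an exercise turns on: a self-similar spiral which doubles its radius per rotation passes through the twelve equal-tempered semitone ratios at each twelfth of a turn (the A = 440 Hz reference pitch is an illustrative assumption, not fixed by the article):

```python
# Sketch: a self-similar spiral that doubles its radius every full
# rotation passes through the equal-tempered semitone ratios at each
# twelfth of a turn.  (Illustrative only; the construction by
# Tennenbaum and Schauerhammer is not given in the source.)

def spiral_radius(turns, growth_per_turn=2.0):
    """Radius of a self-similar spiral after `turns` rotations,
    starting from radius 1."""
    return growth_per_turn ** turns

# One octave (a doubling) divided into 12 equal ratios.
semitone_ratios = [spiral_radius(n / 12) for n in range(13)]

# Chromatic scale from an assumed reference pitch of A = 440 Hz.
scale = [440.0 * r for r in semitone_ratios]

print(round(semitone_ratios[7], 4))  # perfect fifth, about 1.4983
print(scale[12])                     # octave above A: 880.0
```

On this picture a semitone corresponds to one-twelfth of a rotation and the octave to one full turn, which is why the scale can be treated as an exercise in the geometry of self-similar spirals.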
The outgrowth of Tennenbaum's continuing work on this matter gave us a much more powerful apparatus than I had previously employed for economic science. This, and its general implications, I reported to the July 3-4 [1983] conference of the International Caucus of Labor Committees in Reston, Virginia. The included judgment is that a "general theory of relativity," as distinct from "special relativity," does not exist; that the search for a unified field within the scope of a supposed general relativity is a result of wild, unrecognized fallacies embedded in an incorrect formulation of what is called "special relativity." This argument does not depend upon any complex analysis of the matter; the errors are entirely of the most elementary kind, the most primitive errors of assumption, which therefore admit of direct, simple demonstration.
I summarize the bare essentials of the case which I presented to that conference, beginning with a definition of the notion of an hypothesis of the higher hypothesis.

The Three Levels of Hypothesis

In scientific work, there are three levels of hypothesis:
(1) Simple hypothesis. The underlying assumptions of prevailing scientific knowledge are assumed to be valid, both for scientific work generally, and also for the particular area of inquiry to which some experimental hypothesis is addressed. The assumption of consistency with existing structures and underlying assumptions of scientific work, especially mathematical physics, is the basis for design and testing of the experimental hypothesis.
(2) Higher hypothesis. This is an experimental hypothesis addressed to the question whether evidence requires us to overthrow one or more of the fundamental assumptions underlying contemporary scientific work. A successful higher hypothesis produces a greater or lesser scientific revolution, and, by implication, greater or lesser technological revolution.
(3) Hypothesis of the higher hypothesis. This presumes that a succession of scientific revolutions represents an orderable series of higher hypotheses, on the condition that the succession correlates with an increase in the potential per capita power of society over nature. This poses the question, whether a succession of higher hypotheses meeting that requirement is demonstrably the result of some common principle of discovery. In other words, is there some principle of discovery which can be successively applied to successive scientific revolutions to generate the next scientific revolution in that series? Experiments which test hypothetical principles of discovery of this sort define the notion of an hypothesis of the higher hypothesis—the third level of hypothesis.
There is a current of modern science, beginning with the discovery of the isoperimetric principle by Cardinal Nicholas of Cusa during the 15th century, which insists that all of the fundamental questions of scientific knowledge exist for comprehension only on the third level of hypothesis. This current of science is typified by Cusa, Leonardo da Vinci, Johannes Kepler, Gottfried Leibniz, the Carnot-Monge Ecole Polytechnique, Karl Gauss, and Bernhard Riemann—a current sometimes identified in English literature as "continental science." This is the current to which this writer adheres.
This adherence takes the practical form today of the writer's specifications for design of a needed "crash program," both to implement the President's strategic doctrine enunciated first on March 23, 1983, and to cause that work in military technology to spill over efficiently into the world's civilian economy, to foster a general explosion in economic growth. The designs proposed by this writer are modeled, as a matter of reference, on the combined military, scientific, and educational work of the Ecole Polytechnique under Lazare Carnot and Gaspard Monge. Otherwise, the writer situates within that model of reference the question of a governing administrative-methodological principle to make such a social instrument of "crash-program" work effective for the specific objectives in view today.
The importance of this approach is most readily demonstrated from a military standpoint. In opposition to those "systems analysts" whose influence has ruined the defenses of the United States, military technology defines a domain of accelerating technological attrition. The best measures deployed today produce countermeasures, countermeasures which require more advanced measures to overcome them. The succession of measures and countermeasures so defined is sometimes named "technological attrition," and is sometimes called an "arms race." There is no alternative to such an "arms race" but to prepare to lose the next war.
The same principle of "competition" exists in the non-military economy. However, one may ignore this principle of "competition," on the assumption that a nation may survive national economic bankruptcy, but might not survive losing a war. Hence, it is the unfortunate reality of modern history, that great advances in technology of civilian economies have often been a by-product of mobilization for wars. It is not that war is the indispensable instrument of progress—usually it is not; it is that nations refuse to do what they should have done in pursuit of peace, until the hot breath of war is upon their necks.
Technological attrition converges upon the notion of successive scientific revolutions, at least, successive technological revolutions. The distinction between the two is that a technological revolution is a scientific revolution put into practice—too often, belatedly. The idea that there exists a "world-line" based on successively ordered series of scientific breakthroughs, or technological breakthroughs, is the implied feature of technological attrition, and therefore the implied feature of all "crash programs" resembling that which we have proposed. This represents the ideal case for direct application of the third level of hypothesis.

Conical Functions Defined

The fundamental fallacy of the work of Einstein—and many others—was his refusal to accept the fundamental principle upon which the preceding development of European science depended: the treatment of the implications of the five Platonic solids from the vantage point of Cusa's rediscovery of the isoperimetric principle: the principle that the action of circular rotation, Leibniz's Principle of Least Action, is the only form of action self-evidently existing in visible (Euclidean) space. All of Einstein's major errors are derived from this consideration, including his misinterpretation of Riemannian physics.
Briefly, circular action in a measureless, formless void creates a circular area of measureless extent. The repetition of this same action upon that circular area creates the straight line, and also creates the first degree of measure: division of circular rotation by one-half. This is the only definition of a straight line permitted within a rigorous mathematical physics. The same circular action repeated upon a semicircle creates a point. From circular action, and the line and point created by circular action (singularities), all forms constructible in visible (Euclidean) space are constructed, using no other means but the hereditary principle of construction from the starting-point of circular action. No axioms or postulates are permitted in rigorous mathematical physics, or geometry.
The limitations upon construction in visible space are two. First, only five kinds of regular polyhedra can be constructed in visible space—the five Platonic solids. All of these solids (4, 6, 8, 12, and 20 sides, respectively) reduce to one elementary such solid, the 12-sided dodecahedron whose sides are equal, regular pentagons. The pentagon and the dodecahedron are both constructed on the basis of a harmonic characteristic called the golden section. Nothing can be constructed in visible space except by reference to the unique feature of the golden section. The second, ultimately identical limitation is the fact that certain classes of occurrences within visible space cannot be constructed within visible space: those constructions which depend upon transcendental functions—including the regular heptagon.
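The golden-section property of the pentagon asserted here is easy to check numerically: in a regular pentagon the diagonal stands to the side in the ratio phi = (1 + sqrt(5))/2. A minimal verification sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden section, about 1.618034

# Vertices of a regular pentagon inscribed in the unit circle.
verts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
         for k in range(5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

side = dist(verts[0], verts[1])      # adjacent vertices
diagonal = dist(verts[0], verts[2])  # vertices two steps apart

print(round(diagonal / side, 10))  # 1.6180339887
```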
However, all existences within visible space can be constructed as projections of continuous, conical functions upon visible space. These conical constructions have the elementary form of a self-similar spiral on the outer surface of a cone. This spiral has the 2-space projection of a logarithmic (self-similar) spiral whose characteristic proportion is the golden section. Each cycle of the spiral defines a circular cross-section of the cone. All existences in visible space are thus defined by transcendental conical functions. The conical self-similar spiral of the reflected continuous manifold is the only self-evident form of physical action in the universe (Figure 1).
This is the foundation of all rigorous forms of mathematical physics. This is the elementary root of the third level of hypothesis for mathematical physics.
Pacioli, da Vinci, and Kepler emphasized that all living processes have the morphological characteristics of growth and function of the golden section. Functions with such characteristics are negentropic functions.
Kepler proved, and that conclusively, that the Platonic harmonic system, as presented by Plato's Timaeus, is the basis for the universal laws of astronomy. With aid of corrections supplied chiefly by Karl Gauss, Kepler's astronomy is valid to the present date, whereas all opposed doctrines are not.
The most fundamental breakthrough in science after Kepler and Leibniz was the discovery of the arithmetic-geometric mean by Karl Gauss. Without this discovery, no fundamental discovery of post-1830 European science would have been possible (after Legendre, Fourier, and Poncelet). This is what Einstein rejects implicitly.
The most elementary complex variable is the stretching of a rotating radius-line as the radius rotates around the axis of a cone. The simplest case is that in which the radius increases by a fixed ratio as it rotates, such that, after each complete rotation, the radius has increased by that fixed ratio. If the ratio of the radius's increase is "1," the result is a constant spiral on the outer surface of a cylinder, the ideal representation of energy. If the ratio is greater than "1," the result is a self-similar spiral on the outer surface of a cone, the ideal representation of work (Figure 2).
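The two cases can be tabulated directly. A short sketch, using an illustrative growth ratio of 1.5 for the conical case (the text fixes no particular value):

```python
def radius_after(turns, ratio, r0=1.0):
    """Radius of the rotating line after `turns` complete rotations,
    when each rotation multiplies the radius by `ratio`."""
    return r0 * ratio ** turns

# Ratio 1: constant spiral on a cylinder ("energy" in the text).
cylinder = [radius_after(n, 1.0) for n in range(4)]

# Ratio > 1: self-similar spiral on a cone ("work" in the text).
cone = [radius_after(n, 1.5) for n in range(4)]

print(cylinder)  # [1.0, 1.0, 1.0, 1.0]
print(cone)      # [1.0, 1.5, 2.25, 3.375]
```

Self-similarity means the ratio of radii one full rotation apart is the same everywhere along the spiral, which is what distinguishes the conical from the cylindrical case.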
In the second case, the Gauss arithmetic-geometric mean follows immediately.
The first integral of such an elementary complex variable is the spiral-action (for an interval of time). The second integral (for our purposes here) is the definite integral of the spiral-action for one completed cycle of rotation: the volume defined by two successive circular cross-sections of the cone, at the beginning and end of that cycle. The characteristic of this volume is the ellipse defined by any diagonal cut of the volume by a plane (Figure 3).
If the volume is cylindrical, the spiral completes half its rotation at the midpoint of the volume: The geometric and arithmetic means are coincident. The elliptical cross-section defines energy, but not work.
If the volume is conical, the spiral traverses less than half the distance along the central axis of the cone during the first half of its rotation. In this case, the geometric mean lies below the arithmetic mean; the two are not coincident.
Imagine the cone standing on a plane and project the boundaries of the spiral down to the plane. The distance between the two points on the plane defines the major axis of an ellipse in the plane. The vertex of the cone forms one focus of this ellipse (the position of the sun in the Earth's orbit). The semimajor axis of the projected ellipse has the same length as the radius of the circle that is located at the arithmetic mean on the cone. The semiminor axis has the length of the radius of the circle located at the geometric mean (Figure 4).
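The stated relations can be verified directly. With the two cross-sectional radii r1 > r2, a semimajor axis a = (r1 + r2)/2 (arithmetic mean) and semiminor axis b = sqrt(r1 * r2) (geometric mean) give the focal distance c = sqrt(a^2 - b^2) = (r1 - r2)/2, so that a - c = r2 and a + c = r1: the focal distances recover the two radii, the familiar aphelion/perihelion relation of a Kepler orbit. A sketch with illustrative radii:

```python
import math

r1, r2 = 4.0, 1.0  # illustrative radii of two successive circular
                   # cross-sections of the cone

a = (r1 + r2) / 2             # semimajor axis = arithmetic mean
b = math.sqrt(r1 * r2)        # semiminor axis = geometric mean
c = math.sqrt(a * a - b * b)  # center-to-focus distance

# Distances from one focus to the two ends of the major axis:
print(a - c)  # 1.0  (= r2, perihelion in the Kepler reading)
print(a + c)  # 4.0  (= r1, aphelion)
```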
Let us imagine that the ellipse on the plane is projected from an ellipse located in a cone. We can generate a series of similar ellipses, each bounded by the arithmetic and geometric means of the previous ellipse. Repeat this operation a large number of times. The question is: When does one stop this recursive process? This is the kernel of Gauss's theory of elliptic functions.
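The recursion just described is Gauss's arithmetic-geometric mean: replace the pair (a, b) by ((a + b)/2, sqrt(a*b)) until the two coincide. The process converges quadratically, so a handful of iterations suffice. A minimal sketch:

```python
import math

def agm(a, b, tol=1e-14):
    """Gauss's arithmetic-geometric mean of two positive numbers."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

m = agm(1.0, math.sqrt(2.0))
print(round(m, 10))  # 1.1981402347, Gauss's celebrated example
```

Gauss's observation that agm(1, sqrt(2)) equals pi divided by the lemniscate constant is precisely what links this mean to elliptic integrals, the connection the text attributes to his theory of elliptic functions.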
At whatever point the recursive process ceases, the remaining volume defined by the two foci of the last ellipse defines a degree of rotation of the spiral generating the cone, and also defines a relative value for an interval of displacement along the central axis of the cone. In a universe whose metric is the speed of light, this will correspond to a wavelength, a frequency. If this is determined in some necessary way, we have Leibniz's "delta" and the notion of the quantum of action (Figure 5).
The physical significance of this was first established in available scientific literature by Riemann's 1854 habilitation dissertation, "On The Hypotheses Which Underlie Geometry." Assume the prior self-elaboration of the universe as a whole, or some phase space, to correspond to some well-defined number N, such that the singularity of the Gaussian elliptic-function series for the conical interval represents N+1. This means that action corresponding to a well-defined notion of work acts upon the universe (or phase space) as an entirety, such that that action is bounded uniquely, in scope and division of itself, by the relationship implied by values N+1 and N. This defines a smallest division of action, below which only a singularity in physical space can occur. That is a quantum of action, a value which varies relativistically as the universe evolves to higher states, or as the phase space evolves similarly.
That is what the quantum represents from the standpoint of the third level of hypothesis: a smallest wavelength of continuous action, below which only a singularity can exist.
This was the basis for Riemannian electrodynamics, in which retarded potential, rather than notions of the statistical theory of heat, is characteristic. This has been the underlying issue of the factional furor within physics for more than a hundred years.
However, this quantum can be measured empirically not only as a smallest wavelength of a continuous (for example, electromagnetic) function. Changes in the value of the quantum, relative to functional notions associated with N and N+1, also correspond to relativistic metrical changes in the characteristic rates of action within the phase space concerned, as Riemann specifies. This, from the standpoint, again, of the third level of hypothesis, defines relativistic physics.

Contrast to Newton and Maxwell

Any system for describing physical processes which is modeled upon the syllogistic system of Aristotle, eliminates representation of such forms of action as "create" and "cause" within the mathematical system itself. The use of the equal-sign or inequality-signs has the same function as the middle term in the Aristotelian syllogism. Hence, mathematics usually confronts us with the ludicrous spectacle that we speak of creation of the universe, and speak of and observe causal relationships in physical processes, but can find no expression of either in conventional mathematical schemas.
The paradox does not exist in a rigorous geometry of the sort we have indicated here. Circular action, as the mirror of conical self-similar action, is the form of the verb "to create," and also the description of action congruent with the verb "to cause." Creation and causation are one and the same, at least essentially. This requires, of course, that we cast aside all of the axioms and postulates of Euclid's Elements, or anything resembling them, and replace entirely the syllogistic lattice-work of deductive theorems by the "hereditary principle" of rigorous construction of synthetic geometry from the unique principle of circular (conical self-similar) action, Leibniz's Principle of Least Action.
Most of the formal fallacies which afflict mathematics and mathematical physics are derived not from physical experiments as such, but from substituting an axiomatically algebraic mathematical structure used to describe physical processes for those processes themselves. In a word, nominalism. Since such mathematics does not tolerate the existence of a creation, such as our universe, and prohibits specification of causation as a term of description, it should not be surprising that such mathematics is often an inappropriate means for studying principles of causation in a created, existing universe. Such mathematics has merit as a language of description, but it is a fool's enterprise to attempt to wring out of such mathematics any evidence bearing upon causation: one would have better luck attempting to wring blood from a stone. When one uses such a language of mathematical description, one must be aware at all points of what this mathematics can and cannot accomplish, and not employ it for the sort of analysis which it prohibits on axiomatic principle.
The same general problem arises in connection with notions of probability. The same word, probability, has mutually exclusive meanings from the standpoint of Gauss on the one side, and Descartes or Laplace on the other. In Gauss, it signifies the necessarily determined division of action according to principles of a conically defined continuous manifold. In Laplace, it has a mechanistic-numerological interpretation. In the latter connection, we locate the intrinsic fallacies of assumption underlying popularized notions of a statistical theory of heat, and of the related notions of statistical dynamics and quantum mechanics. There is no doubt that the action described probabilistically occurs as a phenomenon in more or less the form described. The issue is that of what sort of causal notion one might wring out of the two, mutually exclusive modes of description—Gauss's versus Laplace's, for example. The latter prohibits incorporation of causation into mathematics: Lo and behold! Such mathematics argues, from examining its own probable navel, that causation does not exist!
Admittedly, the sort of notions we have described for the third level of hypothesis do not provide us an elaborated physics. They provide only what that level of hypothesis is defined as providing: principles of discovery. However, the process of experimental refinement of such principles of discovery converges upon the underlying principles of lawfulness of the universe in general, and thus constitutes as much as we can know respecting the fundamental laws of that universe. Not only does this level of hypothesis define a method, it also defines as much as we can know respecting the lawful ontology of the universe as a whole. Not only is science fundamentally methodologically transfinite; the universe explored is itself ontologically transfinite.
This addresses an issue which much occupied German science at the beginning of this century: a shift from the ontologically transfinite standpoint of Cusa, Kepler, Leibniz, Gauss, Riemann, et al., to the only-methodologically-transfinite approach of German science at the turn of the century. This latter represented a limited concession to Helmholtz, Boltzmann, et al., the leading enemies, together with Maxwell, Rayleigh, et al., of the rigorously geometrical approach to physics. The geometrical method was degraded from a method of physics to a method for clever intuitions into matters bearing upon the interpretation of mathematical physics' problems.
The legitimate problem, which the purveyor of statistical mechanics cites against the mechanistic system of Descartes and Newton, is that action in the universe does not conform to the notions of one-on-one interactions among isolated particles in empty space. There are determinations which belong to the manifold as a whole, which override what might appear to be inferred from a mechanistic misinterpretation of space. Probability appears to fill the gap between the two, and, within limits, appears to provide an efficient guide to practice in those matters for which the mechanistic method fails otherwise.
The fallacies intrinsic to statistical mechanics generally, and quantum mechanics in particular, are, therefore, these:
(1) It overlooks the fact that physical reality cannot be constructed within visible space, but that this reality can be constructed only as projections of a continuous manifold upon the discrete manifold of visible space. Our sense-perceptual apparatus is such that we distort the real universe (the continuous manifold) into the form of the visible (discrete) manifold of sense-perception. The result is as if a distorting mirror were everywhere embedded in space, such that we see only the distorted reflection, not that which is reflected. Therefore all inductive-empiricist method is intrinsically false as to principled features of the cause of phenomena.
(2) It assumes that least action is straight-line action as defined by a naive view of the discrete manifold as self-evidently real, whereas the only real form of action in the universe is least action defined by the projection of self-similar-spiral conical action as the isoperimetric principle of visible space. Thus, mathematical physics is made intrinsically incommensurable with the action causing the phenomena.
(3) It makes energy and work simply equivalent, and ignores the fact that all action is essentially negentropic work, congruent in principle with increase in the areas of conical cross-sectional circles defined by a self-similar, harmonic conical function. It confuses mere effects with work, and therefore distinguishes entropy and negentropy as a mere construct of such effects, rather than properly recognizing that effects are singularities of negentropic or entropic action as primary realities.
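The geometric image invoked here can be made concrete with a small illustrative sketch (this is not the author's own construction; the parameters `r0` and `k` are invented for illustration). A self-similar spiral on a cone cuts successive circular cross-sections whose radii grow by a constant factor per turn, so the areas of those circles form a geometric, i.e. self-similar, series:

```python
import math

# Illustrative sketch only (not from the article): a self-similar spiral
# traced on a cone grows its cross-sectional radius by a constant factor
# k with each full turn, so the cross-sectional areas grow geometrically.

def cross_section_areas(r0: float, k: float, turns: int) -> list[float]:
    """Area of the cone's circular cross-section after each full turn,
    starting from radius r0 and multiplying the radius by k per turn."""
    return [math.pi * (r0 * k ** n) ** 2 for n in range(turns + 1)]

areas = cross_section_areas(r0=1.0, k=2.0, turns=3)
ratios = [b / a for a, b in zip(areas, areas[1:])]
print(areas)   # each area is k**2 times the one before it
print(ratios)  # constant ratio: the hallmark of self-similar growth
```

The constant ratio between successive areas is what distinguishes this kind of growth from the additive, linear growth of straight-line action.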
These problems vanish once the successive standpoints of Gauss and Riemann are adopted.
This does not signify that we can derive physics simply and directly from the third level of hypothesis. It is merely a method for effecting improvements in physics, and also for judging what is outright absurd in existing doctrines of physics respecting fundamental matters. It is simply a rigorous way of thinking about the universe, which means that a physicist employing such rigor is vastly superior to one of equal training lacking such rigor.

Economics and Physics

Popular opinion is so much conditioned to confusing economics with monetary doctrines, that the connection of economics to physics is simply overlooked or violently denied. It is forgotten that modern economic science was founded by Leibniz, who defined economics as "physical economy," as did the founding fathers of the United States (for example, Hamilton's Report on the Subject of Manufactures).
What we measure, ultimately, in economy, is the relative increase or decrease of the power of a population to sustain its own existence. This is best described as the potential relative population-density of a society (economy). This measures man's per capita power over nature, and thus defines what changes in behavior correspond to an increase or decrease in man's knowledge of the lawful ordering of the universe.
Those changes in behavior which overcome effects of depletion of natural resources, or which advance mankind's potential relative population-density absolutely, are increases in technology. Thus, the net work accomplished by society is properly defined as the role of work in mediating advances in technology for the practice of the society as a whole. This form of work is intrinsically negentropic, corresponding to the simplest sort of ideal conical function indicated above. That is our proper definition of work, and the proper measure of technology's equivalence to work. This connection was the discovery the writer effected in 1952 on the basis of implications of Riemann's 1854 habilitation dissertation.
In other words, increase of the potential relative population-density of an economy is the unique experimental authority for determining what are in fact valid scientific conceptions. Any purportedly scientific notion which contradicts such criteria is ipso facto scientific absurdity. Any notion, however correct, which cannot account for itself in these terms of reference, is to that degree scientifically illiterate.
This was understood by Leibniz, who developed thermodynamics from the vantage point of his development of economic science: his generalization of the implications of the heat-powered machine for increasing the power of an individual operative to perform work. The fact that two machines, consuming the same amount of coal per hour, contribute differently to an operative's power to accomplish work, is the basis for the notion of technology.
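Leibniz's two-machines comparison can be illustrated with a toy calculation (the figures are invented for illustration and are not from the article): if both machines burn the same coal per hour but deliver different amounts of work, the work delivered per unit of fuel serves as a crude index of the technology embodied in each machine.

```python
# Toy illustration (figures invented): two machines consume the same
# coal per hour but deliver different work, so work-per-unit-fuel acts
# as a crude index of the technology each machine embodies.

def technology_index(work_joules: float, coal_kg: float) -> float:
    """Work delivered per kilogram of coal consumed."""
    return work_joules / coal_kg

machine_a = technology_index(work_joules=3.0e6, coal_kg=10.0)  # older design
machine_b = technology_index(work_joules=4.5e6, coal_kg=10.0)  # improved design

print(machine_a, machine_b)   # 300000.0 450000.0 joules per kg of coal
print(machine_b / machine_a)  # 1.5: machine B delivers 50% more work per kg
```

The difference between the two indices, with fuel input held constant, is the quantity the passage attributes to technology rather than to the fuel itself.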
Technology, in turn, reduces, in the case of machines, to the Principle of Least Action: conical functions as we have indicated here. This is generalized for electromagnetic action. Although we have failed to solve this for chemistry and biology, the terms of mechanical and electromagnetic work-action have proven adequate even for biological processes. Today, we reduce the formal aspect of technology to electromagnetic equivalents, and measure increase in productive power per capita in such electromagnetic-geometrical terms of reference for measuring technology.
By correlating technology so defined with the work represented as increase of potential relative population-density, we correlate technology with economic growth, the latter properly defined. Thus, we prove that those principles of discovery generating successive scientific revolutions are consistent with man's increase of per capita power over the universe. That is the ultimate scientific experiment, upon which the authority of all scientific knowledge ultimately depends.
There is much talk of the function of morality in science, a matter which was of great concern to Einstein, but a conception which eluded his grasp, and which thus misled him and his associates into many immoral directions. Reason and love are inseparable qualities of the Logos. A love of reason (the Logos) expresses itself as a love for the improvement of the condition of mankind through technological progress, and a love for that potentiality within each human personality which corresponds to the power to develop, assimilate, and apply technological progress. This is loved not merely because it enables mankind to improve his material conditions of life, but because this improvement relies upon the development of those powers of the human individual which converge upon agreement with the Logos, with the divine.
The problem of Albert Einstein, in matters of science, is that he fell in with such political company as the most evil man of the 20th century, the late Bertrand Russell, as did the "Dr. Strangelove" of the Pugwash Conference, Leo Szilard.

Lyndon H. LaRouche, Jr., an economist, is a member of the board of directors of the Fusion Energy Foundation. He is currently a candidate for the Democratic Party presidential nomination.
