Unquestionably, the most intriguing new finding that has emerged from the development of the theory of the universe of motion is the existence of an inverse sector of the universe that duplicates the material sector which has heretofore been regarded as the whole of physical existence. As might be expected, this finding has met with a cold reception from those scientists who adhere strongly to orthodox lines of thought. This is, in a way, rather inconsistent, as these same individuals have been happy to extend hospitality to the same ideas in different forms. The concept of an “antiuniverse” composed of antimatter surfaced almost as soon as antimatter was established as a physical reality; the hypothesis of “multiple universes” gets a respectful hearing from the scientific Establishment; and astronomical literature is full of speculations about “holes” that may constitute links between those universes: black holes, white holes, wormholes, etc.
It should therefore be emphasized that the theory of the universe of motion which identifies an inverse sector is not based on radical departures from previous thinking, but on concepts that were already familiar features of scientific thought. Actually, all that has been done in the extension of the new theory into this area is to take the vague concept of an antiuniverse, put it on a solid factual foundation, and develop it in logical and mathematical detail. Many of the conclusions that have been reached in the course of this development are new, to be sure, but they are implicit in the antiuniverse concept.
Observational identification of antimatter in our local environment shows that the observed universe and the antiuniverse are not totally isolated from each other; some entities of the “anti” type exist in observable form in our familiar physical universe. It is only one step farther—a logical additional step—to a realization that this implies that the complex entities of the observed type may have components of the “anti” nature. Once this point is recognized, it can be seen that the unorthodox conclusions that have been reached in the preceding pages are simply the specific applications of the antiuniverse concept.
For example, additions to the linear component speeds (temperatures) decrease the density of ordinary astronomical objects. It follows from the inverse nature of the “anti” sector, the cosmic sector, as we are calling it, that addition of speeds of the “anti” character increases the density. Similarly, addition of rotational motion in space to an atom of matter decreases the isotopic mass, while addition of rotational motion of the inverse type (motion in time) increases the isotopic mass. And so on.
The new theoretical development has merely taken the familiar idea of a universe of motion, and the equally familiar idea of existence in discrete units only, and has followed these ideas to their logical consequences, an operation that was made possible by the only real innovation that the new development introduces into physical theory: the concept of a universe composed entirely of discrete units of motion. With the benefit of this new concept, it has been possible to define the physical universe in terms of the two postulates stated in Volume I. The contents of this present volume describe the detailed development of the consequences of these postulates, as they apply to astronomy. Before concluding this description, and taking up consideration of some of the other consequences and implications of the findings, it will be appropriate to give further attention to the few, but important, direct contacts between the two sectors of the universe.
In one sense, the two primary sectors of the universe, the material and the cosmic, are clearly differentiated. The phenomena of the material sector take place at net speeds that cause changes of position in space, whereas the phenomena of the cosmic sector take place at net speeds that cause changes of position in time. But the space and time of the material sector are the same space and time that apply to the cosmic sector. For this reason, the phenomena of each sector are also, to some degree, phenomena of the other as well.
Some of the observable effects of this inter-sector relationship have already been discussed. In Volume I the cosmic rays that originate in the cosmic sector were considered in substantial detail, and in the preceding chapters of this present volume similar consideration has been given to the quasars and pulsars that are on their way to the cosmic sector. In these areas previously examined, we have been dealing with phenomena in which physical objects acquire speeds, or inverse speeds, that cause them to be ejected from one sector into the inverse sector, where the combinations of motions that constitute these objects are transformed into other combinations that are compatible with the new environment. In addition to these actual interchanges of matter between sectors, there are also situations in which certain phenomena of one sector make observable contact with the other sector because of this point that has just been brought out: the fact that the events of both sectors involve the same space and time.
As we have seen in the earlier pages, the dominant physical process in each sector is aggregation under the influence of gravitation. In the material sector gravitation operates to draw the units of matter closer together in space to form stars and other aggregates. When portions of this matter are ejected into the cosmic sector in the form of quasars and pulsars, gravitation ceases to operate in space. This leaves the outward progression of the natural reference system unopposed, and that progression, which carries the constituent units of the spatial aggregates outward in all directions, destroys the spatial structures and leaves their contents in the form of atoms and particles widely dispersed in both space and time. Meanwhile, gravitation in time has become operative, and as it gradually increases in strength it draws the dispersed matter into stars and other aggregates in time. These aggregates then go through the same kind of an evolutionary course as that followed by the aggregates in space.
As this description indicates, the basic physical units maintain the continuity of their existence regardless of the interchanges between sectors, merely altering their distribution in space and time. In the material sector they are distributed throughout the full extent of the three dimensions of the spatial reference system, but they move only through the restricted region of time traversed in a linear progression. In the cosmic sector these distributions are reversed. Contacts between the entities of the material sector and those of the cosmic sector are therefore limited. In view of the relatively low density of matter in the universe as a whole, a cosmic entity moving one-dimensionally through three-dimensional space will, on the average, have to travel a long way before encountering a material object. Nevertheless, some such encounters are continually taking place.
The key factor in this situation is the nature of the relation between space and time. Not until comparatively recently was it realized that such a relation actually exists. Even in Newton’s day these two entities were still regarded as being totally independent. The current view is that time is one-dimensional, and constitutes a kind of quasi-space which joins with three dimensions of space to form a four-dimensional space-time framework, within which physical objects move one-dimensionally. While this four-dimensional space-time concept is relatively new, the basic idea of space and time as the elements of a framework, or setting, for the activity of the universe is one of long standing. Indeed, it is so deeply embedded in physical thought that it is very difficult to recognize the existence of any alternative. The problem involved in making a break with this familiar habit of thought is illustrated by the fact that even in the first edition of this work, the postulates of the theory being described were still expressed in terms of “space-time.” Eventually, however, it was realized that space-time is actually motion.
Throughout the development of thought concerning this subject, it has been recognized by everyone that motion is a relation between space and time. The magnitude of the motion, the speed or velocity, has been expressed accordingly, in terms of centimeters per second, or some equivalent. The four-dimensional concept embraced by current science assumes that a totally different kind of relation also exists. In application to entities of a fundamental nature, such a duality is inherently improbable, and the development of the theory of the universe of motion now indicates that the assumption is erroneous. Our finding is that any relation between space and time is motion or an aspect of motion.
It is now evident that the concept of space-time employed in conventional physical theory, and carried over into the early stages of the development of the theory of the universe of motion, is a partial, and rather confused, recognition of the nature of the fundamental relation between space and time. This so-called “space-time,” a simple relation between a space magnitude and a time magnitude, is the basic scalar relation between space and time; that is, “space-time” is actually scalar motion. Fundamentally, this relation is mathematical. Its dimensions are therefore mathematical, or scalar, dimensions. From the mathematical standpoint, an n-dimensional quantity is simply one that requires n independent scalar magnitudes for a complete definition. It follows that in a three-dimensional universe there can be three scalar dimensions of motion.
The spatial aspect of one (and only one) of these can be represented geometrically in a reference system of the conventional coordinate type. Here we are dealing with three dimensions of space, but only one dimension of motion. The reference system is not capable of representing motion in the other two scalar (mathematical) dimensions. But the fact that the same space and time are involved in all types of motion means that there are some effects of the motion in these other dimensions that are observable, at least indirectly, in the reference system. The force of gravitation, for example, is reduced by a distribution over all three dimensions, and only a fraction of it is effective in the space of the reference system.
Use of the term “dimension” in both mathematical and geometric applications leads to some confusion. The term is usually interpreted geometrically, and many persons are puzzled by the introduction of scalar dimensions of motion into the physical picture. It has therefore been suggested that some different designation ought to be substituted for “dimension” in one of the two applications. However, all dimensions are inherently mathematical. The geometric dimensions are merely representations of numerical magnitudes.
Motion at a speed less than unity causes a change of position in space. The three-dimensionality of the universe applies to the spatial aspect of this motion as well as to the motion as a whole. The space involved in one of the scalar dimensions of motion can therefore be resolved into three independent components, which can be represented geometrically. Since no more than three dimensions exist, there is no basis, within three-dimensional geometry, for representation of the spatial aspects of the other two scalar dimensions of motion, except under certain special conditions discussed in the preceding pages. Motion at a speed greater than unity causes a change of position in three-dimensional time. If independent, this motion cannot be represented in the spatial reference system. However, if it is a component of a combination of motions in which the net total speed is on the spatial side of the neutral level, the temporal speed acts as a modifier of the spatial speed; that is, as a motion in equivalent space.
From the foregoing it can be seen that the universe is not four-dimensional, as seen by conventional science, nor is it six-dimensional (three dimensions of space and three dimensions of time), as some students of the Reciprocal System of theory have concluded. We live in a three-dimensional universe. Just how these three dimensions manifest themselves in any specific case depends on the individual circumstances.
Two physical entities make contact when they occupy adjacent units of either space or time. It is commonly believed that the essential condition for contact is to reach the same point in space at the same time, but this is not necessarily true. Objects located in the spatial reference system must be at the same stage of the progression, that is, at the same clock time, in order to make contact, but this is only because there is a space progression paralleling the time progression recorded by the clock, and unless two such objects are at the same stage of this space progression they are not at the same spatial location. Two objects that are in contact in space are not usually at the same location in three-dimensional time. Likewise, objects that are in contact in time are usually at different spatial locations.
This fact that the spatial contact is independent of the time location accounts for the containment of the material moving at upper range speeds in the interiors of the giant galaxies prior to the explosions that produce the quasars. Since the components of this high speed aggregate are expanding into time at speeds in excess of the speed of light, it might be assumed that they would quickly escape from the galaxy. But the increased separation in time does not alter the spatial relations. The equilibrium structure in space that exists in the outer regions of the giant galaxy is able to resist penetration by the high speed material in the same manner in which it resists penetration by matter moving at less than unit speed.
The motions of cosmic entities in time are similarly restrained by contacts with cosmic structures, but these phenomena are outside our field of observation. The phenomena of the cosmic sector with which we are now concerned are the observable events which involve contacts of material objects with objects that are either partially or totally cosmic in character. Interaction of a purely material unit with a cosmic unit, or a purely cosmic unit with a material unit follows a special pattern. Where the structures are identical, aside from the inversion of the space-time relations, as in the case of the electron and the positron, they destroy each other on contact. Otherwise, the contact is a relation between a space magnitude and a time magnitude, which is motion. Viewed from a geometrical standpoint, these entities move through each other. Thus matter, which is primarily a time structure, moves through space, while the uncharged electron, which is essentially a rotating space unit, moves through matter.
Material and cosmic atoms, and most sub-atomic particles, are composite structures that include both material (spatial) and cosmic (temporal) components. Inter-sector contacts between such objects therefore have results similar to those of contacts between material objects. To an observer, such a contact appears to be the result of a particle entering the local environment from an outside source. These results are indistinguishable from those produced by an incoming cosmic atom. The contact will therefore be reported as a cosmic ray event. The cosmic atoms involved in these events are moving at the ordinary inverse speeds of the cosmic sector, rather than at the very high inverse speeds of the atoms that are ejected into the material sector as cosmic rays. The most energetic of the reported cosmic ray events therefore probably result from these random encounters.
One other cosmic event that has an observable effect in the material sector is a catastrophic explosion, such as a supernova or a galactic explosion, that happens to coincide with the time of the spatial reference system. The radiation received in the material sector from ordinary cosmic stars is widely dispersed in space, because only a few of the atoms of each of these stars are located in the small amount of space that is common to the cosmic star and the spatial reference system as they pass through each other. But a cosmic explosion releases a large amount of radiation in a very small space, just as an explosion of the material type releases a large amount of radiation in a very short time. We can thus expect to observe some occasional very short emissions of strong radiation at cosmic frequencies (that is, the inverse of the frequencies of the radiation from the corresponding explosions of the material type).
Both the theoretical investigations and the observations in this area are still in the early stages, and it is premature to draw firm conclusions, but it seems likely that the theoretical short, but very strong, emissions of radiation can be identified with some of the gamma ray “bursts” that are now being reported by the observers. A reported “new class of astronomical objects” is described in terms suggesting cosmic origin. These objects, says the report, “emit enormous fluxes of gamma radiation for periods of seconds or minutes and then the emission stops.”289 Martin Harwit tells us that “remarkably little is known about gamma ray bursts,”290 and elaborates on that assessment by citing an observers’ summary of the existing situation, the gist of which is contained in the following statement:
Neither the indicated direction or coincidence in times of occurrence have yet established an association between these bursts and any other reported astrophysical phenomena. Even today, 1978, with 71 bursts cataloged, and with improved directional resolution available, the sources of these bursts remain unidentified without even a strong suggestion of the class or classes of objects responsible.
In addition to these events involving contacts between the entities of the two sectors, there are other phenomena which result from the fact that photons of radiation exist on the sector boundary, and therefore participate in the activities of both sectors. This is a consequence of the status of unit speed as the speed datum, the physical zero, as we called it in the earlier discussion. From the standpoint of the natural reference system, a speed of unity measured from zero speed, and an inverse speed of unity measured from zero energy (inverse speed) are equal to each other, and equal to zero. An object moving at this speed relative to the conventional spatial reference system, or to an equivalent temporal reference system, is not moving at all from the natural standpoint. The photons of radiation, which move at unit speed in the conventional reference system, are thus stationary in the natural system of reference, regardless of whether they originate from objects in the material sector, or from objects in the cosmic sector. It follows that they are observable in both sectors.
Because of the inversion of space and time at the unit level, the frequencies of the cosmic radiation are the inverse of those of the radiation in the material sector. Cosmic stars emit radiation mainly in the infrared, rather than mainly at optical frequencies, cosmic pulsars emit x-rays rather than radio frequency radiation, and so on. But these individual types of radiation are not recognized as such in the material sector because, as we found earlier, the atoms of matter that are aggregated in time to form cosmic stars, galaxies, etc., are widely separated in space. The radiation from all types of cosmic aggregates is received from these widely dispersed atoms as a uniform mixture of very low intensity that is isotropic in space.
This “background radiation” is currently attributed to the scattered remnants of the radiation originated by the Big Bang, which are presumed to have cooled to their present state, equivalent to an integrated temperature of about 3K, in the billions of years that are supposed to have elapsed since that hypothetical event occurred.
The Big Bang is one of the major features of the universe as it appears in modern astronomical theory. The next chapter will present a comparison of this astronomical universe with the universe of motion defined by the postulates of this work. It will be shown that, although the building blocks of the astronomers’ universe are observed entities—stars, galaxies, etc.—that exist in the real sense, the universe that they have constructed as a setting for these real objects is a purely imaginary structure that has no resemblance to the real physical world.
Inasmuch as science claims to have methods and procedures that are capable of arriving at the physical truth, it may be hard to understand how the astronomers, who presumably utilize scientific methods, could have reached such very unscientific results. But an examination of astronomical literature quickly shows just what has gone wrong. The astronomers have followed the lead of a modern scientific school whose methods and procedures do not conform to the rigid standards of traditional science.
Of course, this assertion will be vigorously denied by those whose activities are thus characterized. So let us see just what is involved in this situation. Aside from gathering information, the traditional way of extending scientific knowledge is by means of what is known as the hypothetico-deductive method. This method involves three essential steps: (1) formulation of a hypothesis, (2) development of the consequences thereof, and (3) verification of the hypothesis by comparing these consequences with the facts disclosed by observation and measurement. The nature of this process allows a wide latitude for the construction of the basic hypotheses. On the other hand, the constraints on item (3), the verification process, are extremely rigid. In order to qualify as an established item of scientific knowledge, a hypothesis must be capable of being stated explicitly, so that it can be tested against observation or measurement. It must be so tested in a large number of separate applications distributed over the entire field to which this item is applicable, it must agree with observation in a substantial number of these tests, and it must not be inconsistent with observation in any case.
It is important to bear in mind that a physical proposition of a general nature, the kind of a hypothesis that enters into the framework of the astronomers’ universe, cannot be verified directly in the manner in which we can verify a simple assertion such as “Water is a compound of oxygen and hydrogen.” Direct verification of a general relation would require an impracticable number of separate correlations. In this case, therefore, it is necessary to rely on probability considerations. Each comparison of one of the consequences of a hypothesis with observed or measured facts is a test of that hypothesis. Disagreement is positive. It constitutes disproof. If even one case is found in which a conclusion that definitely follows from the hypothesis is in conflict with a positively established fact, that hypothesis, in its existing form, is disproved.
Agreement in any one comparison is not conclusive, but if the tests are continued, every additional test that is made without encountering a discrepancy reduces the probability that any discrepancy exists. By making a sufficient number and variety of such tests, the probability that there is any conflict between the consequences of the hypothesis and the physical facts can be reduced to a negligible level. Just where this level is located is a matter of opinion, but the principle that is involved is the same as that which applies to any other application of the probability laws. Many positive correlations are required in order to establish a probability strong enough to validate a hypothesis. If only a few tests can be made, the probability of validity remains too low to be acceptable.
To illustrate the effect of a small number of correlations with empirical knowledge, let us consider one of the coin tossing experiments that are used extensively in teaching probability mathematics. We will assume, for purposes of the illustration, that the participants have not been given the opportunity to examine the coin that will be used in the experiment. Thus there is a small possibility that this coin is a phony object with heads on both sides. If the first toss comes up heads, this is consistent with a hypothesis that a two-headed coin is being used, but clearly, this one case of agreement with the hypothesis does not change the situation materially. The odds are still overwhelmingly in favor of the coin being genuine. Not until a substantial number of successive heads have been tossed, perhaps nine or ten, would the double-headed coin hypothesis be taken seriously, and a still longer run would be required before the hypothesis could be considered validated.
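The arithmetic behind this illustration can be sketched as a simple Bayesian update. The prior probability of 1-in-1000 for a two-headed coin used below is an assumed figure chosen only for illustration; the text does not specify one.

```python
# Bayesian update for the double-headed-coin illustration.
# The prior (1-in-1000 that the coin is phony) is an assumed
# figure for illustration, not taken from the text.

def posterior_two_headed(n_heads, prior=0.001):
    """Probability that the coin is two-headed after n consecutive heads."""
    # Likelihood of n heads in a row: 1.0 for a two-headed coin,
    # 0.5**n for a genuine (fair) coin.
    p_fake = prior * 1.0
    p_fair = (1.0 - prior) * 0.5 ** n_heads
    return p_fake / (p_fake + p_fair)

for n in (1, 5, 10, 20):
    print(f"{n:2d} heads -> P(two-headed) = {posterior_two_headed(n):.4f}")
```

One head barely moves the posterior from the prior; around ten successive heads the two hypotheses become roughly equally likely; a run of twenty makes the two-headed hypothesis nearly certain, which is the pattern of "agreement accumulating into validation" that the passage describes.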
The effect of the number of trials, or tests, on the probability of the validity of a hypothesis is independent of the nature of the proposition being tested. Astronomical conclusions are subject to the same considerations as any other hypotheses, including the hypothesis of the double-headed coin. But very few of the key features of the astronomers’ picture of the basic structure of the universe are supported by more than one or two correlations with observation. Some have none at all. The fact that one or two cases of agreement between theory and observation, where they exist, add little to the probability of validity thus means that these crucial astronomical conclusions are unconfirmed. As scientific products they are incomplete. The final step in the standard scientific procedure, verification, has not been carried out.
To make matters worse, many of the conclusions are not merely unverified. The processes by which they have been reached are such that a large proportion of them are necessarily wrong. The reason is that these conclusions rest, in whole or in part, on general principles that are invented. The status of invention as a source of physical theory was discussed in Volume I, but a review of the points brought out in that discussion that are relevant to the astronomical situation will be appropriate at this time.
Modern physical theory is a hybrid structure derived from two totally different sources. In most physical areas, the small-scale theories, those that apply to the individual physical phenomena and the low-level interactions, are products of induction from factual premises. Many of the general principles, those that apply to large-scale phenomena, or to the universe as a whole, are invented. “The axiomatic basis of theoretical physics cannot be an inference from experience, but must be free invention,”292 is Einstein’s contention.
There is a great deal of misunderstanding as to the role of experience in the first step of the scientific process, the formulation of a hypothesis, largely because of the language that is used in discussing it. For example, in describing “how we look for a new [physical] law,” Richard Feynman tells us, “First we guess it.”293 This would seem to leave the door wide open, and such statements are widely regarded as sanctioning free use of the imagination in theory construction. But Feynman goes on to stipulate that the hypothesis must be a “good guess,” and enumerates a number of criteria that it must satisfy in order to qualify as “good.” Before he is through he concedes that “what we need is imagination, but imagination in a terrible strait-jacket.”294
What Feynman calls a “good” guess is actually one that has a substantial probability of being correct. As he points out, there are an “infinite number of possibilities” if invention is unrestricted. The probability of any specific one of these being correct is consequently near zero. The scientific way of arriving at a reasonably probable hypothesis (the way that Feynman describes, even though some of his language would lead us to think otherwise) is to utilize inductive processes such as extrapolation, analogy, etc., to obtain the kind of an “inference from experience” to which Einstein objects. A hypothesis derived inductively, that is, as an inference from experience, is, in effect, pretested to a considerable extent. For instance, an analogy in which a dozen or so points of similarity are noted is equivalent to an equal number of positive correlations subsequent to the formulation of a hypothesis. Thus the inductive theory has a big head start over its inventive counterpart, and is within striking distance of proof of validity from the very start.
But inductive reasoning requires a factual foundation. Inferences cannot be drawn from experience unless we have had experience of the appropriate nature. In many of the fundamental areas the necessary empirical foundations for the application of inductive processes have not been available. The result has been a long-standing inability to find answers to many of the major problems of the basic areas of physics. Continued frustration in the search for these answers is the factor that has led to the substitution of inventive for inductive methods.
A similar situation exists throughout most of the astronomical field, where normal inductive methods are difficult to apply because of the scarcity of empirical information and the unfamiliar, and seemingly abnormal, nature of many of the observed phenomena. The astronomers have therefore followed the example of the inventive school of physicists, and have drawn upon their imaginations for their hypotheses. Application of this policy has resulted in replacement of the standard scientific process of theory construction by a process of “model building.” This process starts with a “free invention,” a “castle in the air,” as H. L. Shipman describes it. Beginning with “a small, neat castle in the air,” he says, you “patch on extra rooms and staircases and cupolas and porticos.”295 The result is not a theory, an explanation or description of reality, it is a model, something that, as Shipman explains, is merely intended to facilitate understanding of the real world. ”The model world exists only in people’s minds,”296 he says.
The fatal weakness of this kind of a program, based on invention, is that inventive hypotheses are inherently wrong. The problems that they attempt to solve almost invariably exist because some essential piece, or pieces, of information are missing. This rules out obtaining the answer by inductive methods, which must have empirical information on which to build. Without the essential information the correct answer cannot be obtained by any means (except by an extremely unlikely accident). The invented answer drawn from the imagination to serve as the basis for a model is therefore necessarily wrong.
Of course, the erroneous invented theories, or models, cannot meet the standard tests of validity, and the same process of invention has been applied to the development of expedients for evading the verification requirements. Not infrequently these are employed to evade actual conflicts with the observed facts. Chief among them is the ad hoc assumption. When the consequences of a hypothesis are developed, and it is found that they disagree with observation in some respects, instead of taking this as disproof of the validity of the hypothesis, the theorist uses his ingenuity to invent a way out of the difficulty that cannot be tested, and therefore cannot be disproved. He then assumes this invention to be valid. Like the invented theories themselves, and for the same reasons, these inventions that take the form of ad hoc assumptions are inherently wrong.
Another of the expedients frequently employed to justify acceptance of a hypothesis whose validity has not been, or cannot be, tested is the “There is no other way” argument that we have had occasion to discuss at a number of points in the preceding pages. No further comment should be necessary on the usual form of this argument, but we often meet it in a somewhat different form in astronomy. There are many astronomical phenomena about which very little is known, and only one or two correlations with observation are possible, as matters now stand. There is a rather widespread impression that, under the circumstances, if a hypothesis is consistent with observation in these instances, its validity is established. Here the argument is that the hypothesis has been tested in the only way that is possible, and has withstood that test. The fallacy involved in calling this a verification can be seen when it is realized that the limitation of the testing of a hypothesis by reason of the unavailability of more than one or two tests is equivalent to discontinuing the coin-tossing tests of the double-headed coin hypothesis after the first or second toss. The truth is that the increase in the probability of validity of a hypothesis that results from a favorable outcome of one or two tests is insignificant, regardless of the reasons for the limitation of the testing to these cases.
What the current practice amounts to is that instead of proof the astronomers are offering us absence of disproof. Shipman makes this comment about the situation in one of the poorly tested areas:
To a great extent this picture [of stellar evolution] is based on limited models, blind faith, and a few observed facts.297
“Blind faith” may be appropriate in religion, but it is totally unscientific. One of the most unfortunate results of the reliance on absence of disproof is that it favors departures from reality in the construction of theories. The farther a hypothesis diverges from reality, the less opportunity there is for checking it against established facts, and the more difficult it is to disprove. By the time the speculation reaches such concepts as the black hole, all contact with reality has long since been lost.
For example, examination of the case in favor of the black hole as the explanation of the x-ray source Cygnus X-1, the object that is supposed to provide the best observational evidence for the existence of a black hole, reveals that this case is argued entirely on the basis of what this object is not. It is not a white dwarf, so it is claimed, because it is larger than the accepted unverified hypothesis as to the nature of the white dwarf stars will permit. It is not a neutron star, because, for the same reason, the observations conflict with the accepted unverified hypothesis as to the nature of the hypothetical neutron stars. “It is difficult to explain Cygnus X-1 as anything but a black hole,”298 says Shipman. In less credulous times, the inability of an investigator to find a viable explanation for a phenomenon would have been regarded as an indication that his job is still unfinished. But now we are expected to accept the best that he can do as the best that can be done.
In justice to this author, however, it should be noted that, although he accepts the existence of the black hole as “probable” on the strength of the foregoing argument, he evidently has some qualms about giving unreserved support to such an excursion into the land of fantasy, because he goes on to say:
Black holes are, so far, entirely theoretical objects… It is very tempting, especially for people who like science fiction, to succumb to the Pygmalion syndrome and endow their model black holes with a reality that they do not yet possess.299
It is, of course, true that the opportunities for gathering factual information are severely restricted in astronomy, where experimentation is not possible and observation is limited by the immense distances and very long times that are involved in the phenomena under consideration. The structure of astronomical theory thus rests on a very narrow factual base, and it is to be expected that more than the usual amount of speculation will enter into astronomical thinking. But the presence of this speculative component in current thought is all the more reason for taking special precautions to maintain a strict distinction between those items that have met the test of validity and those that are still unverified. In order to preserve the contact with reality, it is particularly important to avoid pyramiding unverified results.
Here the demands of science collide with the interests of the scientists, especially the theorists. Advancement of theoretical knowledge is a slow and difficult task. Few of those who undertake this task ever accomplish anything of a lasting nature, other than minor modifications of some features of previous thought. But the professional scientists of the present day are under intense pressure to produce results of some kind. Financial support, personal prestige, and professional advancement all depend on arriving at something that can be published. As expressed among the university faculties, “Publish or perish.” So the theorists concentrate their efforts mainly in the far-out regions where there is only a minimum of those inconvenient facts that are the principal enemies of theories, and they fill the scientific literature with products that cannot be tested because they have too few contacts with physical reality. It is the pyramiding of these untestable hypotheses that has produced the imaginary universe of modern astronomy that we will examine in the next chapter.
To the extent that the theorists make any attempt to justify their wholesale use of imaginary entities and phenomena in the construction of their models, they rely on the contention that “there is no other way”; that the amount of factual information available for their use is totally inadequate to provide the foundation for theoretical development. This is a specious argument. It serves the needs of the individual whose primary purpose is to find something that he can publish, but it makes no contribution toward the advancement of knowledge. On the contrary, to the extent that the imaginary results are accepted, it places obstacles in the way of real advances.
Furthermore, the lack of factual information is not nearly as acute as the astronomers depict it. It is true that the amount of information about individual phenomena is often quite limited, but this is not peculiar to astronomy. It is common to all areas of inquiry, and science has found ways of overcoming this handicap. For example, information in several areas may sometimes be pooled. The concept of “energy,” which has played an important part in the development of physical theory, did not emerge from the study of any one individual area. It was derived by the process known as abstraction, involving the use of data from many such areas. It would have been equally possible for the astronomers to have abstracted the property of “extremely high density” from a number of different astronomical phenomena, and to have examined it in the light of the large amount of factual information thus collected. This might well have resulted in the discovery of the true cause of this high density before it was brought to light by the theory of the universe of motion.
Such considerations are now no more than academic in application to astronomy, since it has been demonstrated in the preceding pages that the physical principles developed from the postulates that define the universe of motion are capable of dealing with the whole range of astronomical phenomena. But one of the things that many scientists have envisioned is the eventual application of scientific methods and procedures to the solution of the problems of some of the non-scientific branches of thought that have long been mired down in confusion and contradiction. Before anything of this kind can be accomplished, it will obviously be necessary for the scientific profession itself to return to the traditional methods and procedures that are responsible for its record of achievement. The black holes, the quarks, the Big Bang, and similar fantasies are the products that are publicized in the media as the fruits of scientific research. The ordinary individual cannot be expected to realize that the remarkable accomplishments of science over the past several centuries have not been made by such flights of fancy, but by a steady application of the traditional methods of science to one problem after another, testing each answer as it is obtained, and building up a solid and stable structure of theory brick by brick. If science is to be applied to economics, for example, it will have to be in this slow, careful, and painstaking way. Economics already has too many of the economic equivalents of the black hole.