The Fundamentals of Science in the Twenty-First Century

Principal Address to the Third Annual NSA Conference,
University of Utah, Salt Lake City, August 18, 1978

As you’ve noticed, it took quite a little while for the CBS crew to set up this evening, and on that account we’re running at least half an hour late. So I’m going to omit the first half hour of what I was going to say... It’s unfortunate, because that would have included some of my shadiest jokes. But I’ll try to take up from the end of that half hour. Frank took you back into history quite a little way, but just to do him one better, I’m going to go still farther back.

Five thousand years ago, when the invention of writing on clay tablets by the Sumerians first gave the human race an opportunity to leave a permanent record of its thoughts and actions, there was already in existence a rather sophisticated science of astronomy. The priests, who were the scientists of those days, were not only familiar with elementary astronomical facts, such as the apparent movements of the sun, moon and planets, but they had also advanced to the point where they were able to predict eclipses and to calculate the length of the year to within about a half hour of its present accepted value. The premises upon which these calculations and others of the same kind were made were the fundamentals of the science of that day, in the sense in which I am using the term now: they were the most basic of the principles on which that science rested.

These principles were originally derived by a simple application of what we now call inductive reasoning; that is, they were generalizations from experience. And that is the most reliable method of arriving at scientific principles, fundamental or otherwise, but unfortunately, it is limited by the amount of empirical information that’s available, and by the extent to which that information has been analyzed. So the result is that an inductive science, such as that of the ancient peoples, has a tendency to fall behind the progress of empirical discovery, and ultimately it acquires a rather embarrassing accumulation of unsolved problems. Now that was the situation in Egypt, in Babylonia and in the Far East about three thousand years ago.

The time was clearly ripe for some new approach, and that was provided by a remarkable group of thinkers who flourished in Greece during the Golden Age of that country’s history. The source of order in the universe, these men said, was mind, and the proper way of arriving at general principles was to apply insight and reasoning. The result of that change in policy was to concentrate attention on the causes of physical phenomena rather than on the phenomena themselves. Where the Egyptians saw only the fact that a rock falls if it’s released from a height, the Greeks looked for the cause of the fall. Now they reasoned that everything must have its natural place, so the rock in falling is merely seeking its fixed natural place. In this way, by providing an explanation for what happened, they remedied the chief defect of the previous inductive theories. Similarly they reasoned, as Professor Meyer indicated, that while the earth is obviously imperfect, the heavens are perfect. And all heavenly motions must then take the perfect form, that of a circle. So the orbits of the planets are undoubtedly circular.

Now observation and experiment were definitely relegated to a secondary position by the Greeks, but they were not disregarded altogether. So when the observations showed that the planetary orbits are not exactly circles, it was recognized that there was an awkward discrepancy that had to be dealt with. But one of the strong points of an inventive science, such as that of the Greeks, is that it lends itself readily to the assimilation of new information by means of more invention: so they assumed that the planets move in small circles, called epicycles, and these epicycles in turn move around the main planetary orbit. Then, when further observational refinement disclosed still more discrepancies, those could be taken care of in exactly the same way, merely by adding more epicycles.

This Ptolemaic theory of planetary orbits is typical of inventive theories in general. And since we see it in a historical perspective, by taking a look at this Ptolemaic theory we can get an idea of the general characteristics of inventive theories. The first point that we need to note is that the theory was mathematically correct, within the then-existing observational limits. That is a general characteristic of all inventive theories, because they’re invented for that specific purpose: they’re specifically designed to fit mathematics that are already known. The second significant point is that the Ptolemaic theory was conceptually wrong. The interpretation of the mathematics was wrong. That, again, is a general characteristic of invented theories, because of the circumstances under which they’re invented. As many observers have pointed out, long-standing problems in science do not continue to exist because of a lack of competence on the part of those who are trying to solve them, nor do they continue to exist because of a lack of methods by which to go about solving them. They continue to exist because some piece or pieces of essential information are missing. In the case of the Ptolemaic theory, there were two such pieces of information: the Greeks did not realize that the planets revolve around the sun rather than around the earth, and they did not know that there is a force of gravitation controlling those movements. Without those two pieces of information, neither the Ptolemaic theory nor any other theory that was invented to explain the mathematics could have been correct. Now that is a general characteristic of inventive theories, and I am stressing it at this time because it will be important later on in other connections. If all the essential information is available, then there’s no need to invent a theory; we can derive it by inductive means. If the essential information is not there, then any theory we invent cannot be conceptually right.

In view of the practically unlimited opportunities for making additional ad hoc assumptions to meet any situation that may arise, an inventive science never reaches the kind of situation that causes the downfall of inductive sciences. At any given time there may be a few items for which plausible explanations have not yet been invented, but there is never the large accumulation of unexplained phenomena that characterizes an inductive science that has fallen behind the progress of empirical investigation. However, the freedom to meet new requirements by adding more and more ad hoc assumptions, or epicycles, leads to a fate of a different kind. The time ultimately comes when such a system of theory simply has too many epicycles.

In the meantime, even though the fundamental theories in current use are inventive, the accumulation of empirical information and the construction of inductive generalizations of a lower rank continues. Ultimately, a point is reached where the principles derived inductively are sufficiently broad in their scope to challenge the premises of the prevailing inventive theories. The Greek system reached this point about 500 years ago, and science then reverted to the inductive status, discarding inventive concepts such as the perfection of the heavens and the natural places of physical entities in favor of principles formulated by such men as Kepler and Newton through inductive reasoning from observed and measured facts.

With the benefit of all the empirical information accumulated during the approximately 2,500 years since the demise of the earlier inductive systems of the ancient civilizations, the new inductive science was a vastly improved product, and it scored some remarkable successes. At one time its practitioners were quite confident that a complete understanding of the universe was within their grasp. But here, again, the inherent inability of an inductive system of theory to keep pace with the progress of empirical discovery asserted itself. Eventually, Newtonian physics was confronted with a series of discrepancies for which it had no plausible answers. Another reversal of policy took place, and the inductive science of Newton and his contemporaries was replaced by a science based on invented principles, just as the first inductive sciences were replaced 3,000 years earlier by the inventive system of the Greeks.

When an idea or system of ideas gains general acceptance and becomes a familiar feature of current thought, its origins recede from view, and it is quite likely that many a reader may be reluctant to believe that the basic theories of modern physics—the relativity theory, for instance—belong in the same category as the Ptolemaic theory of astronomy. But all of them belong in the category of pure inventions. The originators of the modern theories do not deny this; indeed, they emphasize it. Einstein, for example, saw the general acceptance of his theories in just the way that I have described: a victory of inventive science over inductive science. In his opinion, pure invention is the only way in which true fundamental principles can be derived. Einstein was highly critical of Newton’s attempts to derive such principles inductively. He said this:

Newton, the first creator of a comprehensive, workable system of theoretical physics, still believed that the basic concepts and laws of his system could be derived from experience.... the tremendous practical success of his doctrines may well have prevented him and the physicists of the eighteenth and nineteenth centuries from recognizing the fictitious character of the foundations of his system.

Einstein’s own view was that the “basic concepts and laws of physics” (what I am calling the fundamentals) are “in a logical sense free inventions of the human mind.” He elaborates this view in these statements taken from the book The World As I See It:

Since, however, sense perception only gives information of this external world of “physical reality” indirectly, we can only grasp the latter by speculative means.
The theoretical scientist is compelled in an increasing degree to be guided by purely mathematical, formal considerations in his search for a theory, because the physical experience of the experimenter cannot lift him into the regions of highest abstraction.
The axiomatic basis of theoretical physics cannot be extracted from experience but must be freely invented...

There is a rather general tendency to assume that Einstein and the other architects of modern science were not actually as casual about the background of their theories as these words would indicate; that their basic principles must have been anchored to something solid at some point. But this is not true. As Rudolf Carnap puts it, these theories were “constructed floating in the air, so to speak.” Einstein gives us enough information about some of his concepts to make it clear that when he talks about “free invention” he means exactly that. For example, the propagation of radiation plays a very important part in his theories, and his comments about the explanation that he invented to account for the mechanism of propagation and its relation to space are therefore very significant. In one of his books he tells us that the formulation of a theory to account for this phenomenon is a very difficult task, and he concludes with this statement:

Our only way out seems to be to take for granted the fact that space has the physical property of transmitting electromagnetic waves, and not to bother too much about the meaning of this statement.

The point of all this is that the invented theories of present-day science have exactly the same logical standing that the Ptolemaic theory of astronomy, the “natural place” theory of gravitation and the other theories invented by the Greek scientists had in their day. They are mathematically correct but conceptually wrong.

This statement may seem to be in direct conflict with the many confident assertions in scientific literature to the effect that the fundamental theories of modern physics are established beyond the shadow of a doubt. But if one examines the basis for these assertions, one finds that the evidence that is cited is purely mathematical. What has been established is that the theories produce the correct mathematical results. Like all other inventive theories, they have been specifically designed to produce these correct results. But none of them is unique. In each case there are alternatives that produce the same result. And, as Richard Feynman points out, there is no scientific criterion by which we can choose between any two of these alternatives “because they both agree with experiment to the same extent. So two theories, although they may have deeply different ideas behind them, may be mathematically identical, and then there is no scientific way to distinguish them.” He goes on to say, “Every theoretical physicist who is any good knows six or seven theoretical representations of exactly the same kind of physics.”

What Feynman does not say is that these comments apply only to invented theories; they have no relevance to theories derived by induction from factual premises. The kinetic theory of gases, for instance, is an inductive theory. It explains gas laws in terms of the motions of the molecules of which the gases are composed. No one knows a half dozen other representations of these gas laws that are equally correct, or even one such alternative. Because it is tied in to experience—physically as well as mathematically—the kinetic theory is unique. It is both mathematically and conceptually correct. Inventive theories in general, including modern theories such as relativity and the quantum theories, are mathematically correct but conceptually wrong. This is not because of any errors in their construction, but by reason of their inherent nature.
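
To make the contrast concrete, here is a minimal numerical sketch of the kinetic-theory argument (my own illustration; the gas, temperature, and volume are arbitrary choices, not anything from the talk). The pressure computed purely from simulated molecular motions reproduces the empirically established gas law:

    import numpy as np

    # Kinetic theory: pressure from molecular motion, P = N*m*<v^2> / (3*V),
    # compared with the empirical ideal-gas law, P = N*k*T / V.
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    m = 4.65e-26         # mass of one nitrogen molecule, kg (approximate)
    T = 300.0            # temperature, K
    N = 1_000_000        # number of simulated molecules
    V = 1.0e-3           # volume, m^3

    rng = np.random.default_rng(0)
    # Maxwell-Boltzmann: each velocity component is Gaussian, variance k_B*T/m.
    v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
    mean_sq_speed = np.mean(np.sum(v**2, axis=1))

    print("kinetic theory:", N * m * mean_sq_speed / (3.0 * V), "Pa")
    print("ideal gas law: ", N * k_B * T / V, "Pa")  # agree to sampling error

This is only a consistency check, of course, but it illustrates the kind of direct physical tie-in to experience that the lecture is describing.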

Whether there is any net gain in using inventive theories during times when the scientific community would otherwise have no theories at all to account for some of the important observed phenomena is an interesting philosophical issue. Inventive theories are not actually necessary. The mathematics, which always antedate the theories, could be used equally well without any theoretical explanation. So the issue boils down to the question: Is a wrong explanation better than no explanation at all? There is a widespread tendency, dating back at least to Francis Bacon, to answer this question affirmatively, the argument being that a plausible explanation, even if wrong, will suggest some lines of further investigation that may be productive. On the other hand, it is easy to see that insistence on adhering to Aristotle’s inventive theories was a serious impediment to scientific progress, particularly in the latter years of the ascendancy of Greek science. It can logically be deduced that insistence on adhering to the modern inventive theories is having a similar effect today.

In any event, the fact that now needs to be recognized as we approach the twenty-first century is that we have once more arrived at the kind of situation that developed in the Middle Ages. The currently accepted fundamental physical theories derived by pure invention have come to be overloaded with epicycles, while concurrently the development of inductively-based theory has caught up with the empirical discoveries, so that the way is now open for a return to the firmly-based inductive type of science.

The imminence of another policy reversal could easily be deduced from nothing more than a consideration of the times involved in the cycle of reversals just described. The first inductive sciences prospered for thousands of years before they were overthrown by the Greek inventive science. The first inventive science then endured for about 2,500 years before the second of the inductive sciences, the one commonly associated with the name of Newton, took over. The accelerating pace of science is evident in that only about four hundred years later this vastly improved inductive science was replaced by the second inventive science, the one now in vogue. Almost a hundred years more have elapsed. On the basis of a continuation of the same accelerating trend, it would be safe to predict, even without the benefit of any additional information, that another reversal of policy is now due. The fundamental principles of twenty-first century science probably will be those of a third inductive science—rather than the inventive concepts of twentieth-century physics.

But we do not have to depend entirely on inferences of this kind, as there is plenty of direct evidence leading to the same conclusion. The epicycles already have multiplied to the point of absurdity. The history of the quantum theories, for example, consists of a long series of modifications and conflicting interpretations which have made the theoretical structure practically unintelligible. Feynman, who should be in a position to assess the situation, tells us flatly, “I think I can safely say that nobody understands quantum mechanics.”

The situation with respect to atomic structure is similar. The most popular pastime in physics today is inventing properties by which to characterize quarks, the elusive particles of which the constituents of the atom are supposedly constructed. No one has ever seen, or otherwise observed, a quark, or anything that could be a quark. Indeed, one of the most urgent objectives of the theorists is to produce a plausible theory that will justify asserting that quarks are inherently unobservable. Nevertheless, we are told just what kinds of quarks can exist, and what their properties are: properties with such interesting names as color and charm.

In order to put this situation in the proper perspective, we should realize that while quarks have never been observed, the particles that are supposed to be constructed of quarks have never been observed either. Of course, these particles, the hypothetical constituents of atoms of matter, are called by familiar names, such as “electron.” But as we saw earlier, the properties that a particle must possess in order to play the part of the hypothetical electron in the atom are altogether different from those of the electron that is observed experimentally. There is actually no adequate justification for calling them by the same name. As Professor Herbert Dingle points out, we can deal with the electron as a constituent of the atom only if we ascribe to it “properties not possessed by any imaginable objects at all.”

This question of atomic structure provides a good example of the difference between induction and invention. Such men as Newton and Einstein recognized the difference very clearly. Newton emphasized that he did not employ invention (“hypotheses non fingo—I invent no hypotheses”), while Einstein condemned Newton’s inductive approach. But both procedures start in the same way—with a hypothesis—and this has confused the issue for many individuals. The difference lies in what happens when the hypothesis has been tested and found to be wrong. The Newtons then either discard the hypothesis, or modify it drastically. The Einsteins invent something that eliminates the discrepancy so that they can retain the original hypothesis.

When it was first discovered that atoms disintegrate under appropriate conditions, and emit particles in so doing, the hypothesis that the atom is constructed of such particles was very plausible. But, as we have seen, when this hypothesis is put to a test it fails, because the emitted particles are not capable of forming an atom. The inductive scientists, the Newtons, then have to abandon the hypothesis of an atom composed of particles, and try to formulate some other hypothesis. But the inventive scientists, the Einsteins, add some epicycles—they simply assume whatever is necessary to make the particles fit the requirements—and they retain the original hypothesis. This is the situation that exists today. Present-day theorists are obsessed with the idea that they must continue to subdivide matter until they come to an elementary unit. So they invent atomic constituents; they invent forces, such as the “strong force” to hold these invented constituents together; they invent quarks from which to construct the invented constituents, and there is even a suggestion that it may be necessary to invent a sub-quark—the so-called superstrings of infinitesimal length and zero width. The particles of physics are rapidly approaching the status of the fleas in the popular little verse:

Big fleas have little fleas
Upon their backs to bite ’em.
The little fleas have smaller fleas,
And so on, ad infinitum.

When we reach the point where further sub-division cannot be accomplished without invention, as is now the case with the atom, this tells us that the atom is not composed of smaller units of matter, but is composed of some other more fundamental entity. We will take up the question as to the identity of this entity shortly.

In the meantime, let us return to the question of inventive versus inductive science. While the position of the prevailing inventive science has been deteriorating, a large number of individual advances in different physical fields have extended a solid framework of inductive theory far beyond the level at which it stood in the early twentieth century. Scientific knowledge at that time was too limited to provide the necessary foundation for an inductive theory of the far-out regions into which observation was beginning to penetrate. This was the reason, of course, why inventive science gained the ascendancy. A few of the essential building blocks were already in place. The discrete nature of the units of radiant energy had been demonstrated, radioactivity had been discovered, electric current had been identified as a movement of electrons, and so on. But an immense amount of additional information had to be accumulated. That information is now available, and the final addition to the inductive structure needed to make it capable of dealing with the entire body of current empirical knowledge as it now stands has been provided by a new theoretical development. This development is the subject of my published works, and those of my associates; its basic outlines will be presented in the next three chapters.

As often happens in scientific research, this theoretical advance was an unexpected result of a project aimed at a totally different objective. This project, begun a half century ago, attempted to devise a way of calculating physical properties, or at least some of them, from the chemical composition. In some respects this is a rather unfavorable subject for investigation—it has had a great deal of attention from previous investigators, and the most promising lines of approach have been rather thoroughly combed over. On the other hand, it is a problem for which an answer certainly exists, since the physical properties of different substances obviously are results of their chemical composition.

I started with the concept embodied in the periodic table of the elements: the idea that the principal properties of these elements depend on the two variables represented vertically and horizontally in the tables. The first real advance that I made, after many false starts, was a recognition of the fact that one of these variables assumed both positive and negative values, whereas the other was always positive. Then, after much additional time and effort had been applied, it became evident that there were three of these principal variables rather than only two.

While these efforts to establish the form of the mathematical relations were under way, I was also struggling toward an understanding of the meaning of the mathematics. A tie-in to physical reality was necessary if the results were to be conceptually correct. Here, again, my first efforts followed conventional lines of thought.

The prevailing view was, and still is, that the differences between the properties of the chemical elements are due to variations in the number and arrangement of the sub-atomic particles of which these elements’ atoms are assumed to be composed. My original course of procedure was directed toward accounting for the mathematical relations on this basis. Continued lack of success forced me to consider other alternatives. One of the possibilities that I eventually visualized was that some of the variability might be due to differences in the motions of the constituent particles rather than to differences in the atomic composition. This approach was likewise unsuccessful, but it did produce some indications that I was on the right track. These indications became stronger when I placed more emphasis on motions and less on composition. Eventually, the idea that some of the variability might be due to differences in the motions was discarded in favor of the idea that such differences are responsible for all of the variations.

This was the first really radical conceptual jump in the development of my thought, and it had some significant consequences. When the variability was ascribed entirely to differences in the motions, the existence of only three major variables made it quite clear that the motions must be motions of the atom rather than motions of many atomic constituents. Then, since the inherent motion of the atom is almost certainly rotation, the number three naturally suggested rotations around the three perpendicular axes. The magnitudes of the three major variables could then be identified with the speeds of the three rotations. On this basis, the entity of which atoms of matter are composed, according to the conclusions reached earlier, is motion, and the atom is simply a combination of motions. The concept of an atom composed of subatomic particles now had to be discarded.

With this understanding of the general nature of the atomic structure, the stage was set for the final inductive step of the original project. Among the mathematical expressions that I had derived during the twenty years or more that I had already been working on the project were some interesting expressions relating certain physical properties of the elements directly to their atomic numbers. What I now had to do was to put these expressions in terms of motions. This was another long, and often frustrating, task. But after several more years in which I examined every possibility that I could think of, plausible or implausible, it finally dawned on me that one of the most intriguing of the mathematical expressions that I had formulated, one that related the inter-atomic distances of the elements in the solid state to their atomic numbers, could be very easily explained if there were a general reciprocal relation between space and time.

If anyone who encounters this idea for the first time finds it rather weird, I can understand their reaction. It struck me that way too. My first impression was that the idea of the reciprocal of space was conceptually absurd. But when I took a closer look at this concept, I could see that it was not so unreasonable after all. The only relation between space and time of which we have any definite knowledge is motion. And in motion, space and time are reciprocally related. So I examined further the consequences of such a relation. I found, much to my surprise, that it led directly to simple and logical solutions for at least a half dozen longstanding problems of physical science.

Anyone who has ever done research work will understand that this is the kind of breakthrough that we visualize in our most rosy dreams, and, of course, it called for the initiation of a full-scale investigation to see just how far this clarification of the physical picture would extend. By the time of my first publication, in 1959, I had been able to formulate a set of postulates, incorporating the reciprocal concept. I could show that the principal features of the major subdivisions of physical science could be obtained by pure deduction from these postulates, without the aid of any supplementary assumptions or any information from experience. In the years since the initial publication, scientists in all parts of the world have joined in the effort. The scope of the deductive system has been increased to the point where we can predict that it will ultimately achieve the objective that Newtonian science once envisioned: It will encompass the entire physical universe.

For those who shudder at the thought of having to subject their scientific beliefs to a complete overhaul, I want to say that even though the new theoretical system rests on a different foundation, in most instances it arrives at the same conclusions as conventional theory. I would estimate that ninety percent of what now passes for scientific knowledge is incorporated into the new system either just as it stands, or with nothing more drastic than a change in the language in which it is expressed. Another five percent or so retains the mathematics in the existing form, but alters the interpretation. Not more than five percent of conventional scientific thought has to undergo any significant change, and these major reconstructions are confined to the far-out regions: the realms of the very small, the very large and the very fast, the same regions in which conventional science is encountering its most serious problems.

On first consideration, it may seem strange that totally different basic premises would lead to much the same results in so many cases. There is, however, a very simple explanation. The ninety percent of present-day science that is incorporated into the new deductive system without significant change is not derived from the general principles invented by Einstein and other modern physicists. It is derived empirically. The theories included in the ninety percent are the inductive theories of lower rank than the fundamental principles I have been discussing. What the new system of theory does in these areas is to provide a general theoretical basis for the empirically-derived relations, something that has never been available before.

As I pointed out in the discussion of the Ptolemaic theory, the construction of an inductive theory is impossible if some essential piece of information is missing. When observation and measurement were extended into what I have called the far-out regions, Newtonian science lost the battle to Einstein and his inventions because the essential piece of information that would have enabled understanding the situation in these far-out regions was not available. We have now identified it.

The piece of information that has been missing until now is the reciprocal relation between space and time. By applying this relation we have been able to construct a new inductive science on a specific and definite basis. Our problem now is to bring this development to the attention of the scientific community. Here we encounter the same obstacle that always faces innovators. Those who take a superficial look at the new development see only the fact that it challenges some popular ideas. They hold up their hands in horror and say: “These people disagree with Einstein. They must be crazy.” I have yet to find any law of science that prohibits disagreeing with Einstein, but be that as it may, since this is such a common reaction, let us look at the situation and see just where this disagreement lies.

Einstein changed the course of science by developing his two theories of relativity—first the special theory, published in 1905, which applies only to uniform translational motion, and more than a decade later the general theory, which applies to accelerated motion. Peter Bergmann makes this comment about the relationship between the two:

It is quite true that the general theory of relativity is not consistent with the special theory any more than the special theory is with Newton’s mechanics—each of these theories discards, in a sense, the conceptual framework of its predecessor.

So it is impossible to agree fully with both the general theory and the special theory. Actually, few front-rank scientists have much confidence in the general theory in spite of the lip service that is paid to it by the scientific community at large.

Bryce DeWitt, one of the leading investigators in the gravitational field, which the general theory is supposed to cover, said categorically, “As a fundamental physical theory general relativity is a failure.” P. W. Bridgman predicted that “arguments which have led up to the theory and the whole state of mind of most physicists with regard to it may some day become one of the puzzles of history.” Thus, while we must concede that we disagree with the general theory on many counts, this is not much out of line with the most advanced scientific opinion.

Whether or not we disagree with the special theory, on the other hand, depends on just how this theory is defined. Bridgman comments that there is a tendency to “define the content of the special theory of relativity as coextensive with the content of the Lorentz equations.” P. K. Feyerabend, a prominent philosopher of science, puts it in this manner:

It must be admitted, however, that Einstein’s original interpretation of the special theory of relativity is hardly ever used by contemporary physicists. For them the theory of relativity consists of two elements: (1) the Lorentz transformations; and (2) mass-energy equivalence.

On this basis, we do not disagree with the special theory at all. We are in full agreement with both the Lorentz equations and the mass-energy equivalence. The conclusions that so many physicists have reached in accepting the mathematical relations and rejecting Einstein’s interpretations are the same conclusions that I have previously noted as applying to all inventive theories. Such theories are all mathematically correct and all conceptually wrong. Thus, if anyone actually examines the situation, instead of merely reacting emotionally, he will find that we disagree with Einstein’s relativity theories only in the same way that general scientific opinion also disagrees with them.
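
For reference, the two elements Feyerabend lists can be written out explicitly. In standard textbook form (common to every interpretation), for relative velocity v along the x axis:

    \[
      \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
      x' = \gamma\,(x - vt), \qquad
      t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
      E = mc^2
    \]

It is with these relations, and not with any particular conceptual gloss placed on them, that the agreement described above lies.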

But we do not accept all of the unsubstantiated inferences that are currently being drawn from these theories, because our new development enables us to distinguish valid from invalid inferences. The existence of speeds greater than that of light is an outstanding example.

Earlier we examined the case of a particle accelerated to a very high speed by a presumably constant electrical force: its acceleration decreases at a rate which will reduce it to zero at the speed of light. Since Newton’s relation between force, mass and acceleration is merely a definition of force, the decrease in acceleration at high speeds must be due either to an increase in the mass or to a decrease in the force. There is no physical evidence to indicate which alternative is correct. Einstein simply assumed an increase in the mass. Our theoretical development now indicates that he made the wrong choice, and that the decrease in acceleration is actually due to a decrease in the effective force.

At the time Einstein made his choice there was nothing to indicate that it makes any real difference which of these alternatives is correct. Either one leads to some kind of a speed limitation. It is not likely, therefore, that Einstein gave the matter any extended consideration. But since our new development now indicates that speeds in excess of that of light definitely do exist, a review of the situation is obviously required. If Einstein’s assumption of an increase in mass were correct, the limit at the speed of light would be absolute, as the mass would be infinite at that speed. But on the basis of our finding that what actually takes place is a decrease in the effective force, the limit is not on the speed, but on the capability of the process. All that the experiments actually show is that it is impossible to accelerate a physical object to a speed greater than that of light by electrical means, a conclusion that we also reach theoretically. But this does not preclude acceleration to higher speeds by other means, such as powerful explosions.
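
The mathematics underlying this paragraph is not in dispute and can be sketched numerically. Taking relativistic momentum p = F·t for a constant applied force F (a minimal illustration of my own; the field strength is an arbitrary choice), the speed approaches that of light while the acceleration falls toward zero:

    import math

    c = 2.998e8          # speed of light, m/s
    m = 9.109e-31        # electron rest mass, kg
    F = 1.602e-19 * 1e6  # force on an electron in a 10^6 V/m field, N

    # From p = m*v / sqrt(1 - v^2/c^2) = F*t it follows that
    #   v(t) = (F*t/m) / sqrt(1 + (F*t/(m*c))^2)
    #   a(t) = (F/m) / (1 + (F*t/(m*c))^2)^1.5, which shrinks toward zero
    for t in (1e-9, 1e-8, 1e-7, 1e-6):          # elapsed time, s
        x = F * t / (m * c)
        v = (F * t / m) / math.sqrt(1.0 + x * x)
        a = (F / m) / (1.0 + x * x) ** 1.5
        print(f"t = {t:.0e} s   v/c = {v/c:.6f}   a = {a:.3e} m/s^2")

Whether the shrinking ratio of acceleration to applied force is read as a growing mass or as a shrinking effective force is precisely the conceptual choice at issue; the numbers are identical either way.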

By accepting Einstein’s denial of the existence of speeds greater than the speed of light as gospel that cannot be challenged, modern science has closed the door on the answers to some of the most significant problems of the present day. It is this mistake that has caused astronomy to become more fantastic than science fiction, with its neutron stars, black holes, white holes and all of the other extravagances. I have noted recently that quark stars have now joined this list. When the reciprocal relation between space and time is recognized, the need for all of this fictional science, as we may call it, is eliminated. The phenomena of the far-out astronomical regions can be explained on the same matter-of-fact basis that applies in our everyday world.
