Another Conference: New Directions in Modern Cosmology

After the successful AG meeting, next week we will be at another conference, “New Directions in Modern Cosmology”, to which Prof. Pavel Kroupa was invited. It will take place at the Lorentz Center in Leiden (in the Netherlands) from September 27 through October 1. As the title suggests, the workshop is about the growing number of observational challenges to LCDM cosmology. From the website:

This workshop concentrates on the discussion of recent cosmological observations which present challenges to the standard LCDM model. These observations include: the large scale flows, the sizes and amplitude of galaxy large scale structures, the systematic effects biasing the analysis of CMB data and the lack of large-angle correlations, the anisotropy of the Hubble flow, the evolution of galaxy size, and the failure to find the sub-halo building blocks left over from the primordial fluctuation spectrum.  Last and not least, it is disturbing that in the LCDM model 95% of the Universe have not been observed ‘directly’.

While each of these observations can be seen as an anomaly that the model would possibly explain, the bulk of them calls for a more careful analysis of the model foundations, particularly the amount and role of dark substances.

Note also our recent contribution “The standard model of cosmology” where similar problems and issues are raised independently.

As the topic is cosmology, the workshop mainly aims at a better understanding of the large-scale structure of the universe. However, it is becoming more and more apparent that the smaller scales will play an important role in the understanding of our cosmos, too. Thus, small-scale/Local Group cosmology is also represented at the conference (there is even a talk about planet formation). Pavel Kroupa will talk about “Local Group galaxies as critical tests of the contemporary cosmological model and its failure”, while Marcel S. Pawlowski will present a poster.

The diversity of speakers and participants will guarantee some interesting discussions, with the aim of making further steps towards an improved cosmological theory.

by Anton Ippendorf, Pavel Kroupa and Marcel Pawlowski (23.09.2010): “Another Conference: New Directions in Modern Cosmology” on SciLogs. See the overview of topics in The Dark Matter Crisis.

The standard model of cosmology

At the conference “Zooming in: The Cosmos at High Resolution” Prof. Matthias Bartelmann from Heidelberg University gave a didactically beautiful presentation about the standard cosmological model for the general public.

Here is his argument why it is believed that this model is an excellent description of the universe:

In 1915 Einstein published his Theory of General Relativity. A few years later, in 1922, Friedmann simplified the field equations by assuming the universe is homogeneous and isotropic. That is, he assumed the universe looks the same wherever you are (homogeneity) and in whichever direction you look (isotropy), as long as one does not look at the details. This is quite consistent with our observations today. The resulting Friedmann equations are the very basis of the modern standard cosmological model. Most solutions of the Friedmann equations predict an expanding or a contracting universe, depending on a set of initial numbers, such as the total amount of matter in the universe.
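For reference, the first Friedmann equation in its modern form, with scale factor a, matter density ρ, curvature constant k and cosmological constant Λ, reads:

```latex
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}
```

Whether the solution expands or contracts depends on the balance of the terms on the right-hand side, i.e. on the matter content, the curvature and Λ.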

In 1929 Edwin Hubble found that distant galaxies recede from us ever faster with increasing distance. This implied that only the expanding solutions of Friedmann’s equations were allowed as solutions for our universe.
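This velocity–distance relation is Hubble’s law,

```latex
v = H_{0}\, d ,
```

where H_0 is the Hubble constant, measured today to be roughly 70 km/s per Megaparsec.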

If the universe is expanding, then it must have had a very dense, hot past. The situation is similar to that of a gas which does not exchange heat with its surroundings: when this gas is compressed it heats up, and it cools down again when it is decompressed. This led to the realisation that under the very hot and dense early conditions Helium and Lithium must have been produced from the abundant Hydrogen nuclei by very much the same processes that take place in stars, where such conditions are still found today. In fact, about 25 per cent of the matter ought to end up being Helium. This Helium cannot come from stars, as most stars keep their synthesised Helium. The expectation from Big Bang nucleosynthesis that there ought to be 25 per cent Helium has been confirmed (although not perfectly). The hot dense phase also implies that there was a special time, about 400 000 years after the Big Bang, when light stopped interacting with the matter. This happened when the matter cooled down sufficiently to allow the protons to capture electrons such that neutral atoms formed, through which the photons could easily pass. This event should then lead to a background radiation, which we should see today in the form of these same photons at a very low energy because of the expansion of the universe (the light waves get “stretched” with the expansion). And yes, this background radiation has been found and is called the cosmic microwave background radiation (CMBR).
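The 25 per cent figure follows from simple nucleon bookkeeping. A minimal sketch, assuming (as in standard Big Bang nucleosynthesis) that essentially all neutrons end up bound in Helium-4 and that the neutron-to-proton ratio at the time of nucleosynthesis is roughly 1/7:

```python
# Back-of-the-envelope estimate of the primordial helium mass fraction.
# Illustrative bookkeeping only, not a full nucleosynthesis calculation.

def helium_mass_fraction(n_over_p):
    """Each He-4 nucleus binds 2 neutrons and 2 protons, so (assuming
    all neutrons end up in He-4) the mass locked in helium is twice the
    neutron mass fraction."""
    return 2.0 * n_over_p / (1.0 + n_over_p)

# After freeze-out and some neutron decay, n/p is roughly 1/7:
print(round(helium_mass_fraction(1.0 / 7.0), 2))  # -> 0.25
```

Nucleons and reaction rates are treated here in the crudest possible way; the agreement with the observed ~25 per cent nevertheless shows why this counts as a success of the hot Big Bang picture.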

It also became clear from the observation of the CMBR that at about 400 000 years after the Big Bang the universe was already extremely homogeneous. One place looked just like the other. But the light could not have travelled more than a distance of 400 000 light years, which was much smaller than the extent of the universe at that time. Therefore, one part of the universe was not able to “know” at what temperature the other part was. But, given that they are so similar, they must have “known” of each other. It is in fact impossible to explain why all parts of the universe have very similar properties if there was no interaction between them. And so Inflation was invented as a hypothetical “trick”, according to which the universe was very much smaller just after the Big Bang, such that each part was able to exchange information with every other part, but then explosively increased in size well before the time when the photons decoupled from the matter, i.e. when the CMBR became observable.

Later measurements of this CMBR showed that it is indeed amazingly equal in all directions, but that it shows minuscule fluctuations. These fluctuations can be interpreted as the early seeds from which galaxy clusters and galaxies formed at later times. With this interpretation these fluctuations in the CMBR can only be mapped to the present-day distribution of matter if there is Dark Matter. Without the existence of a new form of matter, which only interacts gravitationally, the normal matter would not be able to fall together to form the galaxies we observe today given the minuscule fluctuations in the observed CMBR. The Dark Matter increases the gravitational force within the over-dense regions and thereby speeds up structure growth just right. If there were no Dark Matter, then the fluctuations in the CMBR would have to have been many orders of magnitude larger than is actually observed for them to be able to grow to the galaxy clusters and galaxies of the present-day universe.
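The order of magnitude behind this argument can be illustrated with a one-line estimate. In a matter-dominated universe, linear density fluctuations grow roughly in proportion to the scale factor, i.e. by a factor of about 1100 since the CMBR was emitted (the numbers below are illustrative, not a rigorous perturbation calculation):

```python
# Linear density perturbations in a matter-dominated universe grow
# roughly as delta ~ a = 1/(1+z).  Starting from the observed CMBR
# fluctuation level at z ~ 1100, ordinary matter alone reaches only:
delta_cmb = 1e-5        # relative fluctuation amplitude seen in the CMBR
growth_factor = 1100    # growth from z ~ 1100 to today (delta ~ a)
delta_today = delta_cmb * growth_factor
print(delta_today)      # about 0.01 -- far short of delta ~ 1 needed for galaxies
```

Dark matter evades this shortfall because its perturbations decouple from the photons earlier and have more time to grow before the normal matter falls into them.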

From Einstein’s field equations it can be derived that light travelling past galaxy clusters is bent. The observed gravitational lensing effect indeed confirms that Dark Matter is needed in addition to the normal matter, in order to explain the observed bending of light using Einstein’s field equations. Also, the amount of Dark Matter needed from lensing is similar to that needed to make the structures grow just enough from the CMBR to account for the observed structures in the form of galaxy clusters and galaxies.

The observations of distant supernovae of “type Ia” (most probably exploding white dwarfs), however, unexpectedly showed in 1998–1999 that the universe was in fact larger, at a given time, than thought. That is, the supernovae are further away. This is only reconcilable with the expanding Friedmann model if the universe is forced to expand ever faster, i.e., if it is now just at the start of a new inflationary period. This new inflation is taken to be due to Dark Energy, a form of vacuum energy that forces space to expand ever faster. This assumption is mathematically the same as introducing the cosmological constant Lambda (L).
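In the LCDM model this Dark Energy is described by an equation of state linking its pressure p to its energy density ρ:

```latex
p = w\,\rho c^{2}, \qquad w = -1
  \;\Longrightarrow\; \rho_{\Lambda} = \text{const.}
```

With w = -1 the energy density stays constant as space expands, which is mathematically equivalent to Einstein’s cosmological constant Λ.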

The above is thus a brief description of the contemporary standard cosmological model, also called the concordance cosmological model, or the LambdaCDM (LCDM) model. CDM stands here for “Cold Dark Matter”, which means that the dark matter postulated to exist (see the arguments above) is composed of elementary particles with a large mass such that they have small velocities (they are “cold”), so that they can aggregate to galaxy-sized dark matter “halos” or clumps. The LCDM model can thus be seen to have success in accounting for many observed phenomena, even if not perfectly. Still, cosmologists state that we live in the era of high-precision cosmology because the many parameters that define the LCDM model can be measured precisely with modern methods.

But, it has at least three important problems:

I) It comes at a very heavy price! By insisting that Einstein’s field equations are valid and making some additional assumptions, cosmologists have to postulate the existence of (1) inflation, (2) Dark Matter, (3) Dark Energy. None of these are understood. Together, Dark Matter and Dark Energy amount to about 95 per cent of the energy content of the universe. That is, the mathematical description of the universe, in terms of Einstein’s equations, implies that this mathematical model is based almost exclusively on stuff we have absolutely no clue about. Imagine an astrophysicist trying to publish a research paper on a star the properties of which depend to 95 per cent on pieces of physics which are completely unknown. Furthermore, Dark Energy is vacuum energy but with a special type of equation of state. This equation of state “tells” the vacuum how to react when it is put under pressure, for example. In the LCDM model it means that the energy density is always the same, in each volume element of space. Since the universe is expanding, this implies that ever more energy is created with the creation of space, essentially out of nothing. But perhaps this is due to energy slipping into our universe from a higher-dimensional space-time? Perhaps. But then energy conservation, which has been a fundamental principle in physics so far, becomes an issue in general.
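The “energy out of nothing” point is easy to make quantitative. With a constant vacuum energy density, the energy contained in a comoving region grows with the physical volume of that region; a toy illustration in arbitrary units:

```python
# With a constant vacuum energy density rho_lambda (the w = -1 case),
# the energy in a comoving box of side a*L grows as the cube of the
# scale factor a.  Units are arbitrary; this is only an illustration.
RHO_LAMBDA = 1.0  # constant by assumption

def vacuum_energy_in_box(a, side=1.0):
    """Energy = (constant density) * (physical volume of the box)."""
    return RHO_LAMBDA * (a * side) ** 3

# Doubling the scale factor multiplies the vacuum energy by 8:
print(vacuum_energy_in_box(2.0) / vacuum_energy_in_box(1.0))  # -> 8.0
```

Matter, in contrast, dilutes as 1/a^3, so its energy in the same comoving box stays constant; it is the vacuum term whose total energy grows without bound.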

Also, astrophysicists actually do not understand how supernovae of type Ia (SNIa) work. There are variations among SNIa which are corrected for by using an observed correlation between the total energy and the time dependence of the luminosity of SNIa, but this correlation is not understood theoretically. This makes the usefulness of SNIa as standard candles for distance measurements questionable. This was also highlighted by Prof. Wolfgang Hillebrandt in his plenary talk at the same conference, where he explained that many SNIa seem to happen before the Chandrasekhar mass is reached, that there are several different ways to make SNIa, and that this results in a possibly huge scatter in the luminosities. Reducing this scatter artificially by using the supernovae as standard candles will thus have unknown consequences for cosmology.

II) There are unsolved problems at the fundamental level. An excerpt from the abstract of a paper by Prof. Tom Shanks published in 2005:

We note that the standard LCDM cosmological model continues to face fundamental problems. First, the model continues to depend wholly on two pieces of undiscovered physics, namely dark energy and cold dark matter. Then, the implied dark energy density is so small that it is unstable to quantum correction and its size is fine-tuned to the almost impossible level of one part in ~10^102; it is also difficult to explain the coincidence between the dark energy, dark matter and baryon densities at the present day. Moreover, any model with a positive cosmological constant also creates fundamental difficulties for superstring theories of quantum gravity. …

III) The model makes sufficiently good predictions to be testable as to how galaxies ought to look, how the dark matter ought to be distributed in galaxies, around galaxies, and also in groups and clusters of galaxies. Unfortunately, this is where the model breaks down completely. Galaxies are simply not describable as objects that are dominated by dark matter. One could postulate the existence of a Dark Force which makes sure, by some principle we do not understand at all, that the Dark Matter always adjusts itself to do what the normal matter wants it to do. In this case we would have Einstein’s field equations with

  1. inflation,
  2. Dark Matter,
  3. Dark Energy and the
  4. Dark Force.


Summarising the above problems:

Assuming Einstein’s field equations to be valid with some additional assumptions (which we will deal with in the future) leads to the following situation concerning the mathematical modelling of our Universe, given the observational data:

  1. The early expansion history seems to be wrong, unless inflation (which is not understood) is added.
  2. Given this, structure formation is not understood (it happens too slowly) requiring the addition of Dark Matter (which is not understood).
  3. With these additions, it turns out that the universe is expanding at the wrong rate today, unless Dark Energy (which is not understood) is added.
  4. Given all of the above, it turns out that on scales smaller than a few Mpc the observed structures do not correspond to the mathematical expectation, requiring a Dark Force to couple the baryons to the Dark Matter in some unknown way.

At this point it might be useful to reconsider gravitational physics to see if a simpler solution might not be available after all.

One aspect of this failure of the standard model is embodied in the Fritz Zwicky Paradox (see FZ Paradox I and FZ Paradox II). The usual argument invoked to avoid this failure is to say that the physics of normal matter is too badly understood to be able to test the model on the scales of galaxy groups and below. It is usually stated that the processes governing star formation and gas physics are far too complex. However, these are false statements, because the physical processes of normal matter are well enough understood to make this test of the standard cosmological model stringent on galaxy scales. Sometimes one also hears the statement that the standard “model” is only a fit to the data and that it cannot be used to make predictions. But this would then mean that the standard model of cosmology were not physics.

The standard model of cosmology thus fails because galaxies do not work (this is the title of a future contribution in this series), and, ignoring this, it is nevertheless unsatisfying because of problems I and II above.

by Anton Ippendorf, Pavel Kroupa and Marcel Pawlowski (19.09.2010): “The standard model of cosmology” in “The Dark Matter Crisis – the rise and fall of a cosmological hypothesis” on SciLogs. We thank Joerg Dabringhausen for useful comments on the first version of this text. See the overview of topics in The Dark Matter Crisis.

Our contributions at the annual meeting of the "Astronomische Gesellschaft"

This coming week we will have the annual meeting of the “Astronomische Gesellschaft” (the German astronomical society) in Bonn. The conference, with the topic “Zooming in: The Cosmos at High Resolution”, starts on Monday (September 13) and will last until Friday. You can find the schedule here. In the afternoons there are a number of splinter meetings on different astronomical topics. For those participating in the meeting and interested in the Dark Matter Crisis, we would like to point out some of our contributions. Of course we will also be around during the week, so feel free to approach us for discussions.

  • Jörg Dabringhausen will present a poster with the title: “The failure of dark-matter cosmology – Towards a new paradigm of structure formation”, based on our recent paper. This poster can be downloaded now. Feel free to discuss the poster/paper with him or any of us.
  • Marcel Pawlowski will talk about “The Disc of Satellites: The Origin of Counter-Rotating Tidal Debris”, showing that the MW satellites are distributed in a disc-like structure that is not expected from cosmology, that this structure is rotationally supported and that a tidal origin of the MW satellite galaxies is plausible, in contrast to a cosmological one. The talk is part of the Magellanic Clouds splinter meeting, and is scheduled for Wednesday, September 15 at 17:20.
  • Right after that, at 17:40 on Wednesday, Pavel Kroupa gives a talk titled “Local Group tests of concordance cosmology” about our recent paper at the same splinter meeting. As this is the last talk of the day, we will be available for discussions afterwards.

by Anton Ippendorf, Pavel Kroupa and Marcel Pawlowski (08.09.2010): “Our contributions at the annual meeting of the Astronomische Gesellschaft” in “The Dark Matter Crisis – the rise and fall of a cosmological hypothesis” on SciLogs. See the overview of topics in The Dark Matter Crisis.

II. The Fritz Zwicky Paradox and its solution

What is the Fritz Zwicky Paradox?

In our previous contribution we gave three historical examples of failures of Newtonian mechanics or dynamics. These failures implied quantum mechanics, special and general relativity. While not evident at the start, each of these breakthroughs led, many decades later, to very major technological advances, with industries worth trillions of dollars today. In the present era physics is experiencing the fourth failure. But how does this failure arise?

The Fritz Zwicky Paradox is one aspect of the fourth failure. The fourth failure means that cold or warm dark matter particles, even if evidence were to be found which may be interpreted to be due to some particle decay (e.g. by the Alpha Magnetic Spectrometer, AMS), cannot play a significant role in cosmology: if additional massive particles outside the Standard Model of particle physics were found to exist, then they must be too rare or too short-lived to affect the mass budget of a galaxy. That is, concordance cosmology cannot be the correct description of the universe.

This statement goes massively against the widely held “belief” that the dynamics of the universe is driven largely by these particles, because in the presently widely accepted cosmological model such (heavy) elementary particles are understood to dominate all forms of matter.

But why do we have this “standard” or “concordance” dark-matter-based cosmological model?

It rests on two fundamental assumptions:

First: The Theory of General Relativity is correct on galactic and cosmological scales.

Second: All matter is created during the Big Bang.

This leads to the concordance cosmological model, according to which the cold or warm dark matter particles clump in an expanding universe, gathering up the normal matter which we experience. The formation and evolution of galaxies are defined, in this model, by the collisions of myriads of small dark-matter clumps, some of which contain gas leading to the formation of stars. The concordance cosmological model is thus, strictly logically, tied to very specific predictions concerning what galaxies look like, and also which types of galaxies ought to exist. And this is where the Fritz Zwicky (FZ) Paradox spoils the show:

The FZ Paradox arises from two observations which the famous Caltech astronomer Fritz Zwicky made:

1) 1937: he notes that galaxies in the Coma galaxy cluster must be about 500 times heavier than judged from their light emission (e.g. Zwicky 1937). This is his famous conjecture that there must be cold or warm dark matter.

2) 1956: he notes that when galaxies interact (e.g. when they collide), the expelled matter can re-condense in regions and form new smaller (dwarf) galaxies. This is his famous conjecture that tidal-dwarf galaxies can form out of the collisional debris of other galaxies (Zwicky 1956, p.369, point 4).

Astronomical observations have traditionally been interpreted as being strongly supportive of  both of these Fritz Zwicky Conjectures:

The first conjecture is supported by stars moving much faster about their galaxy than would be the case if there were no additional (dark) matter to hold them in (see galaxy rotation curves). And, it is also supported by the motions of galaxies in galaxy clusters, as noted by Fritz Zwicky. The overall cosmological evolution also supports this, but only if it is additionally speculated that Dark Energy exists. One needs to postulate the existence of Dark Energy to pull the matter apart again through an accelerating expansion in order to match, roughly, the large-scale structure that is observed. Without the speculative Dark Energy the clumping of Dark Matter would proceed too fast, and the cosmological model, based on the above two fundamental assumptions, would fail. Most cosmologists would also claim that Dark Energy is suggested by supernova Ia (SNIa) distances.
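The rotation-curve argument behind the first conjecture is elementary Newtonian dynamics: a star on a circular orbit of radius r about an enclosed mass M(<r) obeys

```latex
\frac{v_{c}^{2}}{r} = \frac{G\,M(<r)}{r^{2}}
  \quad\Longrightarrow\quad
  v_{c}(r) = \sqrt{\frac{G\,M(<r)}{r}} ,
```

so a rotation curve that stays flat out to large r implies M(<r) ∝ r, i.e. far more mass than the visible matter provides, if Newtonian gravity holds.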

The second conjecture has been observed many times in real galaxies, and also in computer simulations. The formation of tidal dwarf galaxies is an inherent, strictly necessary process following from energy and momentum conservation. Young tidal dwarf galaxies are small disk galaxies with star formation, and there is no viable physical mechanism which destroys them (as shown by Recchi et al. 2007 and Kroupa 1997), except if they end up on a plunging orbit through the central region of their host galaxy. But this is very unlikely. The typical tidal dwarf galaxy finds itself on an orbit about the larger post-merger host galaxy, and it may lose its gas because of ram-pressure stripping. A young tidal dwarf galaxy can thus evolve to a dwarf elliptical galaxy if it remains quite close to a larger host galaxy. Tidal dwarf galaxies that are lost to “outer intergalactic space” remain dwarf galaxies with gas and continue forming stars, just like any other small galaxy (Hunter et al. 2000).

Calculations:

Assuming both Fritz Zwicky conjectures to be true, Okazaki & Taniguchi (2000) calculated the number of tidal dwarf galaxies that should exist in a standard cosmological universe. This can be done because the frequency with which galaxies of different types (e.g. spiral galaxies, elliptical galaxies) interact can be computed from the standard-cosmological merger tree. This merger tree is a logical outcome of the dark-matter standard cosmological model, and tells us how the small dark matter clumps coalesce to form the larger dark matter halos within which galaxies such as the Milky Way reside. Modelling of star-formation processes then tells the astronomer which dark halos host what type of galaxy. Okazaki & Taniguchi set up rate equations, for example S + S -> S0 + ndE, where S is a spiral galaxy (containing gas), S0 is a lenticular galaxy and ndE are n dwarf elliptical galaxies of tidal origin.
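As a toy illustration of such bookkeeping (all numbers below are illustrative placeholders, not the rates or multiplicities used by Okazaki & Taniguchi):

```python
# Toy version of the rate equation  S + S -> S0 + n dE :
# every spiral-spiral merger removes two spirals, leaves one lenticular
# and produces n tidal dwarf ellipticals.  The inputs are placeholders.

def evolve(n_spirals, n_mergers, dwarfs_per_merger):
    """Return (remaining spirals, lenticulars, tidal dwarf ellipticals)
    after n_mergers spiral-spiral encounters."""
    n_s0 = n_mergers
    n_de = n_mergers * dwarfs_per_merger
    n_s = n_spirals - 2 * n_mergers
    return n_s, n_s0, n_de

print(evolve(100, 10, 5))  # -> (80, 10, 50)
```

The real calculation folds merger rates from the cosmological merger tree into such equations over cosmic time; the toy version only shows why the dE count grows rapidly with the interaction rate.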

The merger tree also tells us how many dark-matter dominated satellite galaxies each larger galaxy must have, and how they are distributed.

The result, obtained by Okazaki & Taniguchi, is that the standard cosmological model makes so many tidal dwarf galaxies that all dwarf elliptical galaxies in galaxy groups and galaxy clusters are easily accounted for. There is therefore no room for the expected dark-matter dominated dwarf galaxies.

This can be stated differently: Assuming there are two types of dwarf galaxy:

Type A is the traditional dark-matter dominated galaxy, which the standard model expects to be there in huge numbers as a result of the merger tree. For example, the Milky Way galaxy ought to have thousands of dark-matter satellites swirling about it.

Then there is the type B dwarf galaxy, which is the tidal dwarf galaxy.

We can go to a telescope and observe the dwarf galaxies in galaxy clusters and around our own Galaxy and in our Local Group. It then turns out that Okazaki & Taniguchi calculate that the number of type B galaxies is enough to account for all observed dwarf galaxies in these environments.

One could now argue that type A and type B co-exist. That is, that there are two types of dwarf galaxy. But this argument is difficult to uphold given the simple fact that all observed, that is real, dwarf galaxies in galaxy clusters and in the Local Group of galaxies look virtually identical, in the sense that one cannot identify two distinct populations of dwarf galaxies. That some dwarf galaxies appear gas-rich (i.e. dwarf irregular galaxies), and others as gas-poor, elliptical dwarf galaxies, is easily understood through the former getting rid of their gas when closer to a larger galaxy (the gas is essentially blown away by the larger galaxy).

Therefore, there seems to be only one kind of dwarf elliptical galaxy. Since it is not possible to destroy the tidal dwarf galaxies, and their formation is an established fact of standard physics, one comes to the Fritz Zwicky Paradox:

By assuming that dark matter drives structure formation we end up knowing that all dE galaxies ought to be of tidal origin. But, astronomers have identified all known dE galaxies as dark-matter dominated sub-structures, which are predicted by the dark-matter based concordance cosmological model to be present in very large numbers. Thus a contradiction emerges, because a given dE galaxy cannot be both types at the same time, and the observed satellite galaxies are way too few to account for both satellite galaxy populations predicted by the concordance cosmological model.

The dark-matter universe thus leads to two types of dwarf galaxy populations, each supposedly abundant, but only one type is observed. In fact, the number of observed dwarf galaxies is nicely consistent with the number expected if they are all tidal dwarf (type B) galaxies. On the other hand, the existence of dark-matter dominated galaxies (of type A) strictly depends on the postulated existence of cold or warm dark matter, and the number of such dwarf galaxies is vastly larger than the number of observed dwarf galaxies. This is the “missing satellite problem”, which the community has agreed to consider solved by stating that there are partially unknown physical processes which keep most small dark matter halos dark. According to Fritz Zwicky’s second conjecture, this agreement is falsified, because tidal dwarf galaxies must also exist, thereby leaving essentially no room for the type A dwarf galaxy and making the missing satellite problem catastrophic.


Additional independent arguments:

There are in fact other arguments that, independently, suggest that dwarf galaxies in galaxy groups or galaxy clusters are much more naturally interpreted to be tidal dwarf galaxies. These are put together in this poster: Dabringhausen et al., 2010 (presented at the Annual Meeting of the German Astronomical Society, September 2010 at Bonn University). The poster is based on the more extensive research paper by Kroupa et al. (2010).

In brief: the additional arguments pointing at the dE-type galaxies as being tidal dwarf galaxies are as follows:

The satellite galaxies of the Milky Way are arranged in a huge disk-like structure which is grossly incompatible with the more spherical distribution expected if the satellite galaxies were dark-matter dominated dwarfs (of type A). And the major galaxies in the Local Group show a very well defined bulge-mass versus number-of-satellites correlation. This correlation is not expected to exist in standard dark matter cosmology, because the infall of low-mass dark-matter dwarf galaxies (of type A) is a stochastic (random) process and is not related to the star formation at the very centre of the huge hosting dark matter halo. In fact, we know that the same type of dark matter halo, if it were to exist, hosts large galaxies with and without bulges, while the number of dark matter satellite galaxies depends on the mass of the dark matter host halo only. And, last but not least, all attempts to understand the brightness of the observed satellite galaxies in terms of how they form stars in a dark-matter satellite halo fail rather massively: if the dark-matter satellite halos were there, then a more massive halo ought to, on average, contain a brighter satellite, simply because it can hold more of the gas before it disperses due to internal and external physical processes. But the real satellite galaxies, when interpreted to be in dark matter sub-halos, fail to comply with this very basic and robust prediction. The masses of the satellite dark-matter halos are therefore unphysical.


How can one solve the Fritz Zwicky Paradox?

All three observed results (disk-like arrangement, the correlation and no dark-matter satellite halos) are instead naturally explained if the satellites are tidal dwarf galaxies (type B). And this is trivial to understand: in a galaxy–galaxy interaction matter is expelled along special directions, and the tidal dwarf galaxies that form in these tidal tails then retain a common sense of orbit forming a disk-like structure. And, bulges form in galaxy interactions, leading to a natural correlation between the bulge mass and the number of satellites. This correlation is a measure of the strength or gas-richness of the galaxy–galaxy encounter (more gas => more star formation and more satellites). That the real satellite galaxies appear to be dark matter dominated then naturally emerges if an observer sees the satellites through Newtonian eyes, but if the satellites in truth obey non-Newtonian dynamics and if they are affected by the Galactic tidal field. This is nicely discussed by McGaugh & Wolf (2010).


The history of the early Milky Way

For the Milky Way galaxy this means that about 10–12 Gyr ago it had a major encounter with some other young galaxy. This encounter happened nearly perpendicularly to the present-day disk of the Milky Way, because the other galaxy came from a minor filamentary structure which has been identified by Metz et al. (2009). This encounter, being very gas-rich as both the Milky Way and the other galaxy were very young, led to gas-rich tidal tails within which star formation must have happened. Also, this encounter led to gas falling towards the centre of the young Milky Way, triggering a major episode of star formation there. Today we see the ancient remnants of this early violent event, in the form of the central bulge of the Milky Way and the ancient satellite galaxies that still orbit the Milky Way at large distances and which are arranged in a disk-like structure.


What does the Fritz Zwicky Paradox imply for fundamental physics?

Given that the observed dwarf galaxies are best identified as tidal dwarf galaxies, the internal motions of their stars are not compatible with Newtonian/Einsteinian dynamics. Actual observational evidence for this has been found: the three tidal dwarf galaxies for which observations have been obtained show the same rotation-curve behaviour as normal galaxies. That is, at larger radii, the stars and gas are orbiting about the galaxies too fast (Gentile et al. 2007). Because tidal dwarf galaxies cannot have much dark matter (they are too small to capture it, even if it were to exist), these motions of the stars can only be explained by violating the first fundamental assumption above. That is, General Relativity needs to be either discarded, or modified, in order to explain the motions of bodies on the scales of galaxies and beyond.

This is the “fourth failure of Newton”. Indeed, degrading cold or warm dark matter to being insignificant for cosmology, or even to not existing at all, leads to rich new gravitational (non-Newtonian/Einsteinian) dynamics (e.g. MOND or MOG), and to a new and vastly superior understanding of galaxies. This is the subject of our research paper (Kroupa et al. 2010). For those interested, Prof. Milgrom provides a Pedagogical Review of MOND and Prof. McGaugh has prepared the MOND Pages. A gateway to Prof. Moffat’s MOG is found here. Other non-dark-matter approaches via higher-order curvature theories have been discussed, among others, by Capozziello et al. (2004) and Martins & Salucci (2007).
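For orientation: in MOND, below a characteristic acceleration a_0 ≈ 1.2 × 10^-10 m/s^2, the effective acceleration a is related to the Newtonian one g_N by

```latex
a = \sqrt{g_{N}\, a_{0}}
  \quad\Longrightarrow\quad
  v^{4} = G\,M\,a_{0} ,
```

which yields asymptotically flat rotation curves (v independent of radius) from the visible mass M alone, without any dark matter.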

Currently very exciting research is being done in this “non-dark-matter” field, albeit with the very serious difficulty of scarce funding, because virtually all public funding for cosmology is allocated to support research in the standard dark-matter-based or concordance cosmological model. But galaxies are, as far as the data allow us to tell, composed only of normal matter. Given the failure of the dark-matter ansatz, it becomes apparent that Dark Energy is most likely unrealistic too (see “Is LambdaCDM or standard cosmology a 4th order speculation, and ought it be further researched ?“). Research on alternative cosmological models will certainly lead to new physics, but a deep understanding may only emerge once the physics of the vacuum is understood.

by Anton Ippendorf, Pavel Kroupa and Marcel Pawlowski (08.09.2010, “The Fritz Zwicky Paradox and its solution” in “The Dark Matter Crisis – the rise and fall of a cosmological hypothesis” on SciLogs). See the overview of topics in The Dark Matter Crisis.

I. The Fritz Zwicky Paradox: the fourth failure of Newton (MOND, MOG, etc)

The radical conclusion that Cold- or Warm-Dark-Matter cosmology ought to be discarded as a viable description of physical reality would imply, as a strict logical process, that this physical reality must be non-Newtonian in a certain physical regime which is found on galactic scales and beyond. This goes hand-in-hand with discarding Newtonian dynamics in these regimes.

Indeed, changing dynamics away from Newtonian dynamics is actually already a very well established tradition in physics and results from the desire to understand how objects with “mass” move about and influence each other in our four-dimensional “space-time” world.

Before proceeding, the distinction between mechanics and dynamics in physics ought to be clarified: Mechanics is the science “concerned with the set of physical laws governing and mathematically describing the motions of bodies and aggregates of bodies geometrically distributed within a certain boundary under the action of a system of forces.” Dynamics, on the other hand, is the study of the “causes of motion and changes in motion”. In what follows we do not distinguish these two, but interchangeably refer to Newtonian dynamics and Newtonian mechanics (or in short just Newton) as meaning the classical mathematical description of the motions and changes of motions. These are Newton’s three laws and his 1/r^2 law of gravitation.

Now, let’s take a time machine and go back about 120 years. On arrival we find that, after centuries of applying pure Newtonian mechanics to describing natural phenomena highly successfully indeed, and after a brilliant history of applying this theoretical knowledge to practical utilisation (the industrial revolution, planetary motions, long-range travel with trains pulled by steam engines, building ever bigger and more powerful machines), problems had begun to emerge:

 

The first failure of Newton (on small spatial scales)

The first famous example was the breakdown of Newtonian mechanics on small scales.

The historical events leading to the full-fledged theory of (non-Newtonian) Quantum Mechanics we know today appear to share some parallels with our problem at hand (speculative dark matter versus non-Newtonian gravitational dynamics):

As noted here:

“In 1802, William Wollaston in England had discovered that in fact the solar spectrum itself had tiny gaps – there were many thin dark lines in the rainbow of colors. These were investigated much more systematically by Joseph von Fraunhofer, beginning in 1814. He increased the dispersion by using more than one prism. He found an “almost countless number” of lines.”

“Johann Balmer, a math and Latin teacher at a girls’ school in Basel, Switzerland, had done no physics before, and made his great discovery (in about 1885) when he was almost sixty. He decided that the most likely atom to show simple spectral patterns was the lightest atom, hydrogen. Angstrom had measured the four visible spectral lines to have wavelengths 6562.10, 4860.74, 4340.1 and 4101.2 in Angstrom units (10^-10 meters).” Balmer derived an empirical formula describing the wavelengths of the spectral lines of the hydrogen atom.
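Balmer’s formula reproduces all four measured wavelengths from a single constant:

```latex
% Balmer's empirical formula (1885) for the visible hydrogen lines:
\lambda = B\,\frac{n^{2}}{n^{2}-4}\,, \qquad n = 3, 4, 5, 6,
% with the single constant B \approx 3645.6 Angstrom.
% n=3 gives 6562 A, n=4 gives 4861 A, n=5 gives 4340 A, n=6 gives 4101 A,
% matching Angstrom's measurements quoted above.
```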

These were thus the first empirical hints for quantisation of energy.

In 1901 Max Planck solved the long-standing problem of explaining the shape of the black-body energy spectrum.
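Planck’s solution required that energy be exchanged in discrete quanta, E = hν, which yields the spectral energy distribution of a black body:

```latex
% Planck's law (1901) for the spectral radiance of a black body at temperature T:
B_{\nu}(T) = \frac{2 h \nu^{3}}{c^{2}}\,
             \frac{1}{e^{h\nu / k_{\mathrm{B}} T} - 1}
% The quantisation assumption E = h\nu removes the classical "ultraviolet
% catastrophe"; for h\nu << k_B T the formula recovers the classical
% Rayleigh-Jeans result.
```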

“At first Planck considered that quantisation was only ‘a purely formal assumption … actually I did not think much about it…’; nowadays this assumption, incompatible with classical physics, is regarded as the birth of quantum physics and the greatest intellectual accomplishment of Planck’s career.”

Balmer’s empirically derived formula (with a constant that had no explanation at that time) was later explained by Niels Bohr’s model of the hydrogen atom of 1913, which showed that electrons are arranged in discrete shells about the nucleus.
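In the Bohr model the electron can only occupy discrete energy levels, and the Balmer series corresponds to transitions down to the second level:

```latex
% Bohr model (1913): quantised energy levels of the hydrogen atom,
E_{n} = -\frac{13.6\ \mathrm{eV}}{n^{2}}\,, \qquad n = 1, 2, 3, \ldots
% A transition from level n > 2 down to level 2 emits a photon with
\frac{1}{\lambda} = R_{\mathrm{H}} \left( \frac{1}{2^{2}} - \frac{1}{n^{2}} \right),
\qquad R_{\mathrm{H}} \approx 1.097 \times 10^{7}\ \mathrm{m^{-1}},
% which is exactly Balmer's formula, with his constant given by B = 4 / R_H.
```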

Albert Einstein’s explanation of the photoelectric effect in 1905 further gave strong evidence for the quantum nature of matter and energy.

These discoveries show how the long-cherished concept of a Newtonian world broke down fundamentally on small scales. The (nowadays famous) scientists of the day had bits and pieces of partially computable theoretical understanding (e.g. Planck’s spectral energy distribution, Bohr’s atomic model, Balmer’s spectral line rules). But the Theory of Quantum Mechanics we know today did not exist yet. This theory was developed largely in the late 1920s by Werner Heisenberg and Erwin Schroedinger.

Note that this failure of Newton implies that particles can “magically” appear at places where Newton would not have allowed them. A physicist stating this might be called nuts, but tunneling is in fact an established physical process of quantum mechanics, and one that happens to be quite fundamental to important technologies we use without even being aware of it.

Thus we had the following historical development away from Newtonian mechanics to Quantum Mechanics:

  • 1802 – spectral lines exist – William Wollaston.
  • 1885 – Balmer lines – Johann Balmer.
  • 1901 – Black body spectrum explained by quantisation of energy – Max Planck.
  • 1905 – Photoelectric effect as a quantum effect – Albert Einstein.
  • 1913 – mathematical model of Hydrogen atom based on quantised states – Niels Bohr.
  • late 1920s – development of the mathematical theory of quantum mechanics – Erwin Schroedinger and Werner Heisenberg.

 

The second failure of Newton (at large velocities)

A famous case of the breakdown of Newtonian mechanics occurred at large velocities. This was first hinted at by the inexplicable discovery that the speed of light is constant, independent of the velocity of the light source (the Michelson-Morley experiment). Directly related to this, after a major research effort to constrain its properties, the Ether had to be abandoned as a physical reality. The outcome, in 1905, was the non-Newtonian Theory of Special Relativity by Albert Einstein. As we all know, this theory was not the final word in physics.

 

The third failure of Newton (at large gravitational accelerations)

The precession of Mercury’s perihelion is a famous example of the break-down of Newtonian dynamics within the Solar System. We read:

 “The anomalous rate of precession of the perihelion of Mercury’s orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier.”

and

“Einstein showed that general relativity agrees closely with the observed amount of perihelion shift. This was a powerful factor motivating the adoption of general relativity.”

In strong space-time curvature, space and time are distorted such that, as seen by a distant observer, local clocks run slower and lengths are contracted. This leads to objects moving on different trajectories than an observer interpreting the world through Newtonian eyes would expect. The Theory of General Relativity was able to precisely account for Mercury’s perihelion shift, and also correctly predicted the deflection of light by the Sun.
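The general-relativistic correction to Mercury’s orbit can be stated compactly:

```latex
% General-relativistic perihelion advance per orbit, for a planet with
% semi-major axis a and eccentricity e orbiting a central mass M:
\Delta\phi = \frac{6 \pi G M}{c^{2}\, a\, (1 - e^{2})}
% For Mercury this amounts to about 43 arcseconds per century -- precisely
% the anomalous residual that Le Verrier's Newtonian accounting of
% planetary perturbations could not explain.
```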

 

The fourth failure of Newton (at small accelerations)

Today we have a situation similar to the first case above, but now in the realm of very small accelerations (i.e. vanishing space-time curvature); however, the job of finding the underlying mathematical theory is not yet done:

The Fritz Zwicky Paradox arises under the hypothesis that (1) Newtonian/Einsteinian dynamics is valid on galaxy-cluster scales (Zwicky 1937) and that (2) tidal dwarf galaxies are formed in galaxy encounters (Zwicky 1956). Both together lead to a logical inconsistency. This Fritz Zwicky Paradox can only be solved by discarding the assumption that Newtonian dynamics is valid (Kroupa et al. 2010).

Note that Bekenstein’s TeVeS theory includes Einstein’s Theory of General Relativity, Milgrom’s dynamics and Newtonian dynamics as limiting cases in the relevant physical conditions (e.g. TeVeS gives normal Newtonian dynamics on Earth, but leads to MOND dynamics in the outer regions of galaxies and to General Relativity near black holes).

Today it is known that MOND (Modified Newtonian Dynamics, e.g. Milgrom & Bekenstein 1984) or some other modification of gravity such as MOG (Modified Gravity, Moffat 2006) are the preferred descriptions of galactic dynamics in the weak-field limit (small curvature case). Does this mean that gravity per se differs from the Newtonian law (i.e. space-time curvature as resulting from General Relativity), or does it mean that the principle of equivalence between inertial and gravitating mass is violated in the regime of very small space-time curvature (see Bekenstein & Milgrom 1984)? We do not know.
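Milgrom’s modification can be written as an interpolation between the Newtonian and the low-acceleration regimes:

```latex
% MOND: the true acceleration a is related to the Newtonian acceleration a_N
% through an interpolating function mu and a new constant of nature a_0,
\mu\!\left(\frac{a}{a_{0}}\right) a = a_{\mathrm{N}}\,,
\qquad a_{0} \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}},
% with mu(x) -> 1 for x >> 1 (Newtonian limit) and mu(x) -> x for x << 1.
% In the deep-MOND limit a = sqrt(a_N a_0), so a circular orbit around a
% baryonic mass M obeys  v^4 = G M a_0 :  asymptotically flat rotation curves,
% with no dark matter required.
```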

 

Concluding Remarks

Thus, in the past it was realised that Newtonian mechanics and dynamics broke down on small spatial scales, at high velocities and at large gravitational accelerations. Today we are learning that it also breaks down when the accelerations are small. Note this progression of the time derivative of physical quantities describing our four-dimensional world (scale=zeroth derivative, velocity=first derivative and acceleration=second derivative). Does this mean something? No idea.

But, it is clear that Newton’s mechanics was found to not work when technological advance, based on Newtonian mechanics, allowed us to probe large velocities and small spatial scales. The new theories allowed us to build better machines (e.g. telecommunication being based on Einstein’s theories and computers and measuring devices being based on quantum mechanical concepts). The theoretical insights into the properties of matter via quantum mechanics and relativity gave us access to an unprecedented control over matter and to unimaginable amounts of energy.

Today we can probe regions of physics where space-time curvature is extremely small finding that, again, Newtonian mechanics fails. This may mean that the Theory of General Relativity needs an extension or that we do not yet understand what “space-time” and “mass” are nor how they are fundamentally related. Perhaps it just boils down to the problem of us not understanding the vacuum.

The four failures of Newtonian dynamics can be rephrased as follows: The real physical world, when interpreted strictly through Newtonian eyes, misbehaves badly on small spatial scales, at high velocities and at large and small accelerations. Clearly, Newtonian mechanics and dynamics are a useful, but limited, mathematical description of the physical world.

Similarly as above we can state that (some) scientists of the day have bits and pieces of partially computable theoretical understanding (e.g. galactic rotation curves, the Bullet Cluster, tidal dwarf galaxies, the Local Group of galaxies, modified dynamics).  But a full theory of gravitation and mass is still lacking.

We can thus rest in comfort that in the realm of low accelerations dynamics is MOdified rather than Newtonian and that we live in a world without Cold or Warm Dark Matter. But, the fundamental theory underlying MOND or MOG (the final word in physics explaining space-time and mass) is yet to be discovered. This is a great present-day chance for mathematical physicists to get to play.

 

Exciting Times – Our Times

The present epoch is an extraordinarily exciting time indeed: How was it to live, as a physicist, during the legendary time of the theoreticians Einstein, Bohr, de Broglie, Planck and Heisenberg and Schroedinger? Well, perhaps much like today which is, perhaps, the time of Milgrom, Bekenstein, Moffat, Blanchet, Zhao, Famaey and others perhaps yet to emerge.

by Anton Ippendorf, Pavel Kroupa and Marcel Pawlowski (01.09.2010, “The Fritz Zwicky Paradox: the fourth failure of Newton (MOND, MOG, etc)” in “The Dark Matter Crisis – the rise and fall of a cosmological hypothesis” on SciLogs). See the overview of topics in The Dark Matter Crisis.