Question C.I: What are the three best reasons for the failure of the LCDM model? I: Incompatibility with observations

Summary:

The development of the concordance cosmological model (CCM) over the past 40 years is based on the addition of at least three unknown ("dark") physical phenomena (inflation, cold dark matter, dark energy), in an attempt to make Einstein's field equations account for the distribution of matter on galactic and larger scales. None of these has been understood or experimentally verified to this day. They may constitute true discoveries of new physics, much in the spirit of past episodes when, for example, Neptune and the neutrino were postulated to exist on the basis of observations that were not understood. But these dark additions also have a parallel in the Ptolemaic model, which was built on a series of complex additions to circular motions in order to provide a calculation tool for the Solar System prior to the discovery of Kepler's and later Newton's laws. On close scrutiny the latter analogy appears to be the apt one, because the CCM is not able to account for the observed distribution of matter on scales of 10 Mpc and less, where a massive computational effort by many groups has quantified the theoretical distribution of matter. Meanwhile, new dynamical laws have been discovered which are extremely successful in accounting for the appearance and motion of matter on galactic scales and above. At the same time, it is emerging that the CCM is not unique in accounting for the large-scale matter distribution, for Big Bang nucleosynthesis, or for the cosmic microwave background radiation. This suggests rather unambiguously that our understanding of gravity is not complete. This conclusion, obtained purely from astronomical data, is nothing else but the statement that we do not have a good physical theory of matter, mass, space and time, nor do we know how and if they can be unified.

 

Background:

As introduced in the previous contribution to The Dark Matter Crisis, Question A: Galaxies do not work in LCDM, sociology and majority views, PK was recently contacted by a few people; excerpts from some of the questions asked and from the replies are given here, as they help to illustrate some of the issues at hand. The questions are:

A) So the LCDM model fails on scales smaller than about 8 Mpc?

B1) What is a galaxy?

B2) What is a galaxy? (Addendum on the relaxation time)

C) What are the three best reasons for the failure of the LCDM model?

I: Incompatibility with observations (this contribution)

II: MOND works far too well!

III: Fundamental theoretical problems

 

D) What about the Bullet cluster? And what about the Train-Wreck cluster Abell 520?

E) Why is the mainstream community so reluctant to go along with accepting the failure of LCDM?

This contribution deals with Question C, which may be taken to be central to The Dark Matter Crisis, while upcoming contributions will concentrate on the remaining questions.

 


 

The three best reasons for the failure of the LCDM model: 

They can be summarised in three categories. Here is category I. Categories II and III can be found in separate contributions as outlined above.

I) Incompatibility with observations:

Failure upon failure requires multiple dark additions:

The logical framework we are discussing (see also The standard model of cosmology) rests on assuming Einstein's General Theory of Relativity (GR) to be a true description of the interconnection of matter, mass-energy and gravity. This is indeed a very well motivated assumption, because laboratory experiments have confirmed that on Earth GR is valid to extremely high precision. Peculiarities of the orbit of Mercury (its perihelion shift) and of pulsars are also very well explained by GR.

However, this assumption fails when galaxies or the universe as a whole are considered, unless new, unknown physics is postulated to be dominant on this scale.

Assuming GR to be valid implies that the universe begins in a highly compact state. However, the Universe is observed to be flat already "at the next instant" after the Big Bang. Also, every part of the universe we know has the same physics, and so must have been in causal contact, which is impossible if the universe had always expanded in the way it is seen to expand today.

So inflation (I) is introduced (leading to the GR+I model) as a mathematical trick to inflate the universe by a factor of at least 10^78 in volume in an incredibly short time (about 10^-32 seconds), such that all parts of the observable universe were in causal contact before inflation and such that the universe is flat by the time it can first be observed (in the cosmic microwave background radiation field). But then structure formation still does not work, because the structures (large-scale filaments, galaxy clusters, elliptical galaxies) are observed to evolve too rapidly with time; they appear too early after the Big Bang.
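To put a number on the inflationary growth: a factor of 10^78 in volume is a factor of 10^26 in linear scale, i.e. about 60 e-folds of expansion, which is the figure usually quoted as the minimum amount of inflation. A minimal sketch of this arithmetic (the 10^78 factor is the one quoted above; everything else is elementary):

    import math

    volume_factor = 1e78                       # minimal growth in volume, from the text
    linear_factor = volume_factor ** (1 / 3)   # growth of the scale factor itself
    n_efolds = math.log(linear_factor)         # number of e-folds, N = ln(a_end / a_start)

    print(f"linear growth factor: {linear_factor:.1e}")   # ~1e26
    print(f"e-folds of inflation: {n_efolds:.0f}")        # ~60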

Cold dark matter (CDM) is introduced, which helps the structures to grow in the GR+I+CDM model, and it needs to be cold, or at most warm, to account for the growth of structures. The idea here is that this hypothetical matter does not interact with photons and so can decouple and begin to re-arrange itself gravitationally long before the normal (baryonic) matter decouples from the photons. The dark matter particles must be moving slowly after the Big Bang, that is, they must have a large mass (e.g. WIMPs), as otherwise they would not be able to clump through gravity to make the seed dark matter halos within which the later galaxies emerge in the model. Hot dark matter would consist of light particles moving at speeds close to that of light, which would not be able to clump to make the halo seeds in this GR+I+dark matter model.

In the GR+I+CDM model the seed halos accrete more dark matter and other seed halos. The waxing dark matter halos are able to accrete normal matter, which is cooling as a result of cosmic expansion. From this normal matter stars form making the first galaxies.

Note that the model requires about five times more dark matter than normal matter, and we emphasize that a candidate DM particle has not been found in a laboratory so far. Its existence thus remains speculative at present.

But even this GR+I+CDM model, which already relies on most of the universe being made up of unknown ingredients (I+CDM), still does not fit the data: by studying distant standard-candle explosions (supernovae of type Ia) it has been found that the universe is today already larger than it ought to be. In fact, its expansion seems to be accelerating. This can be obtained in the GR+I+CDM model only by introducing another field similar to inflation (I), but one which is just now becoming active or important, thereby ripping the universe apart at an ever increasing rate. This is described mathematically in the equations by the cosmological constant Lambda, and is often also referred to as dark energy (DE), such that the currently complete standard model becomes the GR+I+CDM+DE = LCDM model. In this model, which is described by about 14 parameters, the universe consists of 95% dark unknown stuff (dark matter plus dark energy, next to inflation).

This DE is a constant energy density, that is, every cubic centimeter contains the same amount of DE. Because the Universe expands (in fact at an ever increasing rate), the total DE increases (at an increasing rate). Thus, energy is not conserved in this model, while energy conservation is usually considered a fundamental principle in physics. One can postulate that this is OK, since the universe may not be an isolated system, and in any case, energy conservation in GR is a difficult problem. But this is equivalent to postulating an unknown "(dark) outside (DO)" (perhaps in higher dimensions) to solve the energy crisis (implying that the LCDM model would be GR+I+CDM+DE+DO). Prof. John A. Peacock writes at the end of section 1.5 of his book "Cosmological Physics" (1999, Cambridge University Press):

In effect the vacuum acts as a reservoir of unlimited energy, which can supply as much as is required to inflate a given region to any required size at constant energy density.
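To make the energy bookkeeping explicit: if the DE density per unit volume is constant, the DE content of any comoving region grows in proportion to the volume, i.e. as the cube of the scale factor. A minimal sketch (the scale-factor values are illustrative):

    # Total dark energy in a comoving box grows as the volume, E_DE ~ a^3,
    # if the DE density rho_de is constant. The values of a are illustrative.
    rho_de = 1.0                  # constant DE density (arbitrary units)
    for a in (1.0, 2.0, 10.0):    # scale factor: today, and after further expansion
        volume = a ** 3           # volume of a box of unit comoving size
        print(f"a = {a:5.1f}:  E_DE = {rho_de * volume:8.1f}")
    # The energy content is not conserved: it grows without bound as a grows.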

Assuming this GR+I+CDM+DE+DO model to be valid (the DO assumption linked to energy non-conservation is usually never mentioned), one can compute how the universe evolves (see Precision Cosmology below). Large research groups in many places (e.g. Potsdam, Heidelberg, Munich, Zurich, Durham, Swinburne) are doing just this. The computations are relatively straightforward, because they are Newtonian, that is, GR need not be used. The only real complication comes in through the star-formation and gas physics, which happen on very small scales of less than a pc. While the claim by the cosmological community is that star formation is a badly understood process, observations of star formation in the Milky Way and nearby galaxies actually give us a pretty good knowledge of what happens when gas turns into stars.

Some cosmologists would therefore erroneously state that the "stuff" that happens on galaxy scales is no more than gastrophysics, that is, no more than weather or just messy cooking, such that the whole model cannot be tested using observations of galaxies.

But this is wrong, because a vast industry of computational cosmologists has been able to study the formation and evolution of galaxies, even down to satellite galaxies, well, even down to earth-mass dark-matter halos with a size of the Solar System that might be interacting with our Solar System (Diemand, Moore & Stadel 2005, Nature):

“We expect over 10^15 to survive within the Galactic halo, with one passing through the Solar System every few thousand years”.

Statements on the galaxy-content of each of the dark matter halos are possible because whatever the small-scale baryonic physics does, it is nevertheless subject to conservation laws (conservation of energy and angular momentum), such that the galaxy-sized structures that emerge in the computer must obey these. And, as stated above, we do have good information on star formation.

 

Explicit tests:

It is therefore clear that in comparing the standard (= LCDM) model on scales of galaxies with the real universe, we do not check whether some gas cloud or star is at a particular position. What one does instead is to test the overall, generic properties of the systems under study. For example, we might wonder how many galaxies are rotating thin disk or spiral galaxies, or how many galaxies are ellipticals without much rotation, or how many satellite galaxies a typical galaxy of some brightness has. Or, how are galaxies distributed in a region of space about 10 Mpc across?

The many parameters that define the GR+I+CDM+DE model have been measured precisely by now, such that we currently live in the age of Precision Cosmology. These measurements come from the cosmic background radiation field and the observed distribution of matter on large scales. Simon White presents such evidence at the Dark Matter Debate. LCDM is thus extremely well constrained, that is, there is not much room left for changes in its parameters, assuming of course that the observations and the data analysis have been done correctly. So, within the precisely constrained LCDM model, one can now use super-computers to calculate how matter arranges itself on various scales. For example, one can calculate (the cosmologists often say "predict", although this is not a correct usage of the word) how dark matter halos are distributed in the sheets and filaments, and how structures such as the Local Volume (about 10 Mpc across) or the Local Group of Galaxies (about 1.5 Mpc across) appear in the model.

(NB: the word "predict" is nowadays usually used by cosmologists to mean "calculated" – the true meaning of "predict" is, however, to quantify some phenomenon before it is observed. Thus, Tidal Dwarf Galaxies (TDGs) were predicted in MOND to lie on the Tully-Fisher relation before the observations verified this to be the case. In LCDM, TDGs cannot be on the Tully-Fisher relation if the Tully-Fisher law is defined by the dark-matter dominated "normal" disk galaxies.)

A truly massive effort documented in countless papers

(for example, arXiv:1101.0816, arXiv:0905.1696, arXiv:0711.2429, astro-ph/0501333, astro-ph/0502496, astro-ph/0503400 all argue that the satellite galaxies can be explained within LCDM)

has been going on for more than ten years to calculate how galaxies emerge and evolve, and how their satellite galaxies are distributed. Clearly this massive effort proves that it is in fact possible to calculate the appearance of structures on galactic scales in the LCDM model.

Today it is clear that this GR+I+CDM+DE model makes precise predictions as to what the generic properties of galaxies are and how they are distributed on scales of 10 Mpc and less. Most if not all of the papers conclude that the particular problem they are approaching can be solved in LCDM, while usually other issues are neglected (e.g. many of the papers deal with the spatial distribution and the number of the satellite galaxies, but ignore their disk-like arrangement about the Milky Way; or they invoke various star-formation and energy-feedback algorithms to attempt to solve the missing satellite problem but then the resulting model is not checked for consistency with galaxies in the Local Void – see below).

The overall portrayal of the status of LCDM by the members of the community involved in this work is then that the LCDM model is quite fine. LCDM is so successful on large scales that problems on smaller scales tend to be seen as not being major. This is somewhat surprising, since these issues arise from the best observational data that we have: the data on the Local Group and the Local Volume, i.e. data on objects in our cosmic neighborhood.

Concerning the accuracy of the LCDM-model, it is important to understand that a good description of data by a model does not prove that the model is correct. A model or theory can in fact always only be tested and (possibly) falsified, but can never be proven as a matter of principle.

And demonstrating that a mathematical model fits on certain scales does not mean that there may not be very different models that fit just as well.

A historical case in point is the Geocentric or Ptolemaic model: it survived for millennia because it fit the overall world view (We are The Center and there was a Creation Event) and because it was able to account for the observations well, even though it needed to be enhanced through the addition of epicycles and other mathematical artefacts: "It was accepted for over a millennium as the correct cosmological model by European and Islamic astronomers." The Geocentric (or more correctly the Ptolemaic) model is a good albeit complicated calculation tool for the Solar System (you could still use it today), with which rather precise predictions are possible concerning the positions of the planets and occultations. But, as we know today, it is an unphysical model, and it was ultimately rejected because of modern data (after the 1600s) from beyond the Solar System as well as Galileo's observations (and it is well known how difficult if not impossible it was for Galileo to convince his peers of what his telescope was showing).

Given that the LCDM model fits well on large scales and is a useful calculation tool, and given the deep implications for our understanding of cosmology and fundamental physics if it were physical (rather than just a mathematical tool), it is important to test the LCDM model. This is similar to Galileo's times: to check the claims of the dominant viewpoint (perfect celestial bodies moving around the Earth), anyone could build a telescope and test the Ptolemaic model. For example, if the Sun has spots it is not perfect, and if stars show parallax then the Earth moves about the Sun!

We have been performing tests of the LCDM model by taking the observational data and the model calculations published until 2010 by many groups (e.g. those mentioned as astro-ph contributions above). The results of these tests are published in Kroupa et al. 2010. It turns out that LCDM fails on every test performed.  For completeness it ought to be mentioned that we are also testing the leading alternative MOND.

 

1) For example:  Too many dark-matter-dominated satellite galaxies expected

Local DM density from the Via Lactea 2 simulation, showing thousands of subhalos. (Source: Diemand et al. 2008)

Structures grow hierarchically. That is, larger dark-matter structures arise from the collisions of smaller ones. The smaller dark-matter structures typically do not dissolve, but orbit about the emerging large dark-matter halo. Each dark-matter halo is thus full of thousands of dark-matter satellite halos. If each satellite halo were able to host star formation, then our Milky Way, Andromeda and other similar galaxies would have many thousands of satellite galaxies. But only about two dozen have been found. This missing satellite problem is an old problem (Klypin et al. 1999; Moore et al. 1999), having arisen as soon as computers became powerful enough to do CDM calculations with higher resolution.

To solve it, most of the satellite dark halos must somehow be forbidden to make stars. This problem is not fully solved, as is discussed in our research paper, but cosmologists would argue that they are well aware of the problem and it is not new. Many groups claim to have solved it, although they argue amongst each other which of the many proposed solutions is the correct one, some of them being mutually exclusive (e.g. Kazantzidis et al. 2004; Kravtsov et al. 2004; Tollerud et al. 2008; Koposov et al. 2009).

And, given the stochastic nature of the merging processes, the number of bright dark-matter dominated satellite galaxies calculated in LCDM to be around each major galaxy such as the Milky Way can vary: some dark matter host halos may acquire a few more than others.

 

2) For example:  Tidal Dwarf Galaxies (TDGs)

The Tadpole galaxy showing the formation of new dwarf galaxies along its tidal tail. Image credit: NASA, the ACS Science Team and ESA

 

Structures grow hierarchically. That is, larger structures arise from the collisions of smaller ones. In such collisions between disk galaxies with gas, new dwarf galaxies can be born from the gas expelled during the collision (visualise this by water being ejected from a pool when a stone falls in: the ejected water grows through surface tension into larger drops as it flies through the air). Observed examples of such collisions are the Antennae or the Tadpole galaxy. The matter is expelled in the form of tidal arms or tidal tails.

Indeed, in the Tadpole a number of new dwarf galaxies can be beautifully seen along the tidal arm. A recent research paper has studied the occurrence of such systems of tidal-tail star clusters and dwarf galaxies in the real universe (Mulla et al. 2011, ApJ). There are many other research papers reporting the observed formation of tidal dwarf galaxies in interacting galaxies.

Formation of dwarf galaxies in the numerical simulation of a galaxy interaction (Bournaud et al. 2007).

Final frame of the movie of the formation of TDGs in a galaxy interaction by Wetzstein, Naab & Burkert (2007), taken from the TDGBonn2009 conference website.

 

The number of such tidal dwarf galaxies (TDGs) formed has been calculated within the LCDM model. This is possible because one can compute the typical rate with which galaxies collide in the model. This number of TDGs turns out to be so high that all dwarf elliptical (dE) galaxies should be such TDGs (Okazaki & Taniguchi 2000). And it has been shown that the TDGs do not disappear: they do not dissolve because of the star formation activity within them (Recchi et al. 1997), nor can they be destroyed as they orbit around the host galaxy (Kroupa 1997; Klessen & Kroupa 1998). TDGs survive, and dynamical friction is ineffective for TDGs with masses smaller than about 10^8 solar masses, so they do not fall back onto their parent galaxy.
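To see why dynamical friction is ineffective at these masses, here is a rough order-of-magnitude estimate using the standard Chandrasekhar friction timescale; the orbital radius, circular velocity and Coulomb logarithm below are illustrative values, not numbers taken from the papers cited above:

    G = 4.30e-6                  # gravitational constant in kpc (km/s)^2 / Msun
    KPC_PER_KMS_IN_GYR = 0.978   # 1 kpc/(km/s) expressed in Gyr

    def friction_time_gyr(r_kpc, v_c_kms, m_sat_msun, ln_lambda=10.0):
        # Chandrasekhar dynamical-friction sink time for a satellite of mass
        # m_sat on a circular orbit of radius r in a halo of circular speed v_c.
        return 1.17 / ln_lambda * r_kpc**2 * v_c_kms / (G * m_sat_msun) * KPC_PER_KMS_IN_GYR

    # Illustrative TDG: 10^8 Msun orbiting at 100 kpc in a 200 km/s halo
    print(f"t_fric ~ {friction_time_gyr(100.0, 200.0, 1e8):.0f} Gyr")   # hundreds of Gyr

The result is of the order of 500 Gyr, vastly longer than the age of the universe, so such a TDG simply stays on its orbit.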

Further, nature uses other ways of making even more dwarf galaxies: a disk galaxy plunging through a galaxy cluster is known to be stripped of its gas by the hot intra-cluster gas. A Japanese group has recently found star formation in dwarf-galaxy-sized objects trailing such a disk galaxy, formed from the gas stripped from it (Yoshida et al. 2008). They refer to these dwarf galaxies as "fireballs".

TDGs and fireballs are objects that look like galaxies although they cannot contain much dark matter. They form as gas-rich dwarf irregular galaxies and are rotationally supported dwarf disk galaxies, because the orbiting gas in the tidal arm is accreted onto the self-generated gravitational potential. Once formed, such a galaxy would remain like a dwarf irregular galaxy if it escapes (Hunter, Hunsberger & Roye 2000, ApJ). If it remains in orbit about the parent or host galaxy, it will evolve into a dwarf elliptical (dE) or dwarf spheroidal (dSph) satellite galaxy: because of tides acting on it from the host, its gas is channelled to its central region where it forms stars, and/or the gas is stripped through ram pressure from the hot gas around the host galaxy.

Thus Okazaki & Taniguchi's work shows that if LCDM is right, then so many TDGs are produced as to account for all known dE galaxies, not even counting the fireballs.

This is catastrophic for LCDM because it means that there would be no dark-matter dominated dwarf galaxies (TDGs cannot have dark matter because they cannot capture many dark matter particles as shown by Barnes & Hernquist 1992, Nature). Note that the absence of dark matter is nicely consistent with the observation that dwarf elliptical galaxies indeed are not dark matter dominated (Toloba et al. 2011).

We thus have an empirical confirmation of Okazaki & Taniguchi’s work: dE galaxies indeed do not require a dark matter content.  This leads to the Fritz Zwicky Paradox which we have dealt with on two past occasions: I. The Fritz Zwicky Paradoxon and II. The Fritz Zwicky Paradoxon and its solution.

Now, a convinced LCDM cosmologist might just trivially reply: if we define a galaxy to be a gravitationally bound system with dark matter, then, clearly, for dE "galaxies" to be galaxies, the dark matter must be at larger radii where it does not affect the motion of the stars. This is clever: we can simply move the invisible dark matter away from the central region, where it should really be, so it cannot be detected through the motions of the stars. And the physics responsible for this expulsion of dark matter is largely unknown, but is speculated to be due to the energy radiated by stars (stellar feedback) or orbiting gas clumps (note that all attempts to show that this actually works have failed unless unrealistic assumptions are made). But that is OK, because small-scale physics is gastrophysics or weather anyway. That this scientific trick is actually being applied can be seen in this research paper.

 

3) For example: The Local Group and the similarity of disk galaxies

The Local Group of Galaxies is dominated by two major disk galaxies that are similar, reside in similarly massive dark matter halos, and have similar satellite galaxy populations.

How likely is the formation of such a group with its generic properties according to the GR+I+CDM+DE model? It turns out to be very unlikely: less than 0.01 per cent!

This statement is based on the LCDM models of Milky-Way (MW) type galaxies in comparison to real galaxies. It results from selecting, in a computer model of the universe, those dark matter halo masses which are of similar mass to that of the Milky Way galaxy and then calculating the fraction of galaxies that are of similar brightness as the Milky Way or Andromeda.

Noteworthy here is that in their abstract Libeskind et al. (2009), who did just such cosmological computations, suggest that MW-type galaxies with a similar satellite population occur often, in about 35 per cent of the cases. This would suggest that the case of a group consisting of two such major galaxies and their satellite systems (like the Local Group) is not a rare case at all!

This apparent contradiction can be explained by noting that Libeskind et al. quote in their abstract numbers for a subsample of halos that also have to fulfill a number of other criteria. These must not be omitted when calculating the probability to find a galaxy like the MW inside a halo of the appropriate mass. Let us have a closer look:

Consider all halos with a mass similar to that of the MW, i.e. about 10^12 solar masses. Only about 10% of these halos contain a galaxy of the brightness of the MW. About 14% of these galaxies fulfill a second condition, namely that they have at least 11 dark-matter-dominated but luminous satellites, like the MW. And 35% of these simulated galaxies also fulfill a third condition, namely that the dark-matter-dominated luminous satellite galaxies of the host galaxies move in a similar way to the ones of the MW. Thus, the probability for a halo with 10^12 solar masses to fulfill all three conditions (like the MW) is 0.4%.

Now consider the halo of the Andromeda galaxy. It is of similar mass to that of the MW, and observations of the Andromeda galaxy show that this halo fulfills at least the first two conditions mentioned above. The probability for that is 1.4%.

Finally, consider the Local Group. With just the masses of the two dominant halos given (namely the halo of the MW and the halo of Andromeda), the probability that they are populated with the two observed galaxies is 0.4 per cent times 1.4 per cent ≈ 0.01 per cent (see Kroupa et al. 2010 for details).
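This chain of conditional probabilities is simple enough to check directly. A minimal sketch using the rounded percentages quoted above (Kroupa et al. 2010 work with the exact numbers, so the rounding of the final result differs slightly):

    # Conditional probabilities quoted above for a 10^12 Msun halo:
    p_luminosity = 0.10   # hosts a galaxy as bright as the MW
    p_satellites = 0.14   # ... which also has >= 11 luminous satellites
    p_kinematics = 0.35   # ... whose satellites also move like the MW's

    p_mw = p_luminosity * p_satellites * p_kinematics   # all three conditions
    p_m31 = p_luminosity * p_satellites                 # first two conditions

    print(f"P(MW-like galaxy in its halo):  {p_mw:.2%}")          # ~0.4-0.5%
    print(f"P(M31-like galaxy in its halo): {p_m31:.2%}")         # ~1.4%
    print(f"P(both, i.e. a Local Group):    {p_mw * p_m31:.4%}")  # ~0.01%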

These small likelihoods come about because in LCDM a present-day dark matter halo is built up from a very large number of mergers of somewhat smaller halos, a process that is known as hierarchical merging. The galaxy that lives inside the halo is the result of how much and in what way the normal matter gets into the halo. A halo can be in a dense region suffering many encounters and mergers, it can be fed by one or many filaments, or it can be in a quite sparse region.

To stand a chance of actually getting a Milky-Way type galaxy, a recent study by Agertz, Teyssier & Moore (2011) of forming disk galaxies in the GR+I+CDM+DE model had to resort to a trick. The authors select a cold dark matter halo which did not suffer many mergers. Near the beginning of their Section 3 they state:

The halo has a quiet merger history, i.e. it undergoes no major merger after z= 1, which favours the formation of a late-type galaxy.

But such a quiet merger history is the exception in the GR+I+CDM+DE model, in which galaxies strictly form through hierarchical merging, and this is a severe problem for the model, as Prof. Mike Disney has shown in his paper "Galaxies appear simpler than expected" (Disney et al. 2008, Nature). They write:

More generally, a process of hierarchical merging, in which the present properties of any galaxy are determined by the necessarily haphazard details of its last major mergers, hardly seems consistent with the very high degree of organization revealed in this analysis. Hierarchical galaxy formation does not explain the commonplace gaseous galaxies we observe. So much organization, and a single controlling parameter which cannot be identified for now, argue for some simpler model of formation.

Thus, real disk (or “late-type”) galaxies, which comprise nearly 80% of all galaxies, show far less variation than would be expected from simulations of galaxy formation within the LCDM-model.

Or to put it differently: The fact that the two dominant galaxies of the Local Group are so similar is apparently nothing exceptional, whereas the LCDM-model suggests that two dark matter halos of similar mass are expected to harbour very different baryonic galaxies. We call this the Invariant Galaxy Problem.

4) For example:  The Disk of Satellites (DoS)

The distribution of "classical" (yellow) and faint (green) satellite galaxies around the Milky Way (blue). From Kroupa et al. (2010).

How are the satellite galaxies distributed generically?

For example, can the disk of satellites of the Milky Way be accounted for by the LCDM model? And why do the 13 newly found satellites, with very different discovery histories, observing techniques and biases, end up defining the same disk as the 11 bright classical satellites?

Our MW has a diameter of about 50 kpc. Around it are about 24 satellite galaxies (about eight more are likely to be discovered in the Southern hemisphere, since the North- and South-Galactic distributions must be about symmetric).

The 11 brightest of these, which were largely found using photographic plates and have been known for a long time, lie in a disk-like structure with a diameter of about 500 kpc and a thickness of about 50 kpc (the Disk of Satellites, DoS). This disk of satellites sits nearly perpendicular to the 50 kpc wide MW disk.

What is strange is that the 13 newly found satellites, all of which are very faint and so can only be discovered using computer-aided digital sky surveys to find the few faint stars that belong to a new satellite galaxy, are also distributed in this same DoS (Figures 4 and 5 in Kroupa et al. 2010). Now, why on Earth are these very faint satellite galaxies, which have completely different discovery histories and methodology, distributed just like the bright ones? The SDSS survey volume used to find the very faint satellites is a cone on the Galactic sky on the Northern celestial hemisphere. This is why some astronomers would erroneously say that the very faint satellites follow an anisotropic distribution because the survey volume is not spherical.

This is wrong. It is wrong because if the survey volume were the cause of the observed anisotropy of the very faint satellites, then we would need to ask why the survey volume happens to follow the DoS of the bright satellites. In fact, this is not the case, because the survey volume is a more or less roundish cone rather than a stripe on the Galactic sky.

The only physically plausible reason why the very faint and the classical, bright satellites define the same DoS independently of each other is that they are related to each other. That is, they are correlated in phase-space. Such a high degree of correlation is simply not possible in the standard LCDM model, because in the model most of the satellite dark matter halos fall in individually and orbit on independent orbits about the MW. At best a small group of satellites could fall in together, forming, for some time, a phase-space correlated sub-population. But this amounts to no more than a handful of satellites, if at all. In order for such a group to remain identifiable as a DoS in the MW halo, the group would have to have fallen in only recently, at most a few Gyr ago. Such group infall has been postulated by a few authors to explain the DoS, but this proposition fails because the required diameter of such a group must be less than 50 kpc in order for the group to remain in a DoS which is about 50 kpc thick. Metz et al. (2009) have, however, shown that such groups of dwarf galaxies do not exist in the real universe.
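The improbability of such a flattened arrangement arising by chance can be illustrated with a small Monte Carlo experiment: draw satellite directions isotropically and ask how often the best-fitting plane through the Galactic centre comes out as thin as a DoS-like configuration. This is a simplified statistic, not the one used in Kroupa et al. (2010), and the 0.18 flattening threshold is an illustrative value:

    import numpy as np

    rng = np.random.default_rng(1)

    def min_rms_height(directions):
        # The plane through the Galactic centre that minimises the rms height of
        # the satellites has, as its normal vector, the eigenvector belonging to
        # the smallest eigenvalue of the direction outer-product matrix.
        eigvals, eigvecs = np.linalg.eigh(directions.T @ directions)
        return np.sqrt(np.mean((directions @ eigvecs[:, 0]) ** 2))

    n_sat, n_trials = 11, 20_000
    flattening = np.empty(n_trials)
    for i in range(n_trials):
        v = rng.normal(size=(n_sat, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)   # isotropic unit vectors
        flattening[i] = min_rms_height(v)

    dos_like = 0.18   # illustrative rms-height-to-distance ratio of a DoS-like disk
    print(f"median flattening of isotropic skies: {np.median(flattening):.2f}")
    print(f"fraction as flat as {dos_like}: {np.mean(flattening <= dos_like):.5f}")

The median flattening of isotropic mock skies comes out far larger than DoS-like values; the fraction of trials as flat as the threshold may well be zero even in many thousands of trials, which is the point.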

 

5) For example:  Contradictions between LCDM research groups

The missing satellite problem as well as the DoS problem are very serious issues for LCDM, which is why an impressive effort is undertaken world-wide to seek solutions within LCDM. It turns out that different research groups "solve" the satellite problem of the Milky Way independently of each other. But it also turns out that the solutions are mutually exclusive, that is, they contradict each other.

For example,

Deason et al. (2011, MNRAS, in v1 of the electronic preprint) suggest a solution. In their abstract:

We attribute the anisotropic spatial distribution and angular momentum bias of the satellites at z=0 to their directional accretion along the major axes of the dark matter halo. The satellite galaxies have been accreted relatively recently compared to the dark matter mass and have experienced less phase-mixing and relaxation — the memory of their accretion history can remain intact to z=0.

On the other hand, Nichols & Bland-Hawthorn (2011, ApJ) also solve the satellite problem but ignore the spatial anisotropy (i.e. DoS) problem. In their abstract we read:

This model of evolution is able to explain the observed radial distribution of gas-deficient and gas-rich dwarfs around the Galaxy and M31 if the dwarfs fell in at high redshifts (z~3-10).

This example demonstrates that only small aspects of the whole problem can be solved. For example, that the satellite galaxies of the Milky Way are gas poor can be understood if their gas was stripped, but this then requires them to have fallen in a long time ago. On the other hand, to explain the DoS the satellites must have fallen in recently …

It demonstrates that an overall consistent solution within LCDM does not seem possible.

 

6) For example:  The internal properties of satellite galaxies

A major failure comes from the internal properties of the satellite galaxies. There are two tests:

A) As stated above, the law of energy conservation is a fundamental property of physical systems. For satellite galaxies this implies that dark matter sub-halos (that end up hosting visible satellite galaxies) which are more massive can gather more normal matter and will therefore make more stars and appear brighter. On average! It needs more energy to remove gas from a more massive satellite dark halo. A heavy dark matter sub-halo may also suffer a catastrophic encounter with the host dark matter halo's central density if it is on a very radial orbit, such that it may lose a large fraction of its own dark matter halo. But, on average, a heavier dark matter halo must host a brighter galaxy. How much brighter a satellite galaxy with a heavier dark matter halo is can be calculated in LCDM.

Many groups have done this automatically when they study satellite galaxies in their LCDM models. In our paper (Kroupa et al. 2010) we tested this correlation between model satellite brightness and the dark matter halo mass for all existing recent models, and every single one of the models shows a significant, i.e. pronounced, relation: Heavier model cold dark matter halo masses host brighter model satellite galaxies.

One can also measure the dark matter halo masses of the real satellite galaxies by observing the motion of their stars: the stars move more quickly than they should and this can be explained either with non-Newtonian dynamics (e.g. MOND or MOG), or with a cold dark matter halo in the standard LCDM model.

The result is that the real satellite galaxies all have the same cold dark matter halo masses, although they have a vast range of brightnesses. Each weighs about 10^9 solar masses, and the mass in stars ranges from 10^3 solar masses to about 10^7 solar masses.  We calculated the probability that this would be the case if the LCDM model were correct, and this probability turns out to be so small that the LCDM model needs to be discarded just based on this one test alone!!

That is, the dark matter halo masses of the satellite galaxies are unphysical. There are no dark matter halos around the satellite galaxies.

B) The second independent "internal-satellite property" test: ignoring the previous result, if the satellites were embedded in their dark matter halos, each of which weighs about 10^9 solar masses, then the visible, bright satellite galaxy, which sits deep inside this dark halo, would typically be well shielded against external tidal fields, especially so if the satellite galaxy is more than 100 kpc away from the Milky Way.

The stars in each of the satellites are observed to move with about 7 km/s ≈ 7 pc/Myr (compare: the Sun moves with about 220 km/s about the Milky Way centre). Thus, in about 100 Myr each star has moved once through a typical satellite galaxy, since the visible parts of the satellites are only a few hundred pc across. So after a few 10^8 yr the stars in a satellite galaxy are fully mixed throughout the visible satellite.

If, at some time, there was a structure in the satellite, this structure would smear apart in a few 10^8 yr. Since the satellites are about 10^10 yr old, and since they orbit around the Milky Way in about 10^9 yr, essentially all of them should look very smooth and round. For example, globular clusters are smooth and round for this same reason – the stars phase-mix away any possible sub-structure within the time it takes a typical star to move through the cluster (about a million years) or satellite galaxy (a few hundred million years).
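A minimal sketch of this timescale argument, using the numbers quoted above (the 600 pc size is an illustrative stand-in for "a few hundred pc"):

    KMS_TO_PC_PER_MYR = 1.023   # 1 km/s is about 1.02 pc/Myr

    sigma = 7.0 * KMS_TO_PC_PER_MYR   # stellar speed inside a satellite, pc/Myr
    size = 600.0                      # visible extent of the satellite, pc (illustrative)
    age = 1.0e4                       # age of the satellite, Myr (~10^10 yr)

    t_cross = size / sigma
    print(f"crossing time: {t_cross:.0f} Myr")             # ~80 Myr
    print(f"crossings over the age: {age / t_cross:.0f}")  # ~100, so substructure should be long gone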

The Hercules dwarf galaxy, a satellite of the Milky Way. Fig. 6 from Kroupa et al. (2010).

But it turns out that many of the satellite galaxies look squashed, disturbed, distorted, asymmetrical, and some have bumps in them. An example of a satellite more than 100 kpc away is provided in Fig. 6 of Kroupa et al. (2010). And as another example, Palma et al. (2003) document the peculiar morphology of the Ursa Minor dSph satellite. Fornax, the brightest satellite, which lies about 140 kpc away from the Milky Way, has an off-center density maximum, appears squashed and has a twisted inner appearance (Demers et al. 1994). In the Dark Matter Debate, Simon White put Fornax forward as the satellite which fits the LCDM model best of all satellites in terms of its derived dark-matter mass. But clearly, this is not consistent with its complex inner structure. These are only three examples of many.

Thus, the satellites cannot be contained in the large and relatively massive (10^9 solar mass) dark matter halos which are a central feature of the LCDM model.

So again we arrive at the same conclusion, but using entirely independent evidence: the dark matter halo masses of the satellite galaxies are unphysical – the stellar motions have nothing to do with cold dark matter halo masses.

But, if the satellite dark matter halos are unphysical, then the whole LCDM model collapses, because the Milky Way would violate the requirement by the model to have dark matter dominated satellite galaxies.

 

7) For example:  dwarf galaxies in LCDM don’t match real dwarf galaxies

Understanding the properties of dwarf galaxies in general is another major problem in LCDM. For example, Sawala et al. (2011) write in their abstract:

We present cosmological hydrodynamical simulations of the formation of dwarf galaxies in a representative sample of haloes extracted from the Millennium-II Simulation.

[. . .]

The dwarf galaxies formed in our own and all other current hydrodynamical simulations are more than an order of magnitude more luminous than expected for haloes of this mass.

 

8) For example:  the downsizing problem

Related to the above problem, in LCDM dwarf dark matter halos form first and later some of them merge such that massive dark matter halos appear later than the dwarf ones.

Since baryons follow the dark matter, a prediction (using this term correctly) of LCDM was that dwarf galaxies should be older on average than the bright massive galaxies. This turns out to be completely wrong.

What the observations told us is that dwarf galaxies are typically younger than the massive galaxies.  This is referred to as the downsizing problem.

In order to solve this problem, the LCDM  community needed to invent mechanisms that delayed the formation of stars in dwarf halos while enabling massive galaxies to emerge quickly after the Big Bang. One possibility would be to postulate that in dense environments stars formed quickly as the massive elliptical galaxies were assembling, while dwarf halos in less-dense regions could not form stars because the gas was still too hot or was heated from various sources.

A problem with this ansatz is that denser regions actually have a higher star-formation activity, since there is more dense gas. This leads to local heating, which in the models is invoked to suppress star formation (the same feedback that is supposed to allow disk galaxies to form without bulges): early supernovae heat up the gas, which then needs time to cool before it slowly accretes onto the galaxy. So massive galaxies should not form so rapidly.

In low-density regions on the other hand, where the dwarfs ought to form, there are no major heating sources to stop gas cooling into the dwarf halos. So the dwarfs should form quickly.

The resulting models are messy, and usually they cannot account for the whole galaxy population. See for example Guo et al. (2011) and/or Firmani & Avila-Reese (2010).

Downsizing has not been solved to this day.

9) For example:  disk galaxies are too often bulgeless

Even worse (for the GR+I+CDM+DE model), between 58 and 74% of all real (observed!) disk galaxies do not have a bulge, as demonstrated only recently by Kormendy et al. (2010). They write in their abstract:

"We conclude that pure-disk galaxies are far from rare. It is hard to understand how bulgeless galaxies could form as the quiescent tail of a distribution of merger histories. Recognition of pseudobulges makes the biggest problem with cold dark matter galaxy formation more acute: How can hierarchical clustering make so many giant, pure-disk galaxies with no evidence for merger-built bulges?"

But calculations in the LCDM model have always shown that the vast majority of galaxies have large bulges or are spheroidal altogether. For example, Piontek & Steinmetz (2011, MNRAS) write in their abstract:

“A mechanism to create bulge-less disc galaxies in simulations therefore remains elusive.”

This is because in the LCDM model, structures grow hierarchically by the merging of countless smaller sub-structures. When the different sub-structures merge, their kinetic energy is used up in heating the gas in these sub-structures. In the end, the gas cools down again by radiating the energy away and sinks towards the center of the whole object. But the resulting object assumes a spheroidal shape through these processes, rather than forming a large, thin, rotationally supported disk. All computations show this to be the case. The only way to halt the infall of too much gas is to blow the object apart by supernovae or artificially enhanced star formation and stellar feedback. But the models created to do this are unphysical.

For example, here is a recent state-of-the-art research paper in which the authors state in their abstract: "Photometric decompositions thus match the component ratios usually quoted for spiral galaxies better than kinematic decompositions, but the shift is insufficient to make the simulations consistent with observed late-type systems" (Scannapieco et al. 2010).

 

10) For example:  Dark matter emergence problem: a conspiracy

Notwithstanding the above major problems within the LCDM model to account for most observed galaxies (well, for galaxies in general actually), there is yet another significant unsolved problem, namely the Dark Matter Emergence Problem:

In any galaxy, dark matter always only appears when the surface density of normal matter falls below a critical value, as demonstrated by Gentile et al. (2009, Nature). Now note that, in Newtonian gravitation, the acceleration generated by a galaxy is proportional to its surface density. Also note that the transition from Newtonian dynamics to MOND is set by a critical acceleration (and thus a critical surface density). It is interesting that according to Gentile et al.'s analysis, dark matter magically appears in real galaxies only below this critical density.

This is natural in MOND, but within the LCDM model there is not a single clue based on known physical principles that may give rise to this observed fact. Is this not a strong hint?
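A rough estimate of this critical surface density follows from equating the Newtonian acceleration at the surface of a thin sheet, g = 2πGΣ, with the canonical MOND acceleration scale a0 ≈ 1.2 × 10^-10 m/s². These are textbook values, not numbers taken from Gentile et al.:

    import math

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    A0 = 1.2e-10     # MOND acceleration scale, m s^-2
    MSUN = 1.989e30  # solar mass, kg
    PC = 3.086e16    # parsec, m

    # Equate the thin-sheet Newtonian acceleration g = 2*pi*G*Sigma with a0:
    sigma_crit = A0 / (2 * math.pi * G)              # kg m^-2
    sigma_crit_astro = sigma_crit / (MSUN / PC**2)   # Msun pc^-2

    print(f"critical surface density ~ {sigma_crit_astro:.0f} Msun/pc^2")   # ~140

Below roughly this surface density, the internal accelerations of a galaxy drop under a0, which is exactly where the "missing mass" is observed to appear.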

This problem is very closely related to the Conspiracy Problem, which has already been discussed in Question A. It is an old problem of the LCDM model, and despite decades of computer simulations using ever improving and ever faster machines, absolutely no remedy is on the horizon.

 

11) For example:  Voids

Yet other tests include: how empty are the voids, notably in the Local Volume, and can the observed emptiness be reconciled with the sub-structures that must be there in the model? Again, this is a major problem unless other parts of the model are upset (e.g. one could force the void sub-structures to remain dark, but then the galaxies in the sheets would have the wrong properties). This is emphasised by Peebles & Nusser (2010, Nature). They write:

"We conclude that there is a good case for inconsistency between the theory and our observations of galaxies in the Local Void. Conceivably, the local sample is atypical; this will be checked as galaxy surveys improve. Perhaps survival of detectable galaxies is less likely in the Local Void, although that is not supported by the depleted state of dwarfs near large galaxies. Or perhaps we are learning that growth of structure is more rapid than predicted in the standard cosmology, more completely emptying low-density regions."

They also emphasise that there are too many really large galaxies just outside the main sheet, writing:

"30% of the largest galaxies are more than 2 Mpc above the Local Sheet. If galaxy luminosities were randomly assigned, this situation would have a 1% probability, but the probability is less than this in the standard picture of the cosmic web, in which more-luminous galaxies avoid less dense regions. These three could not be dwarfs masquerading as large galaxies; their circular velocities indicate the central masses of large galaxies. That is, the presence of these three large galaxies in the uncrowded region above the Local Sheet is real, and at well below 1% probability it is an unlikely consequence of standard ideas."

 

12) For example: A large fraction of normal matter is missing

The missing baryon fraction: within the Big Bang LCDM model we know precisely how many atoms of normal matter were produced during big-bang nucleosynthesis. These calculations are in good agreement with the observed abundance of helium relative to hydrogen, but fail to account for the amount of lithium observed. Even worse, most of the atoms that ought to have been created just after the Big Bang have not yet been found. Astronomers do not know where the majority of normal (baryonic) matter is (e.g. McGaugh 2007) – it has gone missing.

 

13) For example: Big Bang nucleosynthesis . . . ?

Big Bang nucleosynthesis makes the wrong Lithium abundance, as has been emerging ever more clearly. See this recent New Scientist contribution, and quoting from it:

"One thing that everyone does agree on is that things are getting worse. 'The lithium-7 problem is more serious than ever,' says Joseph Silk at the University of Oxford."

Here is a research paper addressing this issue: Cyburt et al. (2010).

Concluding Remarks: Either we live in a Bubble of Extreme Exception or LCDM is wrong

We have seen that a massive computational effort by many groups has been able to quantify the distribution of matter on scales of 10 Mpc and less. The standard cosmological model turns out not to be able to account for the observed distribution of matter on these scales. Meanwhile, new dynamical laws have been discovered which are extremely successful in accounting for the appearance and motion of matter on galactic scales and above – see Question C.II: MOND works far too well (will be online soon).

Especially noteworthy is that LCDM fails in all of the tests performed in our research paper. And many additional failures are documented above.

The likelihood that the observed data can be modelled by the LCDM model is very small for each test. Taking the tests together implies that LCDM is completely ruled out, or, one may conclude that we live in a Bubble of Extreme Exception (the BEE hypothesis to save the LCDM hypothesis).

We thank Joerg Dabringhausen for useful comments. By Pavel Kroupa and Marcel Pawlowski (08.03.2011). See The Dark Matter Crisis.



7 thoughts on “Question C.I: What are the three best reasons for the failure of the LCDM model? I: Incompatibility with observations”

  1. Problems with the LCDM model: "… our understanding of gravity is not complete. … in all of the tests performed in our research paper LCDM fails." Milgrom has pointed out that cold dark matter (if it exists) would have to be weakly interactive with respect to the EM and strong interactions but also WEIRDLY INTERACTIVE with respect to gravitation. I claim that the Rañada-Milgrom effect shall revolutionize cosmology by about June of 2012 CE. This real-or-apparent effect is that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + sqrt(15) * 10**-5. I now have two physical interpretations of M-theory; in one interpretation the Rañada-Milgrom effect is real and in the other interpretation the effect is apparent instead of real. See the posting "Gravity Probe B: patch effects or quantum gravitational effects?" at nks forum applied nks for my claim about the Gravity Probe B gyroscopes — gyroscopes OK but ignoring Milgrom's ideas not OK.

  2. Massive high-z galaxy cluster: A further challenge to the LCDM model is the massive galaxy cluster XMMU J2235.3-2557, which has been discovered recently by Jee et al. (2009, http://adsabs.harvard.edu/abs/2009ApJ…704..672J). It already has a total mass of about 6.e14 Msun at a redshift of z=1.4. For Gaussian initial conditions, the probability is about 0.5 per cent for it to be detected in a LCDM model (Jee et al. 2009).
    Although some cosmologists are simply stating that this probability is still high enough to be no challenge for LCDM, this issue seems to be serious enough that they already try to add further adjustments to the LCDM model, just to be on the safe side. Thus, it has now been suggested that coupled dark energy models, i.e. an interaction between dark matter and dark energy, can significantly enhance the probability to observe very massive galaxy clusters at high z (Baldi & Pettorino, 2011, http://adsabs.harvard.edu/…745-3933.2010.00975.x).
    This basically means that after postulating several dark components, another epicycle is just about to be added (dark matter -> dark energy -> inflation -> dark interaction), although the existence of its parent epicycles has by no means been proved so far.

  3. knowing: Hello Pavel Kroupa,
    You write that we do not have a good physical theory of matter, mass, space and time, nor do we know how and if they can be unified.
    This is very strange. What if the blacksmith did not know what iron is, the baker did not know what flour is, and the carpenter did not know what wood is?
    Without this knowledge you can forget your research! If you want to know more about matter, mass, gravity, inertia, space and time, you may send me a question. I can help you to understand it.

  4. “… the dark matter halo masses of the satellite galaxies are unphysical.” Cold dark matter halos don’t really work but cold dark matter gravitational lensing works much better than cold dark matter halos. If the -1/2 in the standard form of Einstein’s field equations is replaced by -1/2 + sqrt(15) * 10**-5, then:
    (1) Milgrom’s MOND is a corollary by using scaling factors;
    (2) gravitational lensing matches the cold dark matter theoretical lensing;
    (3) the Pioneer anomaly, the flyby anomaly, and the Gravity Probe B so-called “misalignment torques” are explained.
    Also note that
    (((4 pi) + .11)**-4)/(sqrt(15) * 10**-5) = 0.9999417 … this implies that the “dark-matter-compensation-constant” has roughly the correct magnitude in terms of the cosmological constant.

  5. Joachim Blechle: The baker, the blacksmith and the carpenter certainly know their materials, although nuclear engineers do not seem to know theirs…
    What we mean by "we do not have a good physical theory of matter, mass, space and time nor do we know how and if they can be unified" is that we do not know how inertial mass arises. For example, about 98 per cent of a proton's mass consists of binding energy; only about 2 per cent is contributed by the actual quarks. Can this be calculated from a fundamental theory? How do the 2 per cent arise? And how does the proton mass emerge from the background? This is, however, not our immediate research problem. We are testing existing gravitational theories (Einstein/Newton; MOND …) using astronomical data to constrain the gravitational framework.

  6. inertial mass-energy and astronomical data: "… we do not know how inertial mass arises. … This, however, is not our immediate research problem. We are testing gravitational theories (Einstein/Newton, MOND …) using astronomical data to constrain the gravitational framework." If dark matter particles exist, then is most of their inertial mass-energy being obscured or somehow cancelled? Does astronomical data lead inevitably to either a real violation or an apparent violation of the equivalence principle? Note that the cold dark matter depends upon neutralinos, axions, or other supersymmetric particles that are not only undetected but also not precisely predicted. I used the Pioneer anomaly data with the discrepancy factor (6 ± 1) * 10**-11 Hz and modified M-theory to deduce that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + sqrt((60 ± 10)/4) * 10**-5. This quantum gravitational correction immediately yields Milgrom's MOND as a good approximation — I claim all of MOND's successes for my theory. In my theory, Wolfram's cosmological principle forces a deterministic computational method upon M-theory. Wolfram's cosmological principle is that the maximum physical wavelength equals the Planck length times the Fredkin-Wolfram constant. This radical new principle is, in my opinion, what is needed to replace the contemporary LCDM theory by a modified M-theory that correctly explains most (or perhaps all?) of the puzzles of contemporary cosmology. Seiberg-Witten M-theory with neutralino physics and with D-brane noise as dark energy might be an improved version of contemporary LCDM.

  7. correction to previous post: I used the Pioneer anomaly data with the discrepancy factor (6 ± 1) * 10**-9 Hz/sec (the unexpected frequency shift per sec) and modified M-theory to deduce that the -1/2 in the standard form of Einstein's field equations should be replaced by -1/2 + sqrt((60 ± 10)/4) * 10**-5. The idea is that time-measured-in-seconds * sqrt(unexpected-frequency-shift-per-second) = Rañada-Milgrom-adjustment-factor * Einsteinian-gravitational-redshift. I made the assumption that (6 ± 1) * 10**-9 Hz/sec is valid data for the Rañada-Milgrom effect — the flyby anomaly data gives a Rañada-Milgrom-adjustment-factor of roughly similar size.
