A large number of dwarf galaxies in the Fornax cluster (Figure 1) appear to be disturbed, most likely by tides from the cluster's gravitational field. In the standard cosmological model (ΛCDM), the observable structure of the dwarfs is barely susceptible to gravitational effects of the cluster environment, because each dwarf is surrounded by a protective dark matter halo. This makes the observed perturbed Fornax dwarfs very hard to explain in this theory. However, these observations are easily explained in MOND, where dwarfs are much more susceptible to tides: they lack protective dark matter halos, and they become quasi-Newtonian as they approach the cluster center due to the external field effect.
Figure 1: Fornax galaxy cluster. The yellow crosses mark all the objects identified in the Fornax deep survey (FDS) for this region of the sky, the black circles are masks for the spikes and reflection haloes, and the red crosses mark the objects that pass the selection criteria to be included in the FDS catalog. Image taken from Venhola et al. 2018.
The impact of tides on the appearance of the dwarfs is illustrated in Figure 2, which shows the fraction of disturbed galaxies as a function of tidal susceptibility η in ΛCDM and MOND, with η = 1 being the theoretical limit above which a dwarf would be unstable to cluster tides. Moreover, there is a lack of diffuse galaxies (large size and low mass) towards the cluster center. This is illustrated in Figure 3, which shows that at low projected separation from the cluster center, dwarfs of any given mass cannot be too large, while larger sizes are allowed further out. Figure 3 thus shows a clear tidal edge that cannot be explained by selection effects, since the survey detection limit would be a horizontal line at 1 on this plot, such that dwarfs above it cannot be detected. Diffuse dwarf galaxies are clearly detectable, but are missing close to the cluster center. Another crucial detail in Figure 3 is that dwarfs close to the tidal edge are much more likely to appear disturbed, which is quantified in Figure 2 by the rising fraction of disturbed galaxies with tidal susceptibility η. The tidal edge is also evident in Figure 2 in that the dwarfs only reach up to some maximum value of η, which should be close to the theoretical stability limit of 1. This is roughly correct in MOND, but not in ΛCDM.
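As a rough illustration of what η measures, one can compare a dwarf's half-mass radius to a simple Roche-like Newtonian tidal radius. This is only a sketch with assumed numbers; the actual analysis computes η self-consistently in each theory, including the MOND external field effect.

```python
def tidal_radius(r_orbit, m_dwarf, m_cluster_enclosed):
    """Roche-like Newtonian tidal radius: r_tid ~ R * (m / (3 M))**(1/3),
    with R the orbital radius and M the cluster mass enclosed within R."""
    return r_orbit * (m_dwarf / (3.0 * m_cluster_enclosed)) ** (1.0 / 3.0)

# Assumed, purely illustrative numbers: a 1e8 Msun dwarf with a 1 kpc
# half-mass radius, 300 kpc from a cluster center enclosing 1e14 Msun.
r_tid = tidal_radius(300.0, 1e8, 1e14)  # ~2.1 kpc
eta = 1.0 / r_tid                       # half-mass radius / tidal radius
```

A dwarf with η well below 1 is safe from tides; as η approaches 1, tidal disturbance becomes likely.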
Figure 2: Fraction of disturbed galaxies for each tidal susceptibility bin in MOND (red) and ΛCDM (blue). Larger error bars in a bin indicate that it has fewer dwarfs. The bin width of the tidal susceptibility η is 0.5 in MOND and 0.1 in ΛCDM (each data point is plotted at the center of the bin). Notice the rising trend and the maximum η that arises in each theory.
Figure 3: Projected distances of Fornax dwarfs to the cluster center against the ratio Re/rmax, where Re is the dwarf radius containing half of its total stellar mass, and rmax is the maximum Re at fixed stellar mass above which the dwarf would not be detectable given the survey sensitivity. The dwarfs are classified as “disturbed” (red) or “undisturbed” (blue). The black dashed line shows a clear tidal edge – at any given mass, large (diffuse) dwarfs are present only far from the cluster center. This is not a selection effect, as the survey limit is a horizontal line at 1 (though e.g. some nights could be particularly clear and allow us to discover a dwarf slightly above this).
We therefore conclude that MOND and its corresponding cosmological model νHDM (see the blog post “Solving both crises in cosmology: the KBC-void and the Hubble-Tension” by Moritz Haslbauer) are capable of explaining not only the appearance of dwarf galaxies in the Fornax cluster, but also other ΛCDM problems related to clusters, such as the early formation of El Gordo, a massive pair of interacting galaxy clusters. νHDM also better addresses larger scale problems such as the Hubble tension and the large local supervoid (KBC void) that probably causes it, by means of enhanced structure formation in the non-local universe. These larger scale successes build on the long-standing success of MOND with galaxy rotation curves (“Hypothesis testing with gas rich galaxies”). MOND also offers a natural explanation for the Local Group satellite planes as tidal dwarf galaxies (“Modified gravity in plane sight”), and has achieved many other successes too numerous to list here (see other posts). Given all these results, the MOND framework appears better suited than the current cosmological model (ΛCDM) to meet the new astrophysical challenges that keep arising as the available astronomical data improve far beyond what was known in 1983, when MOND was first proposed.
In The Dark Matter Crisis by Moritz Haslbauer, Marcel Pawlowski and Pavel Kroupa. A listing of contents of all contributions is available here.
(Guest post by Elena Asencio, University of Bonn, January 16th, 2021)
It is currently accepted that structure in the Universe formed in a hierarchical way. In other words, smaller structures formed first and then merged into larger structures. The largest gravitationally bound structures in the Universe are the galaxy clusters. Since the predicted timescale on which these structures form depends on the adopted cosmological model and, consequently, on the assumed theory of gravity, galaxy clusters can be used to test both gravity theories and cosmological models on large scales.
In recent decades, improvements in telescope detection capabilities have made it possible to observe objects ever deeper in space. The further an astronomical object is from us, the longer its light takes to reach us. Therefore, deeper surveys allow us to observe what the Universe looked like in the fairly distant past. Some of the galaxy clusters detected in these deep surveys surpass the standard model (ΛCDM) predictions in terms of mass, size and/or galaxy-infall velocities, and could potentially pose a serious problem for the model.
El Gordo (ACT-CL J0102-4915) is a galaxy cluster with particularly extreme properties. It is located more than 7 billion light years from Earth and is composed of two sub-clusters weighing together approximately 3e15 Solar masses with a mass ratio of 3.6 and a high collision velocity of approximately 2500 km/s. Due to the highly energetic interaction of its two sub-clusters, it is also the hottest and most X-ray luminous galaxy cluster observed at this distance according to Menanteau et al. (2012).
Figure 1: A composite image showing El Gordo in X-ray light from NASA’s Chandra X-ray Observatory in blue, along with optical data from the European Southern Observatory’s Very Large Telescope (VLT) in red, green, and blue, and infrared emission from NASA’s Spitzer Space Telescope in red and orange. Notice the twin tails towards the upper right. Image from this source. Credits: X-ray: NASA/CXC/Rutgers/J. Hughes et al; Optical: ESO/VLT & SOAR/Rutgers/F. Menanteau; IR: NASA/JPL/Rutgers/F. Menanteau.
In our paper “A massive blow for ΛCDM – the high redshift, mass, and collision velocity of the interacting galaxy cluster El Gordo contradicts concordance cosmology” (Elena Asencio, Indranil Banik & Pavel Kroupa 2021), we conducted a rigorous analysis on how likely it is that this object exists according to ΛCDM cosmology.
In order to do this, we searched for cluster pairs that could be progenitors of the El Gordo cluster in the ΛCDM cosmological simulation developed by the Juropa Hubble Volume Simulation Project – also known as the Jubilee simulation. We searched for El Gordo progenitors instead of directly looking for an El Gordo-like object because extremely massive objects like El Gordo require very large simulation boxes for their number of analogues to be estimated reliably. Larger simulation boxes have lower resolution. Therefore, when searching for El Gordo analogues in the simulation, we cannot aim to match its morphological properties (e.g. the observed X-ray morphology), as these would require a high-resolution simulation with gas dynamics. Such simulations covering a sufficiently large volume cannot be run today even on the most powerful supercomputers (and are in any case not necessary for the present aim). But we can try to find cluster pairs whose configuration matches the initial configuration of El Gordo in terms of total mass, mass ratio and infall velocity. To determine the values of the parameters describing this initial configuration, we rely on the results of detailed hydrodynamical simulations. Zhang et al. (2015) performed a series of hydrodynamical simulations of two colliding galaxy clusters to find which set of initial conditions results in a merger with properties similar to El Gordo. Among the 123 simulations that they ran for different parameters, the model that best fit the observed properties of El Gordo had a total mass of 3.2e15 Solar masses, a mass ratio of 3.6, an infall velocity of 2500 km/s, and an impact parameter of 800 kpc. Models with a lower mass or a lower infall velocity could not reproduce the twin-tailed morphology of El Gordo (see Figure 1) and its high X-ray luminosity.
Using the Jubilee simulation, we found no analogues to El Gordo. We therefore relaxed the mass requirement and quantified how the number of El Gordo analogues (in terms of mass ratio and infall velocity) decreases with increasing mass. Since the Jubilee simulation was run for different cosmological epochs or redshifts, we were also able to determine how the number of El Gordo analogues (in terms of total mass, mass ratio, and infall velocity) decreases towards earlier epochs, i.e. larger redshifts. From these results, and accounting for the fact that the total volume of the Jubilee simulation is significantly larger than the space volume in which El Gordo was found, we obtained the probability of finding a cluster pair with a configuration similar to the expected pre-merger configuration of El Gordo, at a slightly earlier epoch than the one at which we observe El Gordo (see Figure 2).
Figure 2: Plot showing the frequency of analogues to the El Gordo progenitors for each position in the grid. The grid is constructed for a series of mass values on a log10 scale (y-axis) and the cosmic scale factor a (x-axis). The a values determine the cosmological epoch (for reference, a = 1 today, a = 0.535 at the epoch at which we observe El Gordo and a = 0.5 at the epoch at which we look for El Gordo progenitors; in general, the expansion factor a and the redshift z are related by a = 1/(1+z)). The probability of lying outside a contour (region of fixed colour) can be expressed in terms of the number of standard deviations (σ). The higher the number of standard deviations at a certain point in the grid, the further this point is from the expected value of the distribution. It is generally considered that if a model surpasses the 5σ threshold, then this model is falsified. In this plot, the point in the grid corresponding to the M̃ and a values of the El Gordo progenitors is marked with a red X; it corresponds to 6.16σ. In terms of probability, this is equivalent to saying that there is a 7.51e-10 chance of finding an interacting pair of El Gordo progenitors, or an even more extreme pair, in the ΛCDM model.
The chance of observing an El Gordo-like object in the ΛCDM cosmology is thus 7.51e-10, which corresponds to 6.16σ. (As a reminder: physicists accepted the existence of the Higgs boson once the experimental data reached a 5σ significance level. In general, when a phenomenon reaches a confidence of 5σ or more, it is formally taken to be real, 5σ corresponding to a chance of about one in 1.7 million of obtaining such an extreme result by chance if the phenomenon is not real.) This means that, assuming the ΛCDM model, we should not be observing El Gordo in the sky – but we do observe it. In fact, the tension between the ΛCDM model and the observations is even greater if one takes into account that El Gordo is not the only problematic object found in the sky.
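The quoted numbers are just the standard two-tailed Gaussian conversion between significance and probability, which can be checked with the Python standard library (a sketch; small differences against the quoted 7.51e-10 come from the rounding of 6.16σ):

```python
import math
from statistics import NormalDist

def sigma_to_p(sigma):
    """Two-tailed Gaussian probability of exceeding the given significance."""
    return math.erfc(sigma / math.sqrt(2.0))

def p_to_sigma(p):
    """Significance corresponding to a two-tailed probability p."""
    return -NormalDist().inv_cdf(p / 2.0)

p_el_gordo = sigma_to_p(6.16)  # ~7.3e-10
p_5sigma = sigma_to_p(5.0)     # ~5.7e-7, i.e. about one in 1.7 million
```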
Another well-known galaxy cluster that poses a potential problem to ΛCDM is the Bullet Cluster. It is also an interacting cluster composed of two subclusters colliding at high velocity (3000 km/s) which, according to the ΛCDM model, is unexpected at the distance at which it is observed (3.72 billion light-years).
Kraljic & Sarkar (2015) obtained a 10% probability of finding a Bullet Cluster analogue in the ΛCDM cosmology over the whole sky. To get a more realistic estimate of the Bullet Cluster probability, the sky area in which the Bullet Cluster was actually observed should be taken into account – using the whole-sky probability would imply that the Bullet Cluster was found in a full-sky survey, which is not the case. Taking into account that the survey in which the Bullet Cluster was found covered only 5.4% of the sky, the actual probability of observing a Bullet Cluster-like object is 0.54%, which makes it a 2.78σ outlier. Combining the probabilities of observing both the Bullet Cluster and El Gordo raises the tension to 6.43σ.
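The sky-fraction correction is a simple multiplication, and the two outliers can then be combined. The sketch below uses Fisher's method for combining independent p-values, which is my assumption here (the paper's exact combination procedure may differ slightly), but it reproduces the quoted numbers to within rounding:

```python
import math
from statistics import NormalDist

def p_to_sigma(p):
    """Significance for a two-tailed probability p."""
    return -NormalDist().inv_cdf(p / 2.0)

# Whole-sky probability scaled to the 5.4% sky coverage of the survey:
p_bullet = 0.10 * 0.054              # 0.0054, i.e. a ~2.78 sigma outlier

# Fisher's method: chi2 = -2 * sum(ln p_i), 2 degrees of freedom per p-value.
p_el_gordo = 7.51e-10
chi2 = -2.0 * (math.log(p_el_gordo) + math.log(p_bullet))
# Closed-form survival function of chi-squared with 4 degrees of freedom:
p_combined = math.exp(-chi2 / 2.0) * (1.0 + chi2 / 2.0)
sigma_combined = p_to_sigma(p_combined)  # ~6.4 sigma
```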
We also considered the possibility that the problem lies not in the ΛCDM model but in the Jubilee cosmological simulation, in the Zhang et al. (2015) hydrodynamical simulations, or in our statistical analysis. According to Watson et al. (2014), the Jubilee simulation has so far been shown to work correctly in accordance with the ΛCDM cosmological model for which it was designed, so we have no reason to believe that there are any problems with the Jubilee simulation in that regard. We also found many lower mass analogues to El Gordo, so numerically our results should be quite sound and allow an accurate extrapolation up to the El Gordo mass. The results of Zhang et al. (2015) for the initial configuration of El Gordo are backed up by previous independent studies of El Gordo: the weak lensing analysis by Jee et al. (2014) confirms the mass estimate of 3e15 Solar masses, and the simulations by Donnert (2014) and Molnar & Broadhurst (2015) agree on an infall velocity of 2250 – 2600 km/s. Besides this, Zhang et al. (2015) had already checked that lower values for the mass and infall velocity – which would be easier to explain in ΛCDM – were unable to reproduce the morphology of El Gordo. Regarding our own analysis, in the paper we also performed the statistical analysis with a different method to check the consistency of our results. The results were indeed consistent, so we consider our methods to be reliable. The more conservative and detailed method is shown in Figure 2.
Since the ΛCDM model cannot account for the existence of extreme objects like El Gordo or the Bullet Cluster, some authors have tested other cosmological models to check how well they work in this respect. Katz et al. (2013) searched for El Gordo analogues in a simulation that adopted a νHDM cosmological model. The νHDM model has the same standard hot Big Bang, primordial nucleosynthesis, CMB and expansion history as the ΛCDM model, but assumes the extended gravity law devised by Milgrom (MOND) and the presence of undetected mass in galaxy clusters composed of particles, like sterile neutrinos, that interact only through gravity (see the post “Solving both crises in cosmology: the KBC-void and the Hubble-Tension” by Moritz Haslbauer for a more detailed explanation of the νHDM model). Using this model, Katz et al. found that about one El Gordo analogue was expected in their simulation box, while they could not find any analogues when they performed a simulation of similar characteristics with the ΛCDM model. Accounting for the fact that the volume of the survey in which El Gordo was found differs slightly from the volume of the simulation used by Katz et al. (2013), we determined that the number of El Gordo analogues we expect to observe in a νHDM model is 1.16. Therefore, the νHDM model gets the right order of magnitude for the frequency of El Gordo-like objects. The reason is that the growth of structure is enhanced in MONDian gravity, so very massive objects like El Gordo arise much more naturally at high redshift in models that assume this type of gravity.
But then, if smaller structures formed first and larger structures formed afterwards, how is it possible that we do not observe more super-massive objects like El Gordo at closer distances? The fact that structures form more efficiently in MONDian gravity also implies that larger and deeper voids will be generated with this gravity law. This prediction is in agreement with the results of Keenan, Barger & Cowie (2013), who observationally found that the local Universe is immersed in an underdensity bubble (the KBC void) with a radius of about one billion light years. For this reason, it is not expected that very massive objects will be able to form in the nearby regions of our Universe, as these regions will have a low density with respect to the mean density of the global Universe (see the post “Solving both crises in cosmology: the KBC-void and the Hubble-Tension” by Moritz Haslbauer for a more detailed explanation of the KBC void). Therefore, the νHDM model is capable of explaining the presence of super-massive objects like El Gordo at distant epochs and is also able to explain the absence of objects like this in the local Universe.
We conclude that El Gordo falsifies ΛCDM at 6.16σ (6.43σ if we also take into account the Bullet Cluster). We propose the νHDM cosmological model as a possible explanation for the formation of extreme objects like El Gordo or the Bullet Cluster at very early cosmological epochs. Moreover, the νHDM model also explains other observations that cannot be accounted for by the ΛCDM model, such as the existence of the KBC void, thereby automatically resolving the Hubble tension and accounting for the lack of super-massive galaxy clusters like El Gordo in the local Universe. Since the νHDM cosmological model also accounts for the observed stellar dynamics in the smallest dwarf and most massive galaxies, the rotating planar distributions of satellite galaxies, and many other observed properties of galaxies and large scale structure, it provides a far superior framework to the (in any case falsified) ΛCDM model for understanding the Universe.
In The Dark Matter Crisis by Elena Asencio. A listing of contents of all contributions is available here.
(Guest post by Dr. Jörg Dabringhausen, Charles University in Prague, Dec. 18th 2020)
The hypothesis of dark matter in galaxies was originally motivated by observations. Zwicky (1933) first noticed that galaxies in the observed galaxy clusters usually move too fast to remain bound, if the luminous matter is all there is in galaxies. “Luminous matter” here meant essentially all stars. Stars are well understood in terms of how much mass corresponds to a given light output, or luminosity. But if the light emitted by the galaxies in a galaxy cluster is translated into a stellar population similar to that of the Milky Way, this stellar population falls short by a factor of a couple of hundred of the mass needed to keep the galaxies bound to the cluster. Thus, the galaxy clusters should have dispersed billions of years ago, and today we would be surrounded by a uniform distribution of galaxies. But that is not what we see: galaxies are still in galaxy clusters today.
But the problem was not restricted to galaxy clusters. Rubin & Ford (1970) found that the Andromeda Galaxy rotates so fast that its stars would disperse if only the standard gravity of the visible matter held them together. And the Andromeda Galaxy turned out to be the rule rather than the exception: all spiral galaxies studied later on showed similar trends (for example Rubin et al. 1980). So not only would galaxy clusters disperse, but also the (spiral) galaxies themselves. It is like the riders (the stars) on a merry-go-round (the galaxy): forces keep the riders moving in circles around the merry-go-round, and if those forces for some reason weaken or cease to exist (for example because the link between a rider and the merry-go-round breaks), the riders move away from it. But again, this is against our observations: there are large spiral galaxies everywhere around us (including our Milky Way), and the stars in them move on stable orbits.
In general, the problem of missing mass in galaxies is nowadays omnipresent. It arises because there are different ways to estimate masses in astronomy. One way is to make educated guesses about the age and composition of the stellar population of a galaxy, and calculate from there how many units of mass it should have per unit of luminosity. Astronomers call this a stellar mass estimate. Another way is to measure the radius of a galaxy and how fast its stars move on average, make some educated guesses about the dynamics of the galaxy, and calculate the ratio of mass to light from there. Astronomers call this a dynamical mass estimate. Ideally, the stellar and dynamical masses would agree for the same galaxy, because the galaxy has only one real mass (within the uncertainties, of course). In practice, however, the dynamical mass is usually larger than the stellar mass, by a factor ranging from slightly above one to about 10000. Apparently, the error lies somewhere in the guesswork leading to the two different mass estimates. Astronomers have tried to solve the problem of the missing visible matter in two general ways: either by adding more matter, so that the matter in total produces the observed gravitational force, or by changing the laws of gravity themselves and saying that the visible matter is all the matter there is in galaxies.
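To make the two estimates concrete, here is a minimal sketch of a dynamical mass estimate. The prefactor and all input numbers are illustrative assumptions, not values for any particular galaxy:

```python
G = 4.301e-6  # Newton's constant in kpc * (km/s)**2 / Msun

def dynamical_mass(sigma_kms, r_e_kpc, k=4.0):
    """Dispersion-based mass estimate M_dyn ~ k * sigma**2 * R_e / G.
    The dimensionless prefactor k (assumed to be 4 here) depends on the
    structure and orbit distribution of the galaxy."""
    return k * sigma_kms**2 * r_e_kpc / G

# Assumed dwarf galaxy: velocity dispersion 10 km/s, half-light radius
# 0.3 kpc, and a stellar mass estimate of 1e6 Msun from an assumed
# stellar mass-to-light ratio.
m_dyn = dynamical_mass(10.0, 0.3)  # ~2.8e7 Msun
missing_mass_factor = m_dyn / 1e6  # dynamical estimate far exceeds stellar
```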
Adding more matter is mathematically the simpler solution, which is why many people favoured it at first. The gravitational force is then linear in the critical range of values, that is weak to moderate gravity. This means that if there is twice the matter, there is also twice the gravitational force, independent of the total amount of matter. Note that from this point of view, the type of matter does not matter, as long as it is invisible, or nearly so. Also the Earth is nearly invisible next to the Sun, even though they both consist basically of the same kind of matter (that is atoms, not something exotic); it is only a matter of temperature that makes the Sun brighter than the Earth. Indeed, there was a theory that the missing matter consists of earth-like bodies (that is free-floating planets and brown dwarfs), until the required quantity of such bodies was observationally excluded. More and more alternatives for the additional matter were excluded as well, leading to today’s Lambda-Cold-Dark-Matter model (LCDM-model) for this class of models. However, the LCDM-model requires exotic dark matter beyond the standard model of particle physics. This kind of matter has not been discovered yet, not even at the largest particle accelerators such as those at CERN. Nevertheless, this first group of physicists still believes the LCDM-model to be true in general (even though some changes may be needed) and therefore continues to search for the still hypothetical dark-matter particle.
The second group of physicists would rather correct the law of gravity than add a hypothetical particle beyond the standard model of particle physics. Whichever way you go, you have to extend a theory which has been extremely successful so far: you either give up the standard model of particle physics in order to save the LCDM-model, or you give up general relativity, with Newtonian gravity as its limiting case for weak and moderate gravity. This new theory of gravity is, unlike Newtonian gravity, not linear in the critical range. This means that twice the matter does not necessarily mean twice the gravity when the gravitational force is weak enough. This has a funny consequence, in contrast to our daily-life experience: the same amount of matter suddenly appears to gravitate more strongly when it is spread out thinly enough. Lüghausen et al. (2015) therefore called it “phantom dark matter”, because this dark matter is a mirage that disappears when the real matter is put close enough together. (Of course, inside the Solar System the matter must on average be dense enough for the gravitational force to be linear – otherwise we would not be able to send spacecraft with high precision to other planets using Newtonian gravity.) This second set of theories leads to Modified Newtonian Dynamics or Milgromian Dynamics (MOND).
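The non-linearity can be illustrated with Milgrom's acceleration law. The sketch below uses the so-called “simple” interpolating function, one common choice in the literature; the function choice and the input numbers are illustrative assumptions:

```python
import math

A0 = 1.2e-10  # Milgrom's acceleration constant in m/s**2

def mond_acceleration(g_newton):
    """True acceleration g = nu(g_N / a0) * g_N with the 'simple'
    interpolating function nu(y) = 1/2 + sqrt(1/4 + 1/y)."""
    y = g_newton / A0
    return g_newton * (0.5 + math.sqrt(0.25 + 1.0 / y))

# Doubling the (Newtonian) source doubles g_N, but deep in the weak-field
# regime the MOND acceleration only grows by about sqrt(2):
g1 = mond_acceleration(1e-12)
g2 = mond_acceleration(2e-12)
ratio = g2 / g1  # ~1.44, not 2: gravity is non-linear in this regime
```

For accelerations far above A0 (as in the Solar System), nu approaches 1 and the law reduces to the ordinary linear Newtonian behaviour.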
Here, I will concentrate on the “missing” matter of elliptical galaxies – “missing” in the sense that a stellar perspective usually yields less matter than a dynamical perspective on the same galaxy. Are there alternatives to adding exotic dark matter to the visible matter, and thus supporting the second group of physicists?
First of all, let’s start with the question of what an elliptical galaxy is. A very short answer would be that they are more or less like spiral galaxies, but without the disks that contain the spirals. So only the central bulge is there, and hence they are called elliptical because of their elliptical shape. That central bulge can, however, be very massive: the most massive elliptical galaxies are even more massive than the most massive spiral galaxies (bulge and disk of the spirals together)!
Going a bit more into the details, elliptical galaxies show some diversity in their mass and radius. I will distinguish three different kinds of objects, namely ultra-compact dwarf galaxies (UCDs), conventional elliptical galaxies (Es) and dwarf spheroidal galaxies (dSphs), and discuss the invisible matter in each of them. We will see that the invisible matter is just a mirage in some of them, while others really do contain more matter than originally accounted for – but not the exotic dark matter predicted by the LCDM-model.
1.) Ultra-compact dwarf galaxies (UCDs)
UCDs (Figures 1 and 2) stand a little apart from the other elliptical galaxies, and some doubt that some of them really are galaxies, rather than just very massive star clusters. The reason lies in their compactness, which makes them look much like very massive globular clusters. However, their compactness also places them deep in the Newtonian regime, so there is literally no room for the phantom dark matter of MOND. Yet it was claimed that they may contain dark matter (see for example Drinkwater et al. 2004 and Hasegan et al. 2005).
The reason is that at the turn of the millennium, it was popular among astronomers to regard the stellar initial mass function (IMF) as universal (see for example Kroupa 2001). This means that all stellar systems formed with a fixed ratio of massive stars to light stars, and only the age of the stars and their chemical composition may change from one stellar system to another. This is not to say that people back then were unaware of the influence that, for example, different temperatures and chemical compositions have on the process of star formation. Rather, they did look for different IMFs, but found no supportable evidence for them in resolved stellar populations. However, when modeling a UCD (or any other kind of stellar system) with the universal IMF, there is a maximum ratio between stellar mass and stellar light that can be reached for any reasonable stellar age and chemical composition. Nevertheless, there are many UCDs above that limit, and Dabringhausen et al. (2008) showed that this is not just a statistical uncertainty. So there must be a reason for this unseen mass, and the exotic dark matter that comes with the LCDM-model was one proposed explanation.
However, Murray (2009) voiced serious doubts that the LCDM-model could accommodate enough exotic dark matter inside the tiny radii of UCDs. This is even though the dark-matter halos around galaxies can be very massive in the LCDM-model. The LCDM-model then also predicts that these halos are very extended, so the density (that is, mass per volume) of the dark-matter halo is very low. The total mass of the dark-matter halo may thus be gigantic, but the fraction of its mass inside a UCD would be tiny because of the small radius of the UCD, and this tiny amount of dark matter would not much influence the internal dynamics of the UCD. Thus, in short, it is not the exotic dark matter of the LCDM-model that increases the mass of the UCDs. It is then likely “conventional” matter, for example from a different IMF. The term “universal IMF” is then misleading, because the IMF is in fact not universal; “standard IMF” or “canonical IMF” are pretty good replacements. After all, this IMF pretty much seems to be the standard in our immediate surroundings (in an astronomical sense), that is, regions whose mixture of chemical elements is like that of the Sun and which do not form many stars at present.
In UCDs, the conditions under which star formation took place were probably far away from those we know to produce the standard IMF. Thus, Dabringhausen et al. (2009) proposed that the UCDs may have formed with an IMF that had a different shape than the standard IMF, namely one that formed more massive stars. (IMFs that have more massive stars than the standard IMF prescribes are called “top-heavy”.) These massive stars are known to be short-lived, and after they have burned all their nuclear fuel, they leave remnants which produce little or no light compared to their mass. These remnants exist of course in any aged stellar population, but if the IMF once had more massive stars, it has more stellar remnants now. The stellar remnants thus increase the ratio between mass and light, and make a UCD “darker”. Dabringhausen et al. (2012) also tried an alternative way to detect those additional stellar remnants, by looking for systems where a stellar remnant accretes matter from a companion star. Those stellar systems become distinctive X-ray sources, and are thus countable. They compared the numbers they found in UCDs to the numbers they found in globular clusters (that is, stellar systems more or less like UCDs, but less massive), and they found more X-ray sources in UCDs than they expected. This as well could indicate that there are more high-mass stars per low-mass star in UCDs. Based in part on these works, Marks et al. (2012) proposed an IMF that changes with the mass of the stellar system (that is, from globular clusters to UCDs) and with the chemical composition. Thus, they gave up the notion of the universal IMF, but explained changes in the ratio between mass and light in UCDs with changes in their IMFs.
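The effect of a top-heavy IMF on the remnant content can be sketched with a two-part power-law IMF. The slopes, mass limits and the 8 Msun remnant-progenitor threshold below are illustrative assumptions, loosely following the canonical form; the point is only the trend:

```python
import math

def mass_fraction_in_massive_stars(m_cut=8.0, alpha_high=2.3,
                                   m_min=0.08, m_break=0.5, m_max=100.0):
    """Fraction of the initial stellar mass born in stars above m_cut
    (a rough threshold for leaving a compact remnant), for an IMF
    xi(m) ~ m**-1.3 below m_break and m**-alpha_high above it."""
    def segment_mass(a, b, alpha, k):
        # integral of m * k * m**-alpha dm from a to b
        p = 2.0 - alpha
        if abs(p) < 1e-12:
            return k * math.log(b / a)
        return k * (b**p - a**p) / p

    k_low = 1.0                                    # arbitrary normalisation
    k_high = k_low * m_break**(alpha_high - 1.3)   # continuity at m_break
    below = (segment_mass(m_min, m_break, 1.3, k_low)
             + segment_mass(m_break, m_cut, alpha_high, k_high))
    above = segment_mass(m_cut, m_max, alpha_high, k_high)
    return above / (below + above)

f_canonical = mass_fraction_in_massive_stars()                # ~0.21
f_top_heavy = mass_fraction_in_massive_stars(alpha_high=2.0)  # ~0.40
```

Flattening the assumed high-mass slope from 2.3 to 2.0 roughly doubles the mass fraction locked up in remnant progenitors, which later darkens the stellar population.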
Another way to increase the mass of UCDs, but not their light emission, is through central massive black holes. A black hole holds so much mass that nothing coming too close to it can escape, not even light. Black holes are a prediction of general relativity and are known to exist. For example, very massive stars become black holes when all their nuclear fuel is burned and the pressure from stellar radiation no longer opposes the pull of gravity. Or, as another example, there is a massive black hole at the center of the Milky Way, and of many other galaxies as well, even though it is less clear than for massive stars how those came to be. (This year’s Nobel Prize for physics was about the detection of this central black hole.) But if massive black holes are common at the centers of galaxies, why can’t UCDs have them as well? However, a massive central black hole is easy to overlook at the distance of known UCDs. That is because at such distances the stars look like they are almost located at a single point in space, whereas the mass of the central massive black hole is precisely located at this single point. Thus, seen from Earth, there is not much difference in the distribution of matter, while the central massive black hole would still add its mass to that of the stellar population. Therefore, only careful observations with the telescopes with the best optical resolution have a chance of detecting them. Nevertheless, massive central black holes were indeed proposed as a solution to the problem of the missing mass in UCDs, for example by Mieske et al. (2013) and Janz et al. (2015). Seth et al. (2014) then observationally confirmed a massive central black hole in a UCD for the first time. Later, massive black holes were also discovered in other UCDs, see for example Afanasiev et al. (2018).
Naturally, a mixture of non-standard IMFs and central massive black holes could also explain why UCDs are so massive for their light. However, what is important here is that there are less far-fetched alternatives to exotic dark matter in UCDs.
2.) Conventional elliptical galaxies
The conventional elliptical galaxies are not only usually more massive than the UCDs, but also far more extended. What I mean by “conventional” is that they were among the first galaxies to be identified as galaxies – this was in the 1920s, when people like Hubble first discovered that some “nebulae” are not just gas clouds inside the Milky Way, but distant stellar islands just like the Milky Way. It is unclear exactly what mass is required for an elliptical galaxy to be conventional, perhaps 10^8 Solar masses or so. This uncertainty arises because there is an extension of elliptical galaxies to even lower masses, which are however not (compact, star-cluster-like) UCDs, but (extended, galaxy-like) dwarf Spheroidal galaxies (dSphs). However, there are some peculiarities of dSphs regarding dark matter and its seeming existence, and therefore I will treat them in a section of their own. What I will not do, though, is distinguish between dwarf elliptical galaxies and elliptical galaxies proper, because this distinction is merely historical in my eyes (see also Ferguson & Binggeli 1994 about this). The most massive of all galaxies (about 10^12 Solar masses) are conventional elliptical galaxies, too.
So, how much exotic dark matter do elliptical galaxies contain, if any? Cappellari et al. (2006), for instance, found that the conventional elliptical galaxies they observed had on average 30 percent too much mass for the IMF they assumed. They suggested that the missing mass could be the dark matter predicted by the LCDM-model. However, for this finding, they also assumed that the standard IMF is universal for all star-forming regions. Tortora et al. (2014) later tried to explain the discrepancy without exotic dark matter, but with MOND. They also failed with a universal IMF, but not if the IMF changed with the mass of the galaxy. So, the real question is: Can the IMF change with galaxy mass, or is the standard IMF also the universal IMF?
To answer this question, let’s look at star clusters, which are the building blocks of galaxies. Could a star cluster have a star more massive than the cluster itself? Of course not. In fact, Weidner et al. (2010) found that the mass of the most massive star of a star cluster is much lower still. An impressive example of this was observed by Hsu et al. (2012): They compared a large star cluster of some mass with several adjacent small star clusters of the same mass in total. All the other parameters, like age and chemical composition, are the same; only how the total mass of the stars is bundled differs. However, the massive star cluster has heavier stars than the several small star clusters. This would not be a problem by itself if the overall star formation were the same in all galaxies, that is, if all galaxies formed the same number of light star clusters per massive star cluster. But this is not the case. Weidner et al. (2004) found that the mass of the most massive cluster that can form in a galaxy depends on its star formation rate, that is, how many stars form in a galaxy per unit time. Low-mass elliptical galaxies have low star formation rates and massive elliptical galaxies have high star formation rates. Thus, low-mass conventional elliptical galaxies have a lack of massive stars. This already is an argument against a universal IMF in all star clusters and in all galaxies.
The galaxies with the highest star formation rates (that is, also the most massive galaxies) also produce star clusters in the mass range of globular clusters and UCDs. Now, let’s assume that these most massive star clusters are in fact UCDs and that these UCDs have IMFs with more massive stars per low-mass star than “normal” star clusters (see the section about UCDs). Then the real IMF deviates from the once-thought universal IMF not only in low-mass star clusters (by not having any massive stars), but also in high-mass star clusters (by having too many massive stars). Now, remember what we have said about IMFs with more massive stars than the standard IMF: when they grow old, they produce less light per unit mass than the standard IMF. Or, when a certain amount of light is observed, a stellar population with more massive stars and a certain age must have more mass to produce it. The stellar populations of elliptical galaxies are usually so old that the massive stars (which are short-lived) have already evolved into dark stellar remnants, and only the light stars continue to shine. So, if the IMF varies with the star formation rate of the galaxies as is assumed nowadays (see for example Kroupa & Weidner 2003 or Fontanot et al. 2017), then the low-mass elliptical galaxies have a little less mass than assumed with the standard IMF for their light, and the massive ellipticals have a little more mass than assumed with the standard IMF. This goes up to about twice the mass for the most massive conventional elliptical galaxies, and the point where the mass estimate is equal to that for the standard IMF is at approximately 10^9 Solar masses. Thus, for most conventional elliptical galaxies, the mass estimates are above the mass estimates for the standard IMF, and the “missing” mass is about the mass detected by Cappellari et al. (2006) based on the standard IMF. (See also Dabringhausen et al. 2016 if you want to follow the brightness of elliptical galaxies with their mass, and Dabringhausen 2019 if you wish to go deeper into elliptical galaxies and non-standard IMFs.) Thus, again as with UCDs, there is an alternative, more down-to-earth explanation for the excess mass of those elliptical galaxies.
3.) Dwarf spheroidal galaxies (dSphs)
Dwarf spheroidal galaxies (dSphs, Figure 3) are in a way the low-mass extension of the “conventional” elliptical galaxies, because in a plot of their radius against their mass, they continue the line established by the conventional elliptical galaxies to lower masses. However, the brightest ones are like UCDs in light and mass, but far more extended than UCDs. In other words, there is a gap in radius between dSphs and UCDs (see Gilmore et al. 2007), in contrast to conventional elliptical galaxies and dSphs.
If it is true that dSphs are in fact very low-mass conventional elliptical galaxies, then we would expect them to be about 20 percent or so lighter than expected based on their light with a standard IMF. But in fact, they are way more massive. Just to get a feeling for the numbers we are dealing with: Let’s say the standard IMF would predict a ratio of mass to light of 2 for a dSph; the corrected IMF would then give 1.5, but the measured value is 2000 (all numbers in Solar units). So, how can we be wrong by a factor of up to approximately 1000 (even though in many cases less)?
This is where MOND finally kicks in, because the visible matter in dSphs is actually thin enough, in contrast to UCDs and conventional elliptical galaxies. MOND can raise the ratio of the mass of a dSph over its light from values of a few (that is, a stellar population in Newtonian dynamics) to values of up to about 100. This fits the dynamical values of many dSphs, which would contain plenty of “dark” matter in Newtonian dynamics. Thus, in MOND, their dark matter is actually phantom dark matter – it would disappear if the matter were denser. Or, in other words, the difference between stellar and dynamical mass estimates disappears for those dSphs, and all is well. The precise value for a given dSph depends on which value the mass-to-light ratio of the stellar population would have according to Newtonian dynamics and on how many stars are distributed over which volume, that is, the density of visible matter. Estimates for the mass-to-light ratios in Newtonian and MONDian dynamics for a number of dSphs are given, for example, in Dabringhausen et al. (2016).
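As a rough sketch of the size of this effect (my illustration, not a calculation from the cited papers): in the deep-MOND regime the true gravity is g = sqrt(g_N · a0), so an observer who insists on Newtonian dynamics overestimates the mass by a factor sqrt(a0/g_N). The dSph mass and radius below are round illustrative numbers, not measurements.

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
A0 = 1.2e-10      # MOND acceleration scale [m/s^2]
M_SUN = 1.989e30  # solar mass [kg]
PC = 3.086e16     # parsec [m]

def newtonian_overestimate(m_stars_msun, r_pc):
    """Factor by which a Newtonian observer overestimates the mass of a
    deep-MOND system: g_true = sqrt(g_N * a0), so
    M_dyn / M_true = g_true / g_N = sqrt(a0 / g_N)."""
    g_n = G * m_stars_msun * M_SUN / (r_pc * PC) ** 2
    return math.sqrt(A0 / g_n)

# Illustrative dSph: 1e6 solar masses of stars spread over ~300 pc.
boost = newtonian_overestimate(1e6, 300.0)
# A stellar M/L of ~2 then masquerades as a dynamical M/L of ~2 * boost.
```

For a denser system like a UCD, g_N exceeds a0 and the boost disappears, which is exactly why MOND helps for dSphs but not for UCDs.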
But it is also visible in Dabringhausen et al. (2016) that even MONDian dynamics cannot explain the mass-to-light ratios of the few dSphs that have a mass-to-light ratio far beyond 100. So, have we finally found a failure of MOND? Not necessarily. So far, we have always implicitly assumed that the galaxies are in virial equilibrium. What this means is, for instance, the absence of tides caused by other disturbing sources of gravity. The tides on Earth are the best-known example, even though Earth is dense enough to be near tidal equilibrium, given the gravitational forces from the Moon and the Sun. We only see them so well because, in this case, the tides are happening right under our noses. Ultimately, there are tides on Earth because the Earth is an extended body. Thus, the gravitational force from the Moon pulls on the near side of the Earth a bit more strongly than on the far side, and the Earth is stretched a bit by the tides. There are ebb and flow of the oceans on Earth because the Earth also rotates, while the tides are always directed towards the Moon. There are of course also other sources of gravity which cause tides on Earth (the Sun, for instance), but the Moon is the strongest.
UCDs and conventional elliptical galaxies are also dense enough to be nearly unaffected by neighboring galaxies, the potential source of tides in them. But the internal gravity acting on the thin matter of dSphs is comparatively weak, so that they are easy to stretch by the outside forces of other galaxies. The tidal forces thus form gigantic tidal “waves” consisting of stars. Every encounter with another galaxy pulls on the galaxy, because the gravitational force is stronger on the near side of the encounter than on the far side. This heats the galaxy up, meaning that the galaxy is pulled out of virial equilibrium by the encounter and that the average velocities of the stars increase with encounters. Finally, the tidal forces from encounters with other galaxies make the galaxy break apart.
Now, what would an observer from Earth see? The observer could for example see a dSph that has been heated up by a recent encounter with another galaxy, and is thus out of virial equilibrium. Or the dSph has found its virial equilibrium again, but at the cost of stars which have left the dSph and are now moving faster or slower than the stars which are still bound to the galaxy. But the observer could be ignorant of this fact, and assume that all the stars (s)he sees are bound to the galaxy. Or the dSph has already dissolved completely, but the stars still move along on similar orbits, even though they are not bound to each other any more. The radius within which the stars are found is then much larger than it would be if the stars were bound to each other. If the observer then wrongly assumes the dSph to be in virial equilibrium, all these effects increase the dynamical mass estimate (s)he makes for the galaxy (not its real mass!). And those effects could indeed raise the dynamical mass estimate by the required factor. For a discussion of tidal heating of dSphs under Newtonian gravity, see for example Kroupa (1997). McGaugh and Wolf (2010) made a similar study with MOND. Notably, they found for observed dSphs surrounding the Milky Way that the more susceptible a dSph is to tidal forces, the more likely it is to be outside virial equilibrium in MOND. For an interesting theoretical discussion of how a dissolving star cluster in a tidal field could be mistaken for a much more massive (but evidently not more luminous) dSph, see Dominguez et al. (2016).
However, the dSphs which are far enough out of virial equilibrium to increase the dynamically estimated mass-to-light ratio by a factor of a few or more compared to the real mass could just be a few dSphs out of a larger sample. For the majority, the effect would simply be too weak right now, although their time to dissolve will also come. In other words, this scenario is highly improbable if gravity were Newtonian, because then all dSphs around the Milky Way would have to be in dissolution. However, if gravity is MONDian, only a few would be near their dissolution, while most would be in or near virial equilibrium – see Dabringhausen et al. (2016).
There is also another argument against dark matter in dSphs. Galaxies are usually not isolated, but surrounded by other galaxies. Together, these galaxies form gravitationally bound galaxy clusters. But how do these galaxy clusters form? According to the LCDM-model, this happens through the infall of galaxies from all directions. They can come, the dSphs included, with any amount of exotic dark matter into a galaxy cluster. We will call those galaxies “primordial galaxies” from now on, because there is also another way to form galaxies that look like dSphs to an observer. This other way is through close encounters of already existing galaxies. In such encounters, matter is pulled away from the existing galaxies by gravity through tides (Figures 4 and 5), and new small galaxies can form from this matter. We know that this process happens. Otherwise, the elongated streaks of matter of, for instance, the Antennae Galaxies and the Tadpole Galaxy would be difficult to explain. Simulations of interacting galaxies, which are set up to reproduce situations like in the Antennae Galaxies, also show streaks of matter like the ones observed (see for example Bournaud & Duc 2006 or Wetzstein et al. 2007). They are called tidal tails for obvious reasons. The Tadpole Galaxy even has new small star-forming regions in its tidal tail, which may become dSphs. If aged enough, these dwarf galaxies may be difficult to distinguish from primordial galaxies of the same mass, though (see Dabringhausen & Kroupa 2013). However, in the following, we call galaxies of tidal origin “tidal dwarf galaxies”, in order to distinguish them from primordial galaxies. The tidal dwarf galaxies cannot contain the exotic dark matter of the LCDM-model, even if their progenitor galaxies did. The reason is that all matter that ends up in a tidal dwarf galaxy, whether visible or not, must have occupied similar regions of space with similar velocities also before the encounter of the existing galaxies.
The total amount of the exotic dark matter may be huge, but most dark matter had other velocities and other locations, and therefore does not qualify to be bound to the tidal dwarf galaxy. After all, simulations of galaxy encounters by, for example, Barnes & Hernquist (1992) show that most visible matter that is to become a tidal dwarf galaxy comes from the disks of spiral galaxies. This visible matter not only forms a thin disk, as opposed to the presumed dark matter halo, but it also moves with the same velocity in the same direction, again in contrast to the presumed dark matter halo. Also, the tidal dwarf galaxies that form in an encounter of galaxies can only move in the plane of the encounter (because of the conservation of angular momentum). Thus, there is an easy way to distinguish two kinds of dSphs in the LCDM-model: those which move in a plane and those which cannot be assigned to a plane. Those in a plane are very likely tidal dwarf galaxies and cannot have any exotic dark matter. Those, however, which cannot be assigned to a plane might also be primordial and can thus contain dark matter (see for example Kroupa et al. 2010). Now, what do observations tell us about the pattern of motion of the dSphs? For the Milky Way, it was shown by Lynden-Bell (1976) and by Kroupa et al. (2005) that the then-known dSphs are most likely arranged in a plane. Later, additional objects and also velocities were added, but the long-lasting disk of satellites was always confirmed (see for example Pawlowski et al. 2012 and Pawlowski & Kroupa 2020). According to some proponents of the LCDM-model, this was just an exception, while other, supposedly more normal galaxies would have dSphs with random motions around them. However, it was then shown that the Andromeda Galaxy also has a disk of dSphs around it (for example Ibata et al. 2013), and Centaurus A as well (Mueller et al. 2018).
In short, disks of satellites around major galaxies are more the rule than the exception; see for example Ibata et al. (2014) for an attempt at a census. Thus, galaxies in these planes must manage their high dynamical mass-to-light ratios without exotic dark matter, despite numerous claims to the contrary from the LCDM-community. If MOND is the correct description of gravitation, then the discrepancy between the large gravitating (phantom) masses of the satellite galaxies and their small masses in stars is beautifully resolved.
I have discussed the reasons for “dark” matter in elliptical galaxies, which come ultimately from the comparison of different mass estimates. Also, some assumptions which were made for lack of better knowledge have by now been proven wrong. This concerns the theory of a universal IMF in all star-forming regions, which led to a mismatch between the mass estimates from stellar populations and from the dynamics in UCDs and conventional elliptical galaxies. If the “one-size-fits-all” IMF is replaced by a more elaborate picture of the IMF, those differences disappear easily without invoking exotic dark matter or MOND. For dSphs, the situation is different. They cannot have exotic dark matter because it could not bind to them, but neither can their extreme mass-to-light ratios be explained with different stellar populations. Here, MOND and tidal fields offer an answer. Thus, adding more exotic dark matter to all galaxies until their dynamics are fitted might appear the simpler solution at first sight, but it is not necessarily the correct one. The seemingly more complicated solution without exotic dark matter stands the test better here.
In The Dark Matter Crisis by Joerg Dabringhausen. A listing of contents of all contributions is available here.
The following is a guest post by Indranil Banik. Indranil is a PhD student at the University of Saint Andrews, part of the Scottish Universities’ Physics Alliance. He was born in Kolkata, India and moved to the UK with his parents a few years later. Indranil works on conducting tests to distinguish between standard and modified gravity, especially by considering the Local Group. Before starting his PhD in autumn 2014, he obtained an undergraduate and a Masters degree from the University of Cambridge with top grades. There, he worked on understanding the dynamics of ice shelves, on a Masters project on the thick disk of the Milky Way, and on a few other problems.
I recently won the Duncombe Prize from the American Astronomical Society’s Division on Dynamical Astronomy for a detailed investigation into the Local Group timing argument. This was to present a recently accepted scientific publication of mine (link at bottom of article) at their annual conference in Nashville, Tennessee.
The timing argument takes advantage of the fact that the Universe has a finite age of just under 14 billion years. Thus, everything we see must have been very close together at that time, which we call the Big Bang. Due to the finite speed of light, by looking very far away, we are able to look back in time. In this way, we observe that, shortly after the Big Bang, the Universe was uniform to about one part in 100,000. Thus, we know that the expansion of the Universe was very nearly homogeneous at early times. This means that any two objects were moving away from each other with a speed almost proportional to the distance between them. This is called the Hubble law.
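In symbols, the Hubble law is v = H0 · d. A minimal illustration (H0 = 70 km/s/Mpc is a round value I am assuming here, not a number from the paper):

```python
H0 = 70.0  # Hubble constant [km/s per Mpc], a round illustrative value

def hubble_velocity(d_mpc):
    """Recession speed [km/s] of an object at distance d_mpc [Mpc],
    assuming pure homogeneous expansion (v = H0 * d)."""
    return H0 * d_mpc

# An object 3 Mpc away would recede at 210 km/s
# if nothing but the homogeneous expansion acted on it.
v = hubble_velocity(3.0)  # -> 210.0
```

The whole point of the timing argument is the departure from this simple law on small scales, which the next paragraphs describe.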
The Hubble law also works today, but only on large scales. On small scales, the expansion of the Universe is no longer homogeneous because gravity has had a long time to change the velocities of objects. As a result, our galaxy (the Milky Way, MW for short) and its nearest major galaxy, Andromeda (or M31) are currently approaching each other. This implies that there must have been a certain amount of gravitational pull between the MW and M31.
Although this has been quantified carefully for nearly 60 years, my contribution involves analysing the effects of the MW and M31 on the rest of the Local Group (LG), the region of the Universe where gravity from these objects dominates (out to about 10 million light years from Earth). Recently, a large number of LG dwarf galaxies have been discovered or had their velocity measured for the first time (McConnachie, 2012). We took advantage of this using a careful analysis.
We treated the MW and M31 as two separate masses and found a trajectory for them consistent with their presently observed separation. We treated the other LG dwarf galaxies as massless, which should be valid as they are much fainter than the MW or M31. For each LG dwarf, we obtained a test particle trajectory whose final position (i.e. at the present time) matches the observed position of the dwarf. The velocity of this test particle is the model prediction for the velocity of that galaxy.
The basic feature of the model is that the expansion of the Universe has been slowed down locally by gravity from the MW and M31. At long range (beyond 3 Mpc or about 10 million light years), this effect is very small and so objects at those distances should essentially just be following the Hubble law. But closer to home, the results of this model are clear: the MW and M31 are holding back the expansion of the Universe, and objects within about 1.5 Mpc should be approaching us rather than moving away (see figure above). By comparing the detailed predictions of our model with observations, we were able to show that, for all plausible MW and M31 masses, a significant discrepancy remains. This is because a number of LG galaxies are flying away from us much faster than expected in the model.
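The classic two-body core of this argument can be reproduced numerically. Using the parametric solution of radial Keplerian motion (r = a(1 − cos η), t = √(a³/GM)(η − sin η)), one solves for the total MW+M31 mass that takes the pair from zero separation at the Big Bang to their observed separation and approach speed today. This is a sketch of the textbook timing argument, not the full model of the paper; the input numbers are round literature values I am assuming for illustration.

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30  # solar mass [kg]
KPC = 3.086e19    # kiloparsec [m]
GYR = 3.156e16    # gigayear [s]

def timing_argument_mass(r0_kpc, v0_kms, t0_gyr):
    """Total mass [solar masses] from the two-body timing argument.
    Radial Kepler orbit: r = a(1 - cos e), t = sqrt(a^3/GM)(e - sin e).
    The pair starts at r = 0 at the Big Bang and is observed at
    separation r0 with radial velocity v0 (negative = approaching)
    at cosmic time t0."""
    r0, v0, t0 = r0_kpc * KPC, v0_kms * 1e3, t0_gyr * GYR
    target = v0 * t0 / r0  # dimensionless; fixes the eccentric anomaly e

    def f(e):  # v*t/r as a function of eccentric anomaly
        return math.sin(e) * (e - math.sin(e)) / (1.0 - math.cos(e)) ** 2

    # Bisection on e in (pi, 2*pi): past turnaround, now approaching.
    # f is monotonically decreasing from ~0 towards -infinity there.
    lo, hi = math.pi + 1e-6, 2.0 * math.pi - 1e-6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    e = 0.5 * (lo + hi)
    a = r0 / (1.0 - math.cos(e))  # semi-major axis of the radial orbit
    m = a ** 3 * (e - math.sin(e)) ** 2 / (G * t0 ** 2)
    return m / M_SUN

# Round values: separation 770 kpc, approach speed 110 km/s, age 13.8 Gyr.
m_tot = timing_argument_mass(770.0, -110.0, 13.8)  # a few 1e12 solar masses
```

The result, a few times 10^12 solar masses for MW+M31 combined, is the classic timing-argument mass; the paper's analysis extends this two-body picture to the test-particle dwarfs of the whole Local Group.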
An important aspect of these models is that the MW and M31 have never approached each other closely. Although one can in principle get them to have a past close flyby in Newtonian gravity if they are assigned very high masses, there are several problems with this. Such high masses are unreasonable given other evidence. More importantly, if there had been such a flyby, the dark matter halos of the MW and M31 would have overlapped, leading to a substantial amount of friction (of a type called dynamical friction, which is reliant only on gravity). This would have caused the galaxies to merge, contradicting the fact that they are now 2.5 million light years apart.
I was aware of an alternative model for galaxies called Modified Newtonian Dynamics (MOND – Milgrom, 1983). This is designed to address the fact that galaxies rotate much faster than one would expect if applying Newtonian dynamics to their distributions of visible mass. The conventional explanation is that galaxies are held together by the extra gravitational force provided by a vast amount of invisible dark matter. Many galaxies need much more dark matter than the amount of actually observed matter. But, so far, this dark matter has not been detected directly. What MOND does is to increase the gravitational effect of the visible matter so that it is enough to explain the observed fast rates of rotation. In this model, there is no longer any need for dark matter, at least in halos around individual galaxies. You can find out more about MOND here on McGaugh’s MOND pages and here on Scholarpedia.
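The flat rotation speed that MOND predicts from the visible mass alone follows from v⁴ = G·M·a0 in the deep-MOND regime (the baryonic Tully–Fisher relation). A short sketch, with an illustrative galaxy mass of my choosing:

```python
G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
A0 = 1.2e-10      # MOND acceleration scale [m/s^2]
M_SUN = 1.989e30  # solar mass [kg]

def flat_rotation_speed_kms(m_baryon_msun):
    """Asymptotically flat rotation speed [km/s] predicted by MOND from
    the baryonic mass alone: v = (G * M * a0)**0.25."""
    return (G * m_baryon_msun * M_SUN * A0) ** 0.25 / 1e3

# ~1e11 solar masses of visible matter (a Milky-Way-like galaxy)
# gives a flat rotation speed of about 200 km/s, with no dark matter.
v_flat = flat_rotation_speed_kms(1e11)
```

In the Newtonian picture the same 200 km/s at large radii requires a massive dark halo; in MOND it falls out of the visible mass and a single constant a0.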
In MOND, the MW and M31 must have undergone a past close flyby (Zhao et al, 2013). In this model, the absence of dark matter halos around galaxies means that there need not have been any dynamical friction during the flyby (remember that the disks of the MW and M31 are much smaller than their hypothetical dark matter halos, which are only needed if we apply Newton’s law of gravity).
The high relative speed of the MW and M31 at this time (about 9 billion years ago) would probably go a long way towards explaining these puzzling observations. This is because of a mechanism called the gravitational slingshot, similar to how NASA was able to get the Voyager probes to gain a substantial amount of energy each time they visited one of the giant planets in our Solar System. The idea in this case would be for the MW/M31 to play the role of the planet and a passing LG dwarf galaxy to play the role of the spacecraft.
This mechanism is illustrated in the figure above. In the left panel, there is a small galaxy moving at 1 km/s while a much heavier galaxy moving at 5 km/s catches up with it. The massive galaxy sees the dwarf approaching at 4 km/s (right panel). The trajectory of the dwarf is then deviated strongly, so it ends up receding at 4 km/s back in the direction it approached from. Combined with the velocity of the massive galaxy (which is almost unchanged), we see that the velocity of the dwarf has been increased to 5 + 4 = 9 km/s.
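In the 1D limit where the heavy galaxy is essentially unperturbed, this slingshot reduces to v_out = 2·V_massive − v_dwarf (the relative speed is reversed), which reproduces the numbers in the figure:

```python
def slingshot_1d(v_massive, v_dwarf):
    """Final speed of a light body after a head-on gravitational
    slingshot off a much heavier one (1D elastic limit): the relative
    speed is reversed, so v_out = v_massive + (v_massive - v_dwarf)."""
    return 2.0 * v_massive - v_dwarf

# Heavy galaxy at 5 km/s catches up with a dwarf moving at 1 km/s:
# approach speed 4 km/s, so the dwarf leaves at 5 + 4 = 9 km/s.
v_out = slingshot_1d(5.0, 1.0)  # -> 9.0
```

The dwarf gains energy at the (negligible) expense of the massive galaxy, exactly as Voyager did at the giant planets.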
We do in fact observe many LG dwarf galaxies moving away from us much faster than in the best-fitting dark matter-based model (see figure below, observed radial velocities are on the y-axis while model-predicted ones are on the x-axis). Moreover, based on the distances and velocities of these objects, we can estimate roughly when they would have been flung out by the MW/M31. This suggests a time approximately 9 billion years ago, which is also when one expects the MW and M31 to have been moving very fast relative to each other in MOND as they were close together.
These high-velocity LG dwarfs would have been flung out most efficiently in a direction parallel to the velocity of whichever heavy galaxy they interacted with. Naturally, the MW and M31 have not always been moving in the same direction. But it is very likely that they were always moving within much the same plane. Thus, one test of this scenario (suggested by Marcel Pawlowski) is that these high-velocity dwarfs should preferentially lie within the same plane.
There is some evidence that this is indeed the case. Moreover, the particular plane preferred by these objects is almost the same as what would be required to explain the distribution of satellite galaxies around the MW and M31. This is described in more detail towards the end of this lecture I gave recently about my work.
Even without this evidence, there is a strong case for MOND. One of the astronomers heavily involved in making this case is Professor Stacy McGaugh. I was very pleased to meet him at this conference. We discussed a little about his current work, which focuses on using rotation curves of galaxies to estimate forces within them. For a modified gravity theory which does away with the need for dark matter, it is important that these forces can be produced by the visible matter alone. Stacy was doing a more careful investigation into estimating the masses of galaxies from their observed luminosities and colours (which give an idea of the mix of different types of star in each galaxy, each of which has its own ratio between mass and luminosity, old stars being red and young ones blue). The success enjoyed by MOND in explaining dozens of rotation curves is one of the major reasons the theory enjoys as much support as it does.
This brought us on to discussing how we came to favour the theory over the conventional cosmological model (ΛCDM) involving Newtonian gravity and its consequent dark matter. Stacy explained how it was particularly his work on low surface brightness galaxies which convinced him. This is because such galaxies were not known about when the equations governing MOND were written down (in the early 1980s). Despite this, they seemed able to predict future observations very well. This was somewhat surprising given that the theory predicted very large deviations from Newtonian gravity. In the ΛCDM context, the presence of large amounts of invisible mass makes it difficult to know what to expect. As a result, it is difficult for the theory to explain observations indicating a very tight coupling between forces in galaxies and the distribution of their visible mass – even when most of the mass is supposedly invisible (a feature called Renzo’s Rule). A broader overview of what the observations seem to be telling us is available here (Famaey & McGaugh 2012) and here (Kroupa 2015).
I then explained my own thinking on the issue. I was aware of some of the observations which persuaded Stacy to favour MOND and I was aware of the theory, but I did not favour it over ΛCDM. Personally, what got me interested in seriously considering alternatives to ΛCDM was its missing satellites problem. The theory predicts a large number of satellite galaxies around the MW, much larger than the observed number. Although it is unclear if MOND would help with this problem, that does seem likely because structure formation should proceed more efficiently under the modified gravity law. This should lead to more concentration of matter into objects like the MW with less being left over for its satellites.
Although this suggested MOND might be better than ΛCDM, my initial reaction was to consider warm dark matter models. Essentially, if the dark matter particles were much less massive than previously thought (but the total mass in the particles was the same), then they would behave slightly differently. These differences would lead to less efficient structure formation at low masses, reducing the frequency of low-mass halos and thus making for fewer satellite galaxies. I hoped this would also explain a related problem, the cusp-core challenge, which pertains to the inner structure of satellite galaxies.
What finally convinced me against such minor alterations to ΛCDM and in favour of MOND was the spatial arrangement and internal properties of the MW and M31 satellite galaxies. Much has been written in previous posts to this blog about this issue (for example, here), with this 2005 paper by Kroupa, Theis & Boily pointing out the discrepancy between observations and models for the first time.
I have summarised the results in a flowchart (left). Essentially, the hypothetical dark matter halos around the MW and M31 need to be distributed in a roughly spherical way. This is unlike the disks of normal (baryonic) matter in these galaxies. The reason is that baryons can radiate and cool, allowing them to settle into disks. As a result, in an interaction between two galaxies, the baryons with their ordered circular motions in a disk can get drawn out into a long dense tidal tail that then collapses into small tidal dwarf galaxies. But these would be free of dark matter, and they would also be mostly located close to a plane: the common orbital plane of the interacting galaxies. You can see more about this scenario here.
The argument goes that it is difficult to form such planes of satellites in any other way (for example, see Pawlowski et al, 2014). Just such satellite planes are in fact observed around both the MW and M31. Supposedly free of dark matter, they should have quite weak self-gravity and thus low internal velocity dispersions/rotate very slowly. Yet, their observed velocity dispersions are quite high, signalling the need for some extra force to stop them flying apart.
Because the spatial arrangement of these satellites suggests a violent origin, it is unlikely that they have much dark matter. Thus, I became convinced of the need to modify our understanding of gravity. It turns out that exactly the same modification that can help explain galaxy rotation curves without dark matter could also help address this problem (McGaugh & Milgrom, 2013). Although the dark matter plus Newtonian gravity worldview might just about be able to explain galaxy rotation curves (although detailed tests are showing this not to have succeeded: Wu & Kroupa 2015), I do not think it can explain the satellite plane problem. This eventually convinced me to investigate this issue further. I explain some of the more compelling reasons for favouring MOND over ΛCDM in this lecture I gave recently.
The much-awaited Planck results on the CMB have recently been published. They are consistent with those obtained from the Wilkinson Microwave Anisotropy Probe (WMAP) measurements.
Date: 20 Mar 2013 Satellite: Planck Depicts: Cosmic Microwave Background Copyright: ESA and the Planck Collaboration; NASA / WMAP Science Team: “This image shows temperature fluctuations in the Cosmic Microwave Background as seen by ESA’s Planck satellite (upper right half) and by its predecessor, NASA’s Wilkinson Microwave Anisotropy Probe (WMAP; lower left half). A smaller portion of the sky is highlighted in the all-sky map and shown in detail below. With greater resolution and sensitivity over nine frequency channels, Planck has delivered the most precise image so far of the Cosmic Microwave Background, allowing cosmologists to scrutinise a huge variety of models for the origin and evolution of the cosmos. The Planck image is based on data collected over the first 15.5 months of the mission; the WMAP image is based on nine years of data.”
This agreement is excellent news, because it means that the two missions are consistent and thus the Planck data enhance our confidence in what we know about the CMB.
But, what do the results mean in terms of our physical understanding of the universe?
In this guest contribution, PhD student Behnam Javanmardi, who has been studying cosmological models in Bonn since the Fall of 2012, discusses some of the problems raised by the Planck CMB map:
Behnam Javanmardi, Bonn, 19.04.2013
Contribution by Behnam Javanmardi:
The European Space Agency (ESA) launched the Planck satellite on 14 May 2009 to the second Lagrange point of the Sun-Earth system (L2), at a distance of 1.5 million kilometers from the Earth, for observing the Cosmic Microwave Background (CMB), the afterglow of the Big Bang. On 21 March 2013, the Planck collaboration released the data with a series of papers on their scientific findings. Planck observed the CMB sky in different frequency bands, some of which are sensitive to the foregrounds (anything between us and that cosmic radiation, e.g. the disk of the Milky Way). This makes it possible to remove the foregrounds and obtain an image of the Universe when it was very young.
Statistical analysis of this image (which shows small temperature fluctuations corresponding to small density contrasts at that time) gives us valuable information about our Universe. In the following, some of Planck’s major results are reviewed, with the main focus on the problems cosmologists now face given these results. Technical details can be found in the Planck 2013 Results Papers.
The current Standard Cosmological Model (ΛCDM) has a set of parameters, and the Planck collaboration reported values for these parameters by fitting the model to the data. For example, the best-fit ΛCDM parameters yielded a 6% lower value for the density parameter of dark energy (Planck: ΩΛ=0.686±0.020 vs WMAP-9: ΩΛ=0.721±0.025) and an 18% higher value for the density parameter of matter, which is dominated by dark matter (Planck: Ωm=0.314±0.020 vs WMAP-9: Ωm=0.279±0.025), than the previous all-sky CMB survey, i.e. WMAP. As can be seen from these numbers, each parameter is consistent between the two missions within the measurement uncertainties. Thus, the Planck mission has nicely confirmed the WMAP fit to the standard model of cosmology.
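As a quick sanity check, the agreement between the two missions can be quantified by combining the quoted Gaussian uncertainties in quadrature. This is a simplified sketch that treats the two measurements as independent and ignores any correlations:

```python
import math

def tension_sigma(v1, e1, v2, e2):
    """Difference between two independent measurements, expressed in
    units of their combined (quadrature-summed) Gaussian uncertainty."""
    return abs(v1 - v2) / math.sqrt(e1**2 + e2**2)

# Values quoted above: Planck vs WMAP-9
t_de = tension_sigma(0.686, 0.020, 0.721, 0.025)  # dark energy density
t_m  = tension_sigma(0.314, 0.020, 0.279, 0.025)  # matter density

print(f"Omega_Lambda: {t_de:.1f} sigma apart")  # ~1.1 sigma
print(f"Omega_m:      {t_m:.1f} sigma apart")   # ~1.1 sigma
```

Both parameters differ by only about 1.1σ, which is why a 6% or 18% shift in the central values still counts as consistency.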
Date: 21 Mar 2013 Satellite: Planck Copyright: ESA and the Planck Collaboration: “Two Cosmic Microwave Background anomalous features hinted at by Planck’s predecessor, NASA’s Wilkinson Microwave Anisotropy Probe (WMAP), are confirmed in the new high precision data from Planck. One is an asymmetry in the average temperatures on opposite hemispheres of the sky (indicated by the curved line), with slightly higher average temperatures in the southern ecliptic hemisphere and slightly lower average temperatures in the northern ecliptic hemisphere. This runs counter to the prediction made by the standard model that the Universe should be broadly similar in any direction we look. There is also a cold spot that extends over a patch of sky that is much larger than expected (circled). In this image the anomalous regions have been enhanced with red and blue shading to make them more clearly visible”.
The most interesting result from Planck was the confirmation of some features that had already been revealed by the WMAP data. Before Planck, there were some doubts about the cosmic origin of these features. But the precision of Planck’s map is much higher than that of WMAP, the Planck collaboration spent nearly three years carefully removing any foreground emission, and the features are still present. We therefore have to accept, with much higher confidence, that these may be real features of the CMB sky.
These features, or anomalies, which the standard model of cosmology did not predict, are significant deviations from large-scale isotropy. Large-scale isotropy is one of the two fundamental assumptions that form the Cosmological Principle, and it simply states that the Universe we observe must not be direction-dependent. Among the features found in the CMB is a “Cold Spot”, a low-temperature region much larger than expected. In addition, a “Hemispherical Asymmetry” has been detected: the northern ecliptic hemisphere has, on average, a significantly lower signal than the southern one. The latter raises the question: why is the orientation of this asymmetry more or less aligned with the orbital angular momentum of the Earth? Is it a not-yet-understood measurement bias, a data-reduction bias, or a coincidence? As the Earth orbits the Sun, its orbital angular momentum keeps pointing in the same direction in the Milky Way. Perhaps a remnant Milky Way foreground contamination plays a role here.
Another fundamental assumption of the standard model, namely that the initial temperature (and density) fluctuations followed a Gaussian distribution, has also been tested by the Planck collaboration, and no significant deviation from it was reported, except for a few signatures that were interpreted as being associated with the above-mentioned anomalies.
Furthermore, the power spectrum calculated from the Planck data (one of the main statistical tools for analyzing the CMB map) shows a ≈2.7σ deviation from the best-fit ΛCDM model at low multipoles (ℓ ≤ 30), i.e. at large angular scales.
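For readers unused to σ values: a ≈2.7σ deviation corresponds to a two-sided Gaussian p-value of roughly 0.7%, which can be checked with a one-liner (a sketch assuming a simple Gaussian interpretation of the quoted significance):

```python
import math

def p_two_sided(n_sigma):
    """Two-sided p-value of an n-sigma deviation under a Gaussian."""
    return math.erfc(n_sigma / math.sqrt(2))

print(f"{p_two_sided(2.7):.4f}")  # 0.0069
```

In other words, a deviation this large or larger would arise by chance in fewer than 1 in 100 realizations, notable but far from the 5σ threshold usually demanded for a discovery.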
Regarding tests of inflation (a hypothesis which says that the volume of the early Universe was inflated by a factor of at least 10^(78) in less than 10^(-36) seconds), models with only one scalar field are preferred by the Planck results, and more complex inflationary scenarios do not survive. However, a recent paper by Ijjas et al. (2013) goes through the problems of inflation considering the results from both the Planck satellite and the LHC,
“The odd situation after Planck2013 is that inflation is only favored for a special class of models that is exponentially unlikely according to the inner logic of the inflationary paradigm itself”
as they put it. The forthcoming Planck results on the polarization of the CMB will shed light on this issue.
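As an aside, the factor of at least 10^78 quoted above is usually understood as a growth in volume; a back-of-the-envelope conversion (assuming exactly that interpretation, which is an assumption of this sketch) recovers the more familiar figure of roughly 60 e-folds of linear expansion:

```python
import math

# Assume 10^78 is a *volume* factor; the linear scale factor then
# grows by its cube root, ~10^26, and the number of e-folds is
# N = ln(a_end / a_start).
volume_factor = 1e78
linear_factor = volume_factor ** (1.0 / 3.0)  # ~1e26
n_efolds = math.log(linear_factor)

print(f"e-folds: {n_efolds:.0f}")  # ~60
```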
As mentioned above, although the ΛCDM model is consistent with the overall picture as seen by Planck, it fails to account for the observed anomalies and for the deviation of the power spectrum at large scales. In addition, the three major elements of the ΛCDM model, i.e. dark matter, dark energy and inflation, still lack a firm theoretical understanding. Therefore, cosmologists should try to look for a model in which the recently observed features are no longer “anomalies” but are predicted by the model itself.
Epilogue by Pavel Kroupa:
The Planck data thus demonstrate that not all is well with our understanding of cosmology, that is, the CMB poses hitherto unanswered problems. But even if the CMB had been in perfect agreement with the expectations from the current standard model of cosmology, what would this have implied for our physical understanding of cosmology?
First of all, an elementary if not trivial truth is that consistency of a model with a set of data does not prove the model. Thus, claiming that Planck establishes the existence of (cold or warm) dark matter and dark energy would be an unscientific statement. For example, the cosmological model of Angus & Diaferio (2011, see their fig. 1) shows that the CMB can be reproduced with a non-CDM/WDM model, thereby demonstrating the non-uniqueness of such models.
Furthermore, irrespective of any success or failure of the standard (or any other) cosmological model in reproducing some large-scale data, the highly significant problems encountered on the local cosmological scale of 100 Mpc and below remain hard facts that need to be addressed: See
Following the recent incident, we and the SciLogs team decided to invite a renowned colleague to write a guest blog post. Thinking about possible guest bloggers who are experts in the field of cosmology and who approach theories such as MOND with the necessary scientific skepticism, we arrived at Scott Dodelson as one candidate.
Scott is a very well-respected cosmologist. He is a scientist at Fermilab and a professor in the Department of Astronomy and Astrophysics and the Kavli Institute for Cosmological Physics at the University of Chicago. His research focuses on the largest and smallest scales of the universe: the interplay of cosmology and particle physics. He investigates the nature of dark matter and dark energy, works on the cosmic microwave background and is also interested in modified gravity theories. In addition to his many papers, he has written the textbook “Modern Cosmology”.
We are very pleased that Scott Dodelson has agreed to write this guest post. Thank you, Scott!
Is modified gravity a viable alternative to dark matter? Or is dark matter so compelling that pursuits of modified gravity should be abandoned?
There are good reasons to believe in dark matter and to be optimistic about our chances of detecting it in the coming decade. Dark matter explains the flat rotation curves in galaxies; it accounts for the deflection of light far from the centers of galaxies and by galaxy clusters. Many aspects of galaxy clusters make sense only if dark matter is present. Perhaps most importantly, it is the key component in our modern story of how we got here: the standard cosmological model is called CDM or “Cold Dark Matter”. The small inhomogeneities captured in maps of the cosmic microwave background (CMB) grew to be the vast structure we see today via gravitational instability, but the story holds together only if dark matter is also present. The story works and it has been tested by observing the spectra of both the CMB and the distribution of matter on large scales. It is true that dark matter does not easily explain some phenomena on small scales, but there is a ready explanation for this: predictions on small scales are hard. Apart from the non-linearity of gravity, baryons play an important role on small scales, and incorporating these effects into numerical simulations is challenging. It is easiest to make predictions on large scales and those easy predictions have been confirmed with exquisite precision. Beyond all this lies the suite of experiments poised to detect dark matter. Thousands of scientists are now hunting for the particles that comprise dark matter by studying collisions at the LHC; by manning underground laboratories designed to detect it; and by launching satellites to observe the debris created when two dark matter particles in space collide and annihilate. We have reason to be optimistic.
Why then pursue modified gravity?
First, the people who study modified gravity (MG) tend to focus on small-scale data rather than large-scale data. They are serious, smart scientists who make observations and fit MG models to the data. These fits tend to be pretty good, often with very few free parameters, and the scientists therefore gain confidence in their models. This focus on different data, or on different slices through the data, presents a challenge to the dark matter model: eventually, dark matter will have to explain these data sets as well. Slicing and combining things in different ways leads to different challenges than might otherwise arise. Even if you believe in dark matter, you want to confront the data in all its forms. The simple (slightly condescending) way of saying this is that CDM must ultimately reduce to MONDian phenomenology on small scales.
More importantly, dark matter has not yet been detected. This is not the time to raise the barriers and decree that only those who accept dark matter are serious scientists. We are optimistic, but we have to accept the possibility that dark matter will not be detected in the next decade. Initial feedback from the LHC shows no hint of the simplest model that contains dark matter, supersymmetry (although these early data are certainly not conclusive). There have been hints in direct and indirect detection experiments, but certainly nothing definitive. It is possible that we will need to think of something completely new. In so doing, we are going to have to drop some assumptions and weight evidence differently than we do now. The MG community does this already by downweighting large-scale data and focusing more on small scales. This may end up being the correct approach, or we may need something even more radical. I do not know how to do this (how do we encourage a revolution?), but I am pretty sure suppressing alternatives is moving in the wrong direction.
The communities are now quite disparate and find it difficult to engage one another. Is the MG vs. dark matter dispute like a disagreement between people of different religions, virtually impossible to resolve because the two sides cannot communicate? Certainly not. We are scientists, and facts will change our minds. Some examples of things the vast majority of the MG community accepts or will accept:
MG is not theoretically favored over dark matter because “dark matter is something new”. Both approaches change the fundamental Lagrangian of nature by adding new terms and new degrees of freedom.
The fact that Xenon100 or Fermi (or perhaps AMS in a few days) has not seen dark matter does not mean the theory is excluded. There is plenty of room in theories like supersymmetry and even more in other more generic models.
If dark matter is detected unambiguously via direct and/or indirect detection, then MG would indeed fall outside the realm of reasonable scientific investigation.
On the other hand, our dispute does share similarities with those that divide adherents of religion. We are passionate, we come at things from different directions with different preconceptions, so it is sometimes difficult to speak the same language, to focus on a single question. At the end of the day, just like the devout in different religious traditions, we are all after the same goal, in our case, trying to understand nature. It is premature to state that our way is the only way.
Guest post by Scott Dodelson (07.03.2013): “Is modified gravity a viable alternative to dark matter? Or is dark matter so compelling that pursuits of modified gravity should be abandoned?”.