A Brief Overview of Cosmology

A Brief Overview of Cosmology.

As requested, I have returned in order to dedicate a thread to this topic.

Now, I'm assuming you know the following: the universe is expanding uniformly, as revealed by the Doppler shift of light from distant galaxies; spectrometry allows us to discern the composition of massive bodies; both light and gravity fall off as 1/R^2; recession velocity is proportional to distance, with constant of proportionality H; and the expansion rate is calibrated against standard candles, such as supernovae.

v = Hd

H = 100h km/sec/Mpc

If not, then now you do.
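For the numerically inclined, here's a minimal sketch of Hubble's law in Python (the value H0 = 70 km/s/Mpc, i.e. h ≈ 0.7, is just a representative assumption, not a measured figure from this thread):

```python
H0 = 70.0  # km/s per Mpc (assumed value, i.e. h = 0.7)

def recession_velocity(d_mpc, h0=H0):
    """Recession velocity v = H*d, in km/s, for a galaxy at distance d_mpc (in Mpc)."""
    return h0 * d_mpc

# A galaxy 100 Mpc away recedes at about 7,000 km/s:
print(recession_velocity(100))  # 7000.0
```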

Ok, on with the thread!

As a quick reminder and introduction, let’s take a look at gravitational energy and the Earth:

If a coin is flung up into the air, how do we determine whether it will escape the atmosphere or return to Earth?

Et = ½ mv^2 - GMm/r

Total energy is kinetic energy plus gravitational potential energy; since the potential term is negative, this amounts to a positive contribution minus a negative one.

Et > 0 = escape

Et < 0 = return to Earth

Et = 0 = the coin barely escapes, with v → 0 at infinity
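If you want to play with this, here's a quick sketch that checks the sign of Et (the 5 g coin mass is a made-up figure; G, the Earth's mass, and its radius are standard values):

```python
import math

G = 6.674e-11       # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24  # kg
R_EARTH = 6.371e6   # m (mean radius)

def total_energy(m, v, r=R_EARTH):
    """Et = (1/2) m v^2 - G M m / r for a projectile of mass m at speed v."""
    return 0.5 * m * v**2 - G * M_EARTH * m / r

# Setting Et = 0 and solving for v gives the escape velocity:
v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
print(round(v_escape))  # about 11,200 m/s

coin = 0.005  # a 5 g coin (hypothetical)
assert total_energy(coin, 20.0) < 0      # a flung coin falls back
assert total_energy(coin, 12000.0) > 0   # above escape speed, it leaves
```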

Now, let’s determine the gravitational energy of a galaxy of mass m at the edge of a region with radius R:


Et = ½ mv^2 - GMm/R


We can multiply both sides by a positive number without changing anything, and likewise divide by m, as m is also positive; so we'll multiply by 2 and divide by m:


2/m Et = v^2 - 2GM/R


As the velocity of a galaxy at a given distance is proportional to that distance, with constant of proportionality H (v = HR), v^2 is equal to H^2 R^2.


Now, what is the total mass of a sphere with radius R?


Well, the volume of a sphere with radius R is (4/3)πR^3, and as the mass of the sphere is its volume times the average density ρ, we can write the equation as:


2/m Et = H^2 R^2 - 2G [(4π/3) ρR^3] /R


In order to simplify further, as R^2 is a positive quantity we can divide by it, which leaves us with:


2Et/(mR^2) = H^2 - (8πG/3) ρ


Now, let’s refer to the constant combination 2Et/m as minus kappa, so that 2Et/m is represented by –k:


-k/R^2 = H^2 - 8 πG/3 ρ


This is the Friedmann equation for an expanding universe, the same equation that falls out of Einstein's general relativity, derived here with all the factors of π and 3, and it will ultimately determine the evolution of the universe.


Minus kappa is a constant related to the total energy of the galaxy, which in general relativity corresponds to the curvature of the universe.

The nature of the expansion of the universe depends on its curvature.

Therefore, if the total energy of the galaxy in question is positive then kappa is negative, which indicates that it will continue expanding forever.

If the total energy is negative then kappa is positive, which indicates that it will collapse.

However, if the final value is zero then it will continue expanding, slowing down but never quite stopping, so that at ∞ v = 0.

The same is true of the universe as a whole.

Now, in order to determine the total energy of the universe we must determine the density of the universe and measure it against the critical density.

If the density of the universe is equal to the critical density then the universe is flat.

If it is greater than the critical density then the universe is closed.

If it is less than the critical density then the universe is open.

The critical density of the universe will be defined by ρc.

ρ > ρc = closed universe
ρ < ρc = open universe
ρ = ρc = flat universe

ρ/ρc = Ω

Ω > 1 = closed universe
Ω < 1 = open universe
Ω = 1 = flat universe
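Setting −k = 0 in the equation above and solving for the density gives the critical density, ρc = 3H^2/8πG. A quick numerical sketch (assuming H0 = 70 km/s/Mpc, a representative value):

```python
import math

G = 6.674e-11   # N*m^2/kg^2
MPC = 3.086e22  # metres per megaparsec

def critical_density(h0_km_s_mpc):
    """rho_c = 3 H^2 / (8 pi G), from setting -k/R^2 = 0 in the equation above."""
    H = h0_km_s_mpc * 1000.0 / MPC  # convert km/s/Mpc to 1/s
    return 3 * H**2 / (8 * math.pi * G)

def geometry(rho, rho_c):
    omega = rho / rho_c
    return "closed" if omega > 1 else "open" if omega < 1 else "flat"

rho_c = critical_density(70.0)
print(rho_c)  # ~9.2e-27 kg/m^3, a few hydrogen atoms per cubic metre
print(geometry(0.3 * rho_c, rho_c))  # open
```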

How do we determine the density of our universe?

Well, Kepler showed that the square of the orbital velocity of a planet around the sun is inversely proportional to its distance from the sun:

v^2 ∝ 1/R

Newton showed that the force of gravity is equal to the gravitational constant times the masses of the objects in question divided by the distance between their two centres:

F = GM1M2/r^2

Cavendish showed that it was possible to calculate G, which modern physicists have calculated to be:

6.67428 × 10^−11 N·m^2/kg^2

The existence of a universal force of gravity allows us to state that:

v^2 = GM/r

Since we have a value for the strength of gravity, the velocity of the moon around the earth can be measured and compared to its distance from the earth, allowing us to calculate the mass of the earth, which is:

5.972 × 10^24 kg
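A quick back-of-the-envelope version of that calculation (the moon's distance and period are assumed textbook values; treating the orbit as circular is why the answer comes out slightly high):

```python
import math

G = 6.674e-11           # N*m^2/kg^2
r_moon = 3.844e8        # mean earth-moon distance, m (assumed)
T_moon = 27.32 * 86400  # sidereal orbital period, s (assumed)

# Orbital speed of the moon, assuming a circular orbit:
v = 2 * math.pi * r_moon / T_moon  # ~1.02 km/s

# Rearranging v^2 = GM/r gives M = v^2 r / G:
M_earth = v**2 * r_moon / G
print(M_earth)  # ~6.0e24 kg, close to the quoted 5.972e24 kg
```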

The same can be done for the sun, although only to an accuracy of about 1 part in 1,000, owing to how difficult it is to measure an effect as weak as gravity.

In fact, gravity is tremendously weak.

If you were to fall from the top of a building, you wouldn’t even make a dent in the pavement below, and this is because gravity is so much weaker than electromagnetism.

Gravity may have accelerated you over a 50 m fall towards the earth, yet the electromagnetic forces between the electrons in your body and the electrons in the concrete stop you in a fraction of an inch.

Materials do not stop you in your tracks because they are "solid" (in reality they are mostly empty space); it's the electromagnetic interaction that brings you to a halt.

In fact, electromagnetism is almost 40 orders of magnitude stronger than gravity.

The sun’s mass has been calculated to be approximately:

1.989 × 10^30 kg

We can measure the mass of a galaxy similarly.

Our solar system orbits the centre of our galaxy at a rate of one orbit per 200 million years, and we can use this information to measure the mass of our galaxy.

The velocity of the sun around the galaxy is 220 km/sec and its distance from the centre is 8 kpc (about 26,000 light years), therefore the initial calculation of the mass comes out as:

10^11 solar masses

This is equivalent to 100 billion stars, which is great as it corresponds to our observations.

However, we want to do better than that so we take a look at objects that are further out.

Now, since we sit well out in the disc of our galaxy, with most of the visible mass interior to us, velocity should fall off as the inverse square root of the distance from the centre (v ∝ 1/√r).

After looking at satellite galaxies, molecular clouds and globular clusters that are up to ten times the distance from the centre of the galaxy as we are, we find that the velocity doesn’t fall off, but instead remains constant.

What does this mean?

After all:

v^2 = GM/r

If v^2 = GM/r is constant and r is ten times bigger, then the enclosed mass M must be ten times bigger, which means there is ten times more mass in and around our galaxy than the stars account for.
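Both the 10^11 solar-mass figure and the factor-of-ten scaling can be checked with a few lines of Python (G, the kiloparsec, and the solar mass are standard values; the enclosed-mass formula is just v^2 = GM/r rearranged):

```python
G = 6.674e-11     # N*m^2/kg^2
KPC = 3.086e19    # metres per kiloparsec
M_SUN = 1.989e30  # kg

def enclosed_mass(v, r):
    """M = v^2 r / G: the mass enclosed within radius r for circular speed v."""
    return v**2 * r / G

v_sun = 220e3  # m/s, the sun's orbital speed
m_inner = enclosed_mass(v_sun, 8 * KPC)
print(m_inner / M_SUN)  # ~1e11 solar masses, as quoted above

# If the rotation curve stays flat out to ten times our radius,
# the enclosed mass is ten times larger as well:
m_outer = enclosed_mass(v_sun, 80 * KPC)
print(m_outer / m_inner)  # 10.0
```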

The same is true of other galaxies.

The rotation curve does not fall off beyond the location of the stars, but instead remains flat.

>Pic related

Therefore, either gravity breaks down (although we have no reason to believe it does on the scale of galaxies), or there is ten times more mass in and around galaxies than we would expect.

This is what we refer to as dark matter.

In fact, there is so much dark matter in the universe (roughly 10× the mass of all the protons and neutrons) that it is likely to be a new type of elementary particle, which means it should be everywhere, including here on earth.

This means that we can build experiments to try and detect it, although it interacts so weakly that it passes right through the earth, therefore the equipment used will have to be extremely sensitive.

Now, these rotation curves, do they continue to remain flat forever?

We are only able to measure rotation curves up to a certain distance, due to our limited powers of observation, therefore this calculation cannot tell us how much mass is in the universe, it can only tell us the lower limit on that value.

If we want to know the value of Ω then we have to measure mass on larger scales, which we can achieve using gravity or more specifically: gravitational lensing.

We know that gravity curves space-time and subsequently light, therefore it is theoretically possible for gravity to bend light in such a way that it acts as a lens, thereby magnifying and potentially duplicating an image.

In this respect, an image of a galaxy cluster 5 billion light years away may include multiple images of a galaxy 10 billion light years away, due to gravitational lensing induced by the mass of the cluster.

>pic related; the blue ghostly images are images of a galaxy 5 billion light years behind the galaxy cluster, resulting from gravitational lensing

We can use general relativity to work out how much mass the system must contain, and how that mass is distributed, in order to produce such an image.

This is achieved by undertaking a mathematical inversion process, which then allows us to produce an image of the mass of the system:

>pic related

The resulting image indicates that there is forty times more mass in the system than would be expected.

We can then use this data and extrapolate to estimate the mass of the universe, due to the uniformity of the universe.

An initial calculation produces a result of:

Ω = 0.30 ± 0.1 (95%)

This value being less than 1 indicates that we are living in an open universe.

However, this estimate is only based on the mass around clusters of galaxies.

How do we measure the total mass of the universe?

We do so by measuring the geometry of the universe, which involves finding a triangle.

On a flat plane the sum of the angles of a triangle is 180 degrees; on a surface with positive curvature the sum is greater than 180 degrees, while a surface with negative curvature produces a triangle whose angles sum to less than 180 degrees.

>pic related
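To see the positive-curvature case concretely, here's a sketch that measures the angles of a triangle drawn on a unit sphere (the octant triangle, with one vertex at the pole and two on the equator, is my own choice of example):

```python
import math

def angle_at(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors):
    the angle between the great-circle arcs a->b and a->c."""
    def tangent(p, q):
        # Direction of the arc p->q at p: component of q orthogonal to p, normalised.
        d = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - d * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(x * x for x in t))
        return [x / n for x in t]
    u, v = tangent(a, b), tangent(a, c)
    dot = sum(x * y for x, y in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))

# North pole plus two points on the equator, 90 degrees apart:
A, B, C = [0, 0, 1], [1, 0, 0], [0, 1, 0]
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(math.degrees(total))  # 270 degrees: more than 180, so positive curvature
```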

If we can find a big enough triangle, then we can measure the curvature of the universe.

The largest image we have of our universe comes in the form of the cosmic microwave background, which is essentially a baby picture of our universe at the tender age of 380,000 years.

>pic related

This is an image of the cosmic microwave background.

It is incredibly uniform, however not completely as it features hot and cold spots.

The hot spots are only about one ten-thousandth of a degree hotter than the average (roughly one part in 100,000 of the 2.7 K background temperature) and the cold spots about one ten-thousandth of a degree colder.

This slight variation is what allowed for the creation of matter; these are the primordial lumps, created at the beginning of time, which went on to become galaxies, stars, planets and ultimately us.

This diagram features a patch of the cosmic microwave background one degree in angular size, which corresponds to a physical size of around 380,000 light years:

>pic related

The fact that the CMB is so uniform is very puzzling.

An angular distance of one degree corresponds to a lump roughly 380,000 light years across. Since the universe itself was only 380,000 years old, light from one such lump would not have had time to reach another, so no information could have been exchanged between distant regions of the universe.

Therefore, there is no reason to expect the CMB to be so uniform.

This implies that some form of order preceded creation of the CMB.

In relation to our search for a method for measuring the curvature of the universe, we only need to take a look at the size of the primordial lumps present in the CMB.

This is because the largest lumps that could possibly have formed would have been 380,000 light years across: given the age of the universe, gravity would not have had time to act across anything larger, so larger lumps could not have collapsed.

Now, the apparent angular distance of a lump 380,000 light years across (1 degree), is dependent on the curvature of the universe.

In an open universe, light rays bend outward as we trace them back in time, so the lumps would appear smaller than they actually are, say 0.5 degrees.

Likewise, in a closed universe, light rays bend inwards as we trace them back, so the lumps would look larger, say 2 degrees.

However, in a flat universe light rays travel in a straight line and therefore the lumps would appear to be approximately 1 degree.

Therefore, all one has to do is simulate universes featuring lumps of 0.5, 1 and 2 degrees respectively, and then compare them with the image of the CMB.

This is exactly what physicists have done, and the results are extraordinary.

>pic related

This image of the CMB compared with simulated universes featuring lumps of varying sizes, allows us to conclude that we live in a flat universe; the lumps appear to be 1 degree in angular distance.

However, all the matter in the universe, visible and dark combined, only amounts to Ω ≈ 0.3 as shown earlier: just 30% of the density required for a flat universe.

Therefore, we have to ask: where’s the other 70% coming from?

The odd thing is that it seems to be situated in empty space.

That is to say, there’s energy where there is nothing; most of the energy in the universe resides where there is nothing.

If you take a region of space and remove all the particles, the radiation and just everything, then it will still weigh something.

It turns out that when you combine quantum mechanics and relativity, empty space is a boiling – bubbling – brew of virtual particles and fields popping in and out of existence on a time scale far too small for us to observe them, and this is happening everywhere.

For example, less than 5% of the mass of a proton can be accounted for by its 3 quarks.

However, although we may not be able to observe them directly, we can measure their effects indirectly.

This is known as dark energy.

The fact that we can calculate that most of the mass of a proton comes from virtual particles and fields means that we can do the same calculation to determine how much energy virtual particles and fields contribute to the energy of the universe.

However, upon doing so we find that the energy of empty space is approximately 120 orders of magnitude bigger than everything we observe.

This is the worst prediction in all of physics.

So, physicists wracked their brains and came up with the idea that perhaps there exists some form of underlying symmetry that had yet to be discovered, that would allow for a cancellation of energies.

Well, oddly it turns out that if you put energy into empty space gravity is actually repulsive:

>pic related

Therefore, if the universe really were dominated by the energy of empty space, its expansion would be speeding up, not slowing down as previously assumed.

Then in 1998, more extensive data concerning the expansion rate of the universe came to show that the expansion of the universe is actually speeding up.

This is shown by the fact that the supernovae towards the end of the plot do not follow the predicted decelerating curve, but instead lie noticeably above it:

>pic related

It was then calculated exactly how much energy would have to reside in empty space to produce this accelerated expansion, and it turned out to be just the amount that was missing.

That is to say, if we put around 70% of the energy of a flat universe into empty space then everything works.

If we add in dark energy then the value is no longer 0.30 ± 0.1, but rather Ω = 1.02 ± 0.02.

More recent figures suggest that 73% of the universe resides in nothing, 23% in dark matter and only 4% in luminous matter, of which we make up less than 1%.

We live in a universe in which we are tantamount to a smidge of cosmic pollution; we are wonderfully insignificant.

So the previously troubling energy budget has been remedied and we know we live in a flat universe; and since Ω = 1, the total energy of the universe must be 0:

0 = H^2 - 8 πG/3 ρc

This means that you can get 100 billion galaxies, each with 100 billion stars, out of precisely nothing, once you allow for gravity. Gravitational potential energy at infinity is 0, and positive work must be done by an external force to move a mass from near a massive body out to infinity, so masses must have a potential energy less than zero: gravitational potential energy is negative.

But where in the world did those primordial lumps in the CMB come from?

This is where inflation comes in.

>pic related

Essentially, when the universe was a billionth of a billionth of a billionth of a billionth of a second old, empty space received a lot of energy (much more than it has now) and in that period of a billionth of a billionth of a billionth of a billionth of a second it expanded by a factor of 10 to the 90th, from the size of a single atom to the size of a basketball.

This initial small size would have allowed for the universe to be homogenised in temperature, whereafter it would have continued to expand.

At such small scales, such as that of a proton, quantum mechanics rules and it is possible that quantum fluctuations may have been frozen in place upon the initial inflation.

Density fluctuations produced as a result of this process would produce exactly the same sort of lumps as those observed in the CMB.

This would essentially mean that the entire universe, that is to say everything we see around us and indeed us humans, are the result of quantum fluctuations at the beginning of time.

Of course, we aren’t able to see further back than the CMB, as our view is blocked by an opaque plasma wall that light is unable to penetrate; hence there is a barrier between us and inflation.

So how do we test the inflation hypothesis, if we can’t test it with light?

Well firstly, this is how we think inflation happens:

It’s much like a phase transition of a material.

For example, water can sit in an environment below zero degrees without turning to ice, but then all of a sudden it undergoes a phase transition, perhaps triggered by an energy-density fluctuation: the state preceding this transition is what is referred to as a metastable state.

When it comes to empty space we refer to this state as a false vacuum:

>pic related

The idea is that energy can become momentarily stuck in empty space, producing a phase transition in the form of a hot big bang. During this period space expands very rapidly; thereafter, upon completion of the phase transition, it expands at the rate observed for our universe.

If space is continuously inflating, and quantum fluctuations can trigger phase transitions that create non-inflating regions of space (and, depending on the nature of the fluctuation, entire universes), then given an infinite amount of time there must be an infinite number of universes.

Our universe would simply be a pocket of space that happened to drop out of inflation, among infinitely many others.

Also, a potentially exciting or depressing implication of inflation is that in each non-inflationary pocket, the laws of physics can be different, depending on the nature of the fluctuations that created them.

In some universes there may be numerous galaxies, much like ours, however in others there may be none.

While there would still be fundamental quantum mechanics at play, some universes could be entirely different, in relation to their spatial geometry and physical laws, such as the forces.

It could be that the laws of physics are entirely arbitrary and not truly fundamental at all, effectively turning physics into an environmental science.

This would open the door to a sort of natural selection for universes, whereby universes that are able to support galaxies, stars and planets are the universes that would eventually give rise to life.

This is an extension of the anthropic principle.

So, how do we test the inflation hypothesis?

We use a signal that interacts more weakly than light, as such a signal may be able to travel through the dense plasma and reach us all the way from the big bang.

Gravity is the weakest force in nature and Einstein tells us that masses disturb space in a way that changes with time and subsequently produce gravitational waves, that is to say ripples in space-time.

So, just as shaking an electron produces an electromagnetic wave, when you shake your hands around you produce gravitational waves.

These waves disturb space-time such that, as they pass through a region, space appears to stretch along one axis while contracting along the perpendicular axis.

>pic related

The universe is full of gravitational waves, however they are very difficult to detect as gravity is so weak.

However, detectors have been built, and waves have been detected by the LIGO project.

Essentially, two perpendicular tunnels, each 4 km long, have laser beams travelling from one end to the other. If a gravitational wave were to pass through, it would make one tunnel appear slightly longer and the other slightly shorter, causing a slight disparity in the lengths measured by the lasers.

>pic related

So far, waves resulting from the merger of two black holes 1.4 billion light years away have been detected.

>pic related

The detectors are able to discern a change in length equivalent to 1/1000th of the size of a proton.
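A rough sketch of why that sensitivity is needed (the peak strain h ~ 1e-21 is a representative figure for a black-hole merger, not a number from this thread):

```python
arm_length = 4000.0  # metres, length of each LIGO arm
strain = 1e-21       # fractional length change h = dL / L (assumed typical value)
proton = 1.7e-15     # approximate proton diameter in metres

# The absolute change in arm length produced by such a wave:
dL = strain * arm_length
print(dL)           # 4e-18 m
print(dL / proton)  # the arm changes by roughly 1/400 of a proton's width
```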

Now, during the inflationary period gravitational waves produced by quantum fluctuations in the gravitational field would have been stretched out and subsequently stopped oscillating.

These waves would have varied in period length, however waves with a period of 380,000 years would have begun to oscillate at the formation of the CMB.

As gravitational waves warp space-time so that it stretches and contracts along perpendicular axes, and as radiation scatters uniformly, radiation moving through such a region would appear hotter along one axis and colder along the other.

This intensity disparity produces polarisation, that is to say electromagnetic radiation oscillating preferentially in one particular direction.

Therefore, physicists can look for a polarisation signal caused by gravitational waves emanating from the inflationary period and compare this to predictions of what this signal would look like, based on calculations of particular quantum fluctuations.

This would increase our window of observation by 10 to the 49th, allowing us to observe the universe when it was merely a billionth of a billionth of a billionth of a billionth of a second old.

If found, these waves would not only provide evidence for inflation but also be able to confirm the existence of multiple universes, by allowing us to calculate whether the potential for inflation would produce a phase transition that is likely to occur multiple times.

The BICEP2 project set out to detect these very waves, and in 2014 its researchers claimed to have discovered exactly the signal that was predicted.

>Pic related

However, it turned out that the researchers had underestimated the amount of cosmic foreground noise in our galaxy, which can contribute to a false positive.

Essentially, analysis of data gathered by the Planck satellite showed that the polarised cosmic dust in our galaxy is sufficient to produce a signal of the same intensity as the one detected by BICEP2.

>Pic related

Therefore, it was initially unknown whether the signal detected by BICEP2 came from the early universe or was simply a spurious signal produced by polarised cosmic dust.

It was later estimated that the likelihood that the detected signal emanated from the early universe was only 92%, meaning the experiment would report a spurious signal 8% of the time.

This leaves an enormous margin of error: physics demands a confidence of around 99.9999% (the famous 5-sigma standard) in order to verify the legitimacy of the signal.
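As an aside on what those percentages mean: confidence levels map onto "sigmas" of a Gaussian via the error function. Here's a quick sketch (the 1.75σ figure corresponding to 92% is my own back-calculation, not a number from the BICEP2 analysis):

```python
import math

def two_sided_confidence(n_sigma):
    """Probability that a Gaussian measurement lies within n_sigma of the mean."""
    return math.erf(n_sigma / math.sqrt(2))

# The 92% figure corresponds to well under 2 sigma:
print(two_sided_confidence(1.75))  # ~0.92
# The 5-sigma discovery standard corresponds to ~99.99994%:
print(two_sided_confidence(5))
```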

It was later established that the signal was indeed not from the early universe but was simply produced by cosmic dust, therefore more sensitive experiments will have to be designed in order to detect a true signal, which would either support or falsify inflation.

These two graphs display the initial predicted likelihood of detecting a viable signal compared with the later prediction informed by the Planck satellite data:

>pic related; sorry about the poor quality of the first graph

So the hunt for non-spurious signals from our early universe is on and it is likely that the next generation of physicists will discover them by building better – more accurate – experiments.

That could include some of you anons and if it does, then I wish you good luck!

As a quick addition, I’d like to mention that dark matter may also be detected at the LHC at CERN, as well as via experiments involving super-cooled germanium and silicon detectors that register the rare event of a dark matter particle scattering off an atomic nucleus, however I won’t go too much into that.

Also, on the topic of particle physics, you can view the forces as exchanges of virtual particles, and said forces are mathematically derived from the requirement that the laws of physics be invariant under certain transformations and not others (gauge symmetries).

Particles can be viewed as excitations of fields, which are mathematically represented by tensors.

That’s just a little springboard/teaser for any mathematically fluent anons who wish to learn more.

Physics is by far the most exciting field in science in my eyes and therefore I hope you find this stuff as interesting as I do!

Here's a graph that displays the agreement between the predictions of the standard model of physics (the red line) and the data points (the black dots).

As you'll notice, the error bars for the data points are very small.

So, we have a pretty good understanding of how things work, although not quite the whole picture.

This pie chart represents the composition of our universe.

Note how insignificant we truly are, which is something that I find quite relieving to be perfectly honest!


On top of the brief summary of cosmology discussed in this thread, I'd like to briefly introduce two mathematically tenable hypothetical interpretations of the external reality.

As a primer: matter is essentially energy, and this energy is associated with space itself, structured and arranged in a particular manner described by mathematics.

Therefore, one may suggest that this arrangement results from the computational relations between abstract elements of a mathematical structure, or perhaps from a code running on a quantum computer.

This is of course abstract theoretical physics, however it’s entertaining and intellectually stimulating nonetheless.

I can also assure you that there will be no quantum skeletons, nor any of Michio Kaku or Deepak Chopra’s quantum woo.

So, the first hypothesis I'd like to briefly discuss is the notion that the external reality is a mathematical structure with no intrinsic properties other than the relations between abstract entities, consisting of a level 4 multiverse built upon an inflationary model; this is referred to as the Mathematical Universe Hypothesis and was developed by Max Tegmark.

The 4 levels correspond to:

(1) The region contained within our light cone.

(2) The region of non-inflationary space in which we are situated.

(3) Alternate versions of reality existing in an abstract Hilbert space, resulting from universal cloning that appears to us as quantum randomness in the form of wave function collapse; thereby indicating that randomness is merely an illusion. (See Everett on the many-worlds interpretation.)

(4) The particular mathematical structure isomorphic to our level 4 multiverse.

This interpretation involves our particular universe being equivalent to a mathematical structure, contained within a much larger macrostructure.

A good analogy for such a situation is the Mandelbrot fractal, where highly complex structures are contained within a much larger and seemingly less complex structure.

Astonishingly intricate patterns that continue down to arbitrarily small scales, amounting to megabytes’ worth of information, can be produced by a program only a few hundred bytes long repeating the simple iteration z → z^2 + c.

>pic related
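To make the point concrete, here's a tiny sketch: a few lines of code iterating z → z^2 + c generate arbitrarily intricate structure (the grid size and character rendering are arbitrary choices):

```python
def escape_time(c, max_iter=30):
    """Iterations before |z| exceeds 2; reaching max_iter means c is (likely) in the set."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

# Render a coarse ASCII view of the Mandelbrot set:
for y in range(21):
    row = ""
    for x in range(64):
        c = complex(-2.1 + 2.8 * x / 63, -1.2 + 2.4 * y / 20)
        row += "#" if escape_time(c) == 30 else " "
    print(row)
```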

This also paves the way for apparent complexity being an illusion, along with randomness.

A basic example of complexity as an illusion would be to generate an image of white noise using a quantum random number generator, which would leave you with a highly complex image requiring thousands of bits to describe, say 128 × 128 = 16,384 bits.

Now, you could also produce a white noise image of the same size by utilising the binary digits of the square root of two (i.e. 1.414213562… = 1.0110101000001001…).

Let’s say that this binary pattern can be generated by a computer program that is 100 bits long, then you would only need to be aware of 100 bits in order to describe a pattern of 16,384 bits, therefore reducing the apparent complexity significantly.
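In fact the generating program can be even shorter than 100 bits' worth of description suggests; here's a sketch that emits the binary digits of √2 using only integer arithmetic:

```python
from math import isqrt

def sqrt2_bits(n):
    """First n binary digits of sqrt(2): floor(sqrt(2) * 2^(n-1)) in binary."""
    return bin(isqrt(2 << (2 * (n - 1))))[2:]

bits = sqrt2_bits(20)
print(bits)  # 10110101000001001111  (i.e. 1.0110101000001001111...)

# The function above is a handful of bytes, yet it can emit as many
# pseudo-random-looking bits as you like:
print(len(sqrt2_bits(16384)))  # 16384
```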

Furthermore, if we were to focus on a small section of such a pattern, we would find it relatively easy to describe; for example, a section of 9 pixels would require one bit per black-or-white pixel, therefore 9 bits in total.

However, as soon as we begin moving outwards things become more complex; for example, a square cut out of the middle of the image would require far more information to describe, because once separated from the whole it can no longer be described simply as "the digits of √2".

In this situation the whole is less complex than its parts.

In fact, this example shows us that the whole can contain less information not only than the sum of its parts, but even than just one of its parts.

Now, the Mathematical Universe Hypothesis has been criticised for appearing to be inconsistent with Gödel's incompleteness theorem.

For example, Mark Alford has stated that, as the methods permitted by formalists cannot prove all the theorems in a sufficiently powerful system, the notion that the external reality is made up of mathematical structures is incompatible with the notion that it consists of formal systems.

However, Tegmark has suggested that perhaps only Gödel-complete mathematical structures physically exist, which would resolve this issue and also limit the potential for complexity, thereby providing an adequate explanation for relative simplicity of our universe.

Tegmark also states that although conventional physics theories are Gödel-undecidable, the mathematical structure describing our specific universe may still be Gödel-complete, and could in principle contain observers capable of thinking about Gödel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Gödel-incomplete formal systems such as Peano arithmetic.

Therefore, a more restrictive version of the Mathematical Universe Hypothesis is the Computable Universe Hypothesis, which pertains exclusively to mathematical structures that are simple enough to avoid the implications of Gödel's theorem, that is to say that they would not contain any undecidable or uncomputable theorems.

The second hypothesis I’d like to introduce is the notion that the external reality is an analogue simulation performed by a quantum computer; this interpretation is motivated by the structure of quantum field theory, which is mathematically equivalent to that of a spatially distributed quantum computer.

This quantum computer would be naturally occurring, as opposed to designed, in the same way that humans are naturally occurring organic robotic machines.

As you can imagine, computer science geeks love this hypothesis.

The Simulation Hypothesis may be interpreted as implying an infinite regress:

If the universe as we know it is a simulation, then we must consider the fact that as we are able to produce simulations ourselves (that is, a simulation within a simulation), there are likely to be far more multi-level simulations than singular ones.

This logic, coupled with the elementary laws of mathematical probability, indicates that the simulation hypothesis would lead us to conclude that we are living in a simulation within a simulation, within a simulation, ad infinitum.

However, it is important to note that apparent absurdity does not provide appropriate grounds for ruling out a mathematically viable hypothesis, especially when considering that whatever the true nature of the external reality may be, it is most definitely expected to be wholly unintuitive and absolutely bizarre.

Really nice work OP. It's nice to see people putting effort into Veeky Forums again. Do you work in the field? I'm a PhD student in observational cosmology, I work with galaxy surveys and the CMB.

I'd be happy to answer any questions people may have a bit later.

this has absolutely nothing to do with hair and nails.

On the topic of simulations, an analogy can be drawn with Plato’s cave as described in The Republic, when attempting to reconcile human perception with the actual nature of the external reality.

The inhabitants of the cave only see a 2D shadow of a 3D reality.

>pic related

In the 2D shadow world reality would appear very different, for example length would be relative rather than absolute.

In the same manner, we experience a 3D internal simulation of a 4D reality, therefore Plato was correct when he stated that we only see a shadow of the underlying structure of reality.

To put this notion into context, life as we know it can best be described as a series of highly integrated systems built upon relatively stable, complex, carbon-based, replicating molecular structures. The human organism is an organic robotic machine which houses the aforementioned molecular replicators, and whose on-board computer has developed self-awareness.

This awareness itself results from a particular series of interrelated structures formed by neural networks in the brain, and our perception of reality is tantamount to an internal simulation.

So it looks like Plato made quite the prediction, despite not knowing quite how profound it would end up being.

I enjoy bringing this up as it demonstrates that physics is the modern manifestation of classical philosophy; Socrates would be proud of all that we have achieved.

Thanks user, I'm glad you approve.

>Do you work in the field?

I can’t say that I do, however I’m utterly enthralled by cosmology.

As for what I actually do, well I’ll just say that my presence here is analogous to a Boltzmann Brain arising out of a state of chaos due to quantum fluctuations in an alternate reality.

It shares many similarities with the real me; however, the two cannot be reconciled, as one is merely a short-lived illusion.

Therefore here, I am simply OP.

>I'd be happy to answer any questions people may have a bit later.

That’s great to hear, as I’m sure there will be many!

Ayyy, this guy!

...

>The universe is expanding uniformly
>Velocity is proportional to distance

Aaaaand, you're already wrong

Care to elaborate?

The Cosmic Microwave Background isn’t perfectly uniform either.

The expansion of the universe is uniform enough for the visual representation in pic related to have heuristic value.

>pic related

I didn’t intend to get into dark flow ITT, as it’s intended to be a basic overview of cosmology.

Read the thread and stop being a pedant; it’s not helping anyone.

>uniformly ≠ perfectly uniformly

The expansion is uniform user. Don’t be a faggot.

>expansion isotropic to better than 0.001%
>not uniform

Sanity check: the redshift from Hubble flow isn't a Dipper shift, and the recessional velocities as determined from Hubble's law are just apparent, right? There is some recessional velocity due to the Hubble flow but it's not equal to Hd, right?

Or does it actually equal Hd for the right definition of d?

*Doppler shift. Stupid phone.

Ok, Hubble's law is a statement of a direct correlation between the distance to a galaxy and its recessional velocity as determined by the redshift.

It can be stated as:

V=Hd

Or

V = Hr

>pic related
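As a quick numeric sketch of Hubble's law (assuming a round illustrative value of H ≈ 70 km/s/Mpc, close to the measured figures quoted later in the thread):

```python
# Hubble's law: recession velocity is proportional to distance.
# H0 is an assumed round value for illustration, not a measured one.
H0 = 70.0  # km/s per Mpc

def recession_velocity(distance_mpc):
    """Apparent recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at roughly 7000 km/s:
print(recession_velocity(100.0))  # 7000.0
```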

The reported value of the Hubble parameter has varied widely over the years, which is testament to the difficulty of astronomical distance measurement.

However, high-precision experiments have greatly narrowed the range to values of:

>pic related

The Particle Data Group documents quote a best modern value of the Hubble parameter as:

72 km/s per megaparsec (+/- 10%)

This value comes from the use of type Ia supernovae (which give relative distances to about 5%) along with data from Cepheid variables gathered by the Hubble Space Telescope.

The WMAP mission data leads to a Hubble constant of 71 +/- 5% km/s per megaparsec.

The Hubble flow is described by Hubble's law, however it isn't the only thing influencing the motion of galaxies, as the local flow and the motion of the galaxy within its cluster environment also contribute.

You will also be able to note that the expansion is uniform, however not perfectly as this pedantic faggot decided to point out.

I get this, but I'm saying that even if you ignore peculiar velocity, Hubble's law only relates distance d (which is luminosity distance if you use SNR measurements) to the redshift. The redshift for distant objects is a cosmological redshift, not a Doppler shift. So that distant galaxy isn't really receding at v=Hd, right? Or am I nuts?

It's as real as any other relative velocity. The value depends on the path through spacetime along which you transport the 4-vectors to compare them, but the value you compute from the Doppler shift is the correct value for the path of the light. For something far enough away for it to matter, this isn't just Hd, but depends on the history of H for the whole time the light was traveling.

There's no aether; you can't meaningfully separate "expansion through space" from "expansion of space."

Yes, for most astronomical objects the observed spectral lines are all shifted to longer wavelengths, which is known as ‘cosmological redshift’ and is given by:

>pic related

This holds for relatively nearby objects, where z is the cosmological redshift, λobs is the observed wavelength and λrest is the emitted/absorbed wavelength.
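Using those definitions, z follows directly from the two wavelengths. A minimal sketch (the Lyman-alpha line below is just a convenient example; the numbers are chosen to give a clean z = 0.2):

```python
def redshift(lambda_obs, lambda_rest):
    """Cosmological redshift z = (lambda_obs - lambda_rest) / lambda_rest."""
    return (lambda_obs - lambda_rest) / lambda_rest

# Lyman-alpha is emitted at 121.567 nm; if observed at 145.8804 nm:
z = redshift(145.8804, 121.567)
print(round(z, 6))  # 0.2
```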

When dealing with red shift caused solely by the expansion of the universe, the value of the cosmological redshift indicates the recession velocity of the object, or its distance.

For small velocities, cosmological redshift is related to recession velocity (v) through:

>pic related

Where c = speed of light.
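The small-velocity relation described above is just v ≈ cz, which can be sketched as:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity_low_z(z):
    """Low-redshift approximation v = c * z (valid only for z << 1)."""
    return C_KM_S * z

# z = 0.01 corresponds to roughly 3000 km/s:
print(recession_velocity_low_z(0.01))
```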

At larger distances, and correspondingly higher redshifts, using the theory of general relativity gives a more accurate relation for recession velocities, which can be greater than the speed of light.

This does not break the ultimate speed limit of c in special relativity as nothing is actually moving at that speed, but rather the entire distance between the receding object and us is increasing.

This is a complex formula requiring knowledge of the overall expansion history of the universe to calculate correctly, but a simple recession velocity is given by multiplying the comoving distance (D) of the object by the Hubble parameter at that redshift (H) as:

>pic related
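As a hedged sketch of that "complex formula": in a flat matter-plus-Lambda model (all parameters below are assumed round values, H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7), the comoving distance comes from integrating c/H(z') over the light's journey, and v = H(z)·D then gives the recession velocity:

```python
import math

# Assumed illustrative cosmology, not fitted values:
H0 = 70.0                    # km/s per Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7  # flat matter + Lambda universe
C_KM_S = 299_792.458         # speed of light, km/s

def hubble(z):
    """Hubble parameter H(z) in km/s/Mpc for the assumed flat model."""
    return H0 * math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance(z, steps=10_000):
    """D = c * integral_0^z dz'/H(z'), via the trapezoidal rule (Mpc)."""
    dz = z / steps
    integral = sum(
        0.5 * (1 / hubble(i * dz) + 1 / hubble((i + 1) * dz)) * dz
        for i in range(steps)
    )
    return C_KM_S * integral

def v_recession(z):
    """Recession velocity v = H(z) * D; can exceed c without violating SR."""
    return hubble(z) * comoving_distance(z)

# By z ~ 3 the recession velocity already exceeds the speed of light:
print(v_recession(3.0) > C_KM_S)  # True
```

This is exactly the point made earlier in the thread: nothing is moving through space faster than c; the distance itself is growing.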

Now, although the cosmological redshift at first appears to be a similar effect to the more familiar Doppler shift, there is a distinction.

In Doppler Shift, the wavelength of the emitted radiation depends on the motion of the object at the instant the photons are emitted.

If the object is travelling towards us, the wavelength is shifted towards the blue end of the spectrum, if the object is travelling away from us, the wavelength is shifted towards the red end.

In cosmological redshift, the wavelength at which the radiation is originally emitted is lengthened as it travels through expanding space.

Cosmological redshift results from the expansion of space itself and not from the motion of an individual body.

For example, in a distant binary system it is theoretically possible to measure both a Doppler shift and a cosmological redshift:

The Doppler shift would be determined by the motions of the individual stars in the binary – whether they were approaching or receding at the time the photons were emitted.

The cosmological redshift would be determined by how far away the system was when the photons were emitted.

The larger the distance to the system, the longer the emitted photons have travelled through expanding space and the higher the measured cosmological redshift.
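In such a system the two effects combine multiplicatively in the observed shift, (1 + z_obs) = (1 + z_doppler)(1 + z_cosmo), a sketch of which (with made-up example values) is:

```python
def combined_redshift(z_doppler, z_cosmo):
    """Observed shift from stacked effects: (1+z_obs) = (1+z_dop)*(1+z_cos)."""
    return (1 + z_doppler) * (1 + z_cosmo) - 1

# A star receding locally at z_dop = 0.001 in a system at cosmological z = 0.5
# gives an observed redshift of about 0.5015:
print(combined_redshift(0.001, 0.5))
```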

This is the part I think isn't right. Translating the redshift into a recessional velocity assumes the shift is a Doppler shift, i.e. that all of the shift in its entirety occurred when the light was emitted. This is wrong - the wavelength of the light increases AS the light travels because of the expansion of space.

user, you only had to wait a few minutes for that part:

I guess I don't get how you can assign a recessional velocity using the Doppler shift formula when the observed shift isn't a Doppler shift. It doesn't even make sense to me to define it as an apparent velocity since the shift takes place over time instead of at an instant.

It is a Doppler shift. The wavelength of the light is frame dependent, and you're measuring in the average rest frame of the nearby matter, which is changing as the light travels through space.

Here's a saucy .gif to go with this post.

This

Great thread OP, some stuff flies over my head because i'm a pleb engi but it's still interesting

Thanks, user.

It's a lot simpler than it seems, once you get your head around the abstract concepts.

OP ANNOUNCEMENT:

Lads, I don’t intend for this thread to derail into nit-picking over the definition of uniformity and how to assign recessional velocity, as this is supposed to be a brief overview of cosmology and therefore will end up being an introduction for many.

This thread is supposed to be a relatively simplistic summary of the field and not a meticulously accurate description of every aspect.

I derived Einstein’s equation for an expanding universe in an approachable way, for this very reason.

Let’s all calm down and let tentative anons bring their questions forward without worrying about their posts getting lost in a sea of quasi-specialist shit posting.

...

...

PLZ STICKY THIS MODS!

...

Shouldn't it have been
2/mR^2 Et = H^2 - 8(pi)GR/3 (rho)
R^3/R^2 = R

>actual science thread
>Veeky Forums

whats happening?

protip: go search Lawrence Krauss: A Universe from Nothing on YouTube, most of this thread is taken from that video

i watched it. it doesn't have all the information ITT.

It doesn't have everything in it, but a lot of the history of cosmology discussed here is similar

There's a lot more info in this thread than in that video, and it'd be a shame to lose it to 404.

Theeeeeere's way more ITT than in that video! Let's not lose this shit!

...

>cosmology

tl;dr