
Sunday, August 30, 2015

Explaining EmDrive, the ‘physics-defying’ thruster even NASA is puzzled over

http://www.digitaltrends.com/

By Brendan Hesse — August 30, 2015



Even if you don’t keep up with developments in space propulsion technology, you’ve still probably heard about the EmDrive by now. You’ve probably seen headlines declaring it the key to interstellar travel, and claims that it will drastically shorten trips across our solar system, bringing our dreams of people walking on other planets closer to reality. There have even been claims that this highly controversial technology is the key to creating warp drives.

These are bold claims, and as the great cosmologist and astrophysicist Carl Sagan once said, “extraordinary claims require extraordinary evidence.” With that in mind, we thought it’d be helpful to break down what we know about the enigmatic EmDrive, and whether or not it is, in fact, the key to mankind exploring the stars. So without further ado, here’s absolutely everything you need to know about the world’s most puzzling propulsion device.
What is the EmDrive?

See, the EmDrive is a conundrum. First designed in 2001 by aerospace engineer Roger Shawyer, the technology can be summed up as a propellantless propulsion system, meaning that the engine uses no fuel to cause a reaction. Removing the need for fuel would make a craft substantially lighter, and therefore easier to move (and, theoretically, cheaper to make). In addition, the hypothetical drive would be able to reach extremely high speeds — we’re talking potentially getting humans to the outer reaches of the solar system in a matter of months.


The issue is that the entire concept of a reactionless drive is inconsistent with the conservation of momentum, which states that within a closed system, linear and angular momentum remain constant regardless of any changes that take place within that system. More plainly: unless an outside force is applied, an object’s momentum will not change. Reactionless drives are named as such because they lack the “reaction” defined in Newton’s third law: “For every action there is an equal and opposite reaction.” This goes against our current fundamental understanding of physics, because for an action (propulsion of a craft) to take place without a reaction (ignition of fuel and expulsion of mass) is impossible. For such a thing to occur, either an as-yet-undefined phenomenon would have to be taking place, or our understanding of physics is completely wrong.
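To make that bookkeeping concrete, here is a minimal sketch in Python of the momentum balance for a conventional rocket; all numbers are invented for illustration:

    # Conservation of momentum for a conventional rocket (illustrative numbers).
    m_ship = 1000.0      # kg, spacecraft mass after the burn
    m_prop = 10.0        # kg, propellant expelled in one burst
    v_exhaust = -3000.0  # m/s, exhaust velocity (pointing backward)

    # Starting from rest, total momentum is zero, so the ship's momentum
    # must exactly cancel the propellant's momentum:
    v_ship = -(m_prop * v_exhaust) / m_ship
    print(v_ship)  # +30.0 m/s

    # Check: total momentum is still zero.
    print(m_ship * v_ship + m_prop * v_exhaust)  # 0.0

    # A reactionless drive would need v_ship != 0 with m_prop == 0,
    # leaving total momentum nonzero -- exactly what conservation forbids.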
How does the EmDrive “work”?

Setting aside the potentially physics-breaking improbabilities of the technology, let’s break down in simple terms how the proposed drive operates. The EmDrive is what is called an RF resonant cavity thruster, one of several hypothetical machines that use this model. These designs are said to work by having a magnetron push microwaves into a closed, truncated cone; the microwaves then push against the short end of the cone and propel the craft forward. This is in contrast to the form of propulsion current spacecraft use, which burn large quantities of fuel to expel massive amounts of mass and energy to rocket the craft into the air. An often-used metaphor for the implausibility of this design is to compare it to sitting in a car and pushing on the steering wheel to move the car forward.

Tests on experimental versions of the drive with low energy inputs have produced very minimal results, a few micronewtons of thrust (far less than the weight of a penny), and none of the findings have ever been published in a peer-reviewed journal. That means that any and all purportedly positive test results, and the claims of those who have a vested interest in the technology, should be taken with a very big grain of skepticism-flavored salt. It is likely that the thrust recorded was due to interference or an unaccounted-for error with equipment. Until the tests have been verified through proper scientific peer review, one has to assume the drive does not yet work. Still, it’s interesting to note the number of people who have tested the drive and reported achieving thrust:
In 2001, Shawyer was given a £45,000 grant from the British government to test the EmDrive. He claimed his test achieved 0.016 newtons of force using 850 watts of power, but no peer review verified this. It’s worth noting, too, that this figure is low enough to be a potential experimental error (for a sense of scale, see the thrust-to-power sketch after this list).
In 2008, Yang Juan and a team of Chinese researchers at the Northwestern Polytechnical University allegedly verified the theory behind RF resonant cavity thrusters, and subsequently built their own version in 2010, testing the drive multiple times from 2012 to 2014. Test results were purportedly positive, achieving up to 750 mN (millinewtons) of thrust from 2,500 watts of power.
In 2014, NASA researchers tested their own version of an EmDrive, including in a hard vacuum. Once again, the group reported thrust (about 1/1,000 of Shawyer’s claims), and once again, the data was never published in peer-reviewed sources. Other NASA groups are skeptical of the researchers’ claims, but their paper clearly states that the findings neither confirm nor refute the drive, instead calling for further tests.
Most recently, in 2015, that same NASA group tested a version of chemical engineer Guido Fetta’s Cannae Drive (née Q Drive), and reported positive net thrust. Similarly, a research group at Dresden University of Technology also tested the drive, again reporting thrust, both predicted and unexpected.
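For a sense of scale on the first claim above: even a perfect “photon rocket,” which simply shines its input power out the back as light, can produce no more thrust than F = P/c. A quick sketch of that standard benchmark (not from the original article):

    # Upper bound on thrust from radiating power as light: F = P / c.
    c = 2.998e8  # speed of light, m/s

    def photon_thrust(power_w):
        """Maximum thrust in newtons from power_w watts of radiated light."""
        return power_w / c

    limit = photon_thrust(850.0)   # Shawyer's quoted input power
    print(limit)                   # ~2.8e-06 N, i.e. ~2.8 micronewtons
    print(0.016 / limit)           # the claimed 0.016 N exceeds it ~5,600-fold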
Implications of a working EmDrive

From the sections above, it is easy to see why many in the scientific community are wary of the EmDrive and RF resonant cavity thrusters altogether. But this raises a few questions: Why is there such interest in the technology, and why do so many people wish to test it? What exactly are the claims being made about the drive that make it such an attractive idea? While everything from atmospheric temperature-controlling satellites to safer and more efficient automobiles has been drummed up as a potential application for the drive, the real draw of the technology — and the impetus for its creation in the first place — is its implications for space travel.



Spacecraft equipped with a reactionless drive could potentially make it to the Moon in just a few hours, to Mars in two to three months, and to Pluto within two years. These are extremely bold claims, but if the EmDrive does turn out to be a legitimate technology, they may not be all that outlandish. And with no need to pack several tons’ worth of fuel, spacecraft would become cheaper, easier to produce, and far lighter. For NASA and other organizations, including the numerous private space corporations like SpaceX, a lightweight, affordable spacecraft that can reach distant parts of space quickly is something of a unicorn. Still, for that to become a reality, the science has to add up.

Shawyer is adamant that there is no need for pseudoscience or quantum theories to explain how the EmDrive works. Instead, he believes that current models of Newtonian physics offer an explanation, and he has written papers on the subject, one of which is currently being peer reviewed. He expects the paper to be published sometime this year. Shawyer has been criticized by other scientists in the past for incorrect and inconsistent science, but if the paper is indeed published, it may begin to legitimize the EmDrive and spur more testing and research.


His insistence that the drive behaves within the laws of physics hasn’t prevented Shawyer from making bold assertions about the EmDrive. He has gone on record saying that the drive produces “warp bubbles” that allow it to move, and claiming that this is how NASA’s test results were likely achieved. Assertions such as these have garnered much interest online, but they have no clear supporting data and will (at the very least) require extensive testing and debate before being taken seriously by the scientific community — the majority of which remains skeptical of Shawyer’s claims.

Colin Johnston of the Armagh Planetarium wrote an extensive critique of the EmDrive and the inconclusive findings of numerous tests. Similarly, Corey S. Powell of Discover wrote his own indictment of both Shawyer’s EmDrive and Fetta’s Cannae Drive, as well as the recent fervor over NASA’s findings. Both point out the need for greater discretion when reporting on such claims. Professor and mathematical physicist John C. Baez expressed his exhaustion at the conceptual technology’s persistence in debates and discussions, calling the entire notion of a reactionless drive “baloney.” His impassioned dismissal echoes the sentiments of many others. Elsewhere, however, Shawyer’s EmDrive has been met with enthusiasm, including at the website NASASpaceFlight.com, where the information about the most recent Eagleworks tests was first posted, and at New Scientist, which published a favorable and optimistic article on the EmDrive. (New Scientist has since stated that, despite its enduring excitement over the idea, it should have shown more tact when writing on the controversial subject.)

Clearly, the EmDrive and RF resonant cavity thruster technology have a lot to prove. There’s no denying that the technology is an exciting thought, and the number of “successful” tests is intriguing, but one must keep in mind the physics preventing the EmDrive from gaining any traction, and the rather curious lack of peer-reviewed studies on the subject. If the EmDrive is so groundbreaking (and works), surely people like Shawyer would be clamoring for peer-reviewed verification. A demonstrably working EmDrive could open up exciting possibilities for both space and terrestrial travel — not to mention call into question our entire understanding of physics. Until that comes to pass, however, the EmDrive will remain nothing more than science fiction.

Monday, August 10, 2015

Here’s why scientists haven’t invented an impossible space engine


http://arstechnica.com/

Actual peer reviewed papers aren't making "Earth to Moon in four hours" claims.

What if I told you that recent experiments have revealed a revolutionary new method of propulsion that threatens to overthrow the laws of physics as we know them? That its inventor claims it could allow us to travel to the Moon in four hours without the use of fuel? What if I then told you we cannot explain exactly how it works and, in fact, there are some very good reasons why it shouldn’t work at all? I wouldn’t blame you for being sceptical.
The somewhat fantastical EMDrive (short for Electromagnetic Drive) recently returned to the public eye after an academic claimed to have recorded the drive producing measurable thrust. The experiments from Professor Martin Tajmar’s group at the Dresden University of Technology have spawned numerous overexcited headlines making claims that—let’s be very clear here—are not supported by the science.
The idea for the EMDrive was first proposed by Roger Shawyer in 1999 but, tellingly, he has only recently published any work on it in a peer-reviewed scientific journal, and a rather obscure one at that. Shawyer claims his device works by bouncing microwaves around inside a conical cavity. According to him, the taper of the cavity creates a change in the group velocity of the microwaves as they move from one end to the other, which leads to an unbalanced force, which then translates into a thrust. If it worked, the EMDrive would be a propulsion method unlike any other, requiring no propellant to produce thrust.

Fundamental problems

There is, of course, a flaw in this idea. The design instantly violates the principle of conservation of momentum. This states that the total momentum (mass × velocity) of objects in a system must remain the same, and it is linked to Newton’s Third Law. Essentially, for an object to accelerate in one direction, there must be an equal force directed the opposite way. In the case of engines, this usually means firing out particles (such as propellant) or radiation.
The EMDrive is designed to be a closed system that doesn’t emit any particles or radiation. It cannot possibly generate any thrust without breaking some seriously fundamental laws of physics. To put it bluntly, it’s like trying to pull yourself up by your shoelaces and hoping you’ll levitate.
Nonetheless, a few open-minded experimental groups have built prototype EMDrives and all seem to see it generate some form of thrust. This has led to a lot of excitement. Maybe the laws of physics as we know them are wrong?
Eagleworks, a NASA-based group, built a prototype and last year reported 30-50 micronewtons of thrust that could not be explained by any conventional theory. This work was not peer-reviewed. Now, Tajmar’s group in Dresden say they have built a new version of the EMDrive and detected 20 micronewtons of thrust. This is a much smaller value, but it's still significant if it really is generated by some new principle.

Experimental problems

Straightaway, there are problems with this experiment. The abstract states: “Our test campaign cannot confirm or refute the claims of the EMDrive.” Then, a careful reading of the paper reveals this observation: “The control experiment actually gave the biggest thrust … We were really puzzled by this large thrust from our control experiment where we expected to measure zero.”
Yes, the control experiment designed not to generate any thrust still measures a thrust. Then there’s the peculiar gradual way the thrust seems to turn on and off that looks suspiciously like a thermal effect, and then there are acknowledged heating problems. All this leads to the conclusion stated in the paper that “such a set-up does not seem to be able to adequately measure precise thrusts.” Similar problems were seen by the Eagleworks group, with thrust also mysteriously appearing in their control test.
Taken together, these results strongly suggest that the measured signatures of thrust are subtle experimental errors. Possible sources include thermal effects, problems with magnetic shielding, or even a non-uniform gravitational field in the laboratory leading to erroneous force measurements. As a comparison, the force measured in this latest experiment is roughly comparable to the gravitational attraction between two average-sized people (100kg) standing about 15cm apart. It is an extremely small force.
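That comparison is easy to verify with Newton’s law of gravitation; a short check in Python:

    # Gravitational pull between two 100 kg masses 15 cm apart.
    G = 6.674e-11                    # m^3 kg^-1 s^-2
    F = G * 100.0 * 100.0 / 0.15**2
    print(F)                         # ~3.0e-05 N, i.e. ~30 micronewtons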
That the experiments detect a measurable thrust is undeniable. Where the thrust comes from, whether it is real or erroneous, is inconclusive. That the experiments in any way confirm the EMDrive works is a falsehood. This was noted by Tajmar himself, who told the International Business Times “I believe there is no real news here yet.”
The experimental scientists involved have done their jobs to the best of their ability, having tested a hypothesis—albeit a spectacularly unlikely one—and reported their results. These scientists aren’t actually claiming to have invented a warp drive or to have broken the laws of physics. All they’re saying at the moment is that they’ve found something odd and unexplained that might be something new but is likely an experimental artefact that needs further study. The panoply of clickbait headlines and poorly researched articles on the topic are doing something of a disservice to their scientific integrity by claiming otherwise.
The article was originally published at The Conversation

Wednesday, July 29, 2015

Anti-gravity

From Wikipedia, the free encyclopedia
 
Anti-gravity is an idea of creating a place or object that is free from the force of gravity. It does not refer to the lack of weight under gravity experienced in free fall or orbit, or to balancing the force of gravity with some other force, such as electromagnetism or aerodynamic lift. Anti-gravity is a recurring concept in science fiction, particularly in the context of spacecraft propulsion. An early example is the gravity blocking substance "Cavorite" in H. G. Wells' The First Men in the Moon.
In Newton's law of universal gravitation, gravity was an external force transmitted by unknown means. In the 20th century, Newton's model was replaced by general relativity, in which gravity is not a force but the result of the geometry of spacetime. Under general relativity, anti-gravity is impossible except under contrived circumstances.[1][2][3] Quantum physicists have postulated the existence of the graviton, a massless elementary particle that would transmit the gravitational force; whether gravitons could be created or destroyed is unclear.
"Anti-gravity" is often used colloquially to refer to devices that look as if they reverse gravity even though they operate through other means, such as lifters, which fly in the air by using electromagnetic fields.[4][5]


Hypothetical solutions

Gravity shields

In 1948 the successful businessman Roger Babson (founder of Babson College) formed the Gravity Research Foundation to study ways to reduce the effects of gravity.[6] Its efforts were initially somewhat "crankish", but it held occasional conferences that drew such people as Clarence Birdseye, known for his frozen-food products, and Igor Sikorsky, inventor of the helicopter. Over time the Foundation turned its attention away from trying to control gravity to simply better understanding it. The Foundation nearly disappeared after Babson's death in 1967. However, it continues to run an essay award, offering prizes of up to $5,000. As of 2013, it was still administered out of Wellesley, Massachusetts, by George Rideout, Jr., son of the foundation's original director.[7] Winners include California astrophysicist George F. Smoot, who later won the 2006 Nobel Prize in Physics.

General relativity research in the 1950s

General relativity was introduced in the 1910s, but development of the theory was greatly slowed by a lack of suitable mathematical tools. Although it appeared that anti-gravity was outlawed under general relativity, there were a number of efforts to study potential solutions that allowed anti-gravity-type effects.
It is claimed the US Air Force also ran a study effort throughout the 1950s and into the 1960s.[8] Former Lieutenant Colonel Ansel Talbert wrote two series of newspaper articles claiming that most of the major aviation firms had started gravity control propulsion research in the 1950s. However, there is little outside confirmation of these stories, and since they take place in the midst of the policy by press release era, it is not clear how much weight these stories should be given.
It is known that there were serious efforts underway at the Glenn L. Martin Company, which formed the Research Institute for Advanced Studies.[9][10] Major newspapers announced the contract made between theoretical physicist Burkhard Heim and the Glenn L. Martin Company. Another private-sector effort to better understand gravitation was the creation of the Institute for Field Physics at the University of North Carolina at Chapel Hill in 1956 by Gravity Research Foundation trustee Agnew H. Bahnson.
Military support for anti-gravity projects was terminated by the Mansfield Amendment of 1973, which restricted Department of Defense spending to only the areas of scientific research with explicit military applications. The Mansfield Amendment was passed specifically to end long-running projects that had little to show for their efforts.
Under general relativity, gravity is the result of objects following the geometry of spacetime, which is deformed by local mass-energy. The theory holds that it is this altered shape of space, deformed by massive objects, that causes gravity, which is a property of deformed space rather than a true force. Although the equations cannot normally produce a "negative geometry", it is possible to do so using "negative mass", and the same equations do not, of themselves, rule out the existence of negative mass.
Both general relativity and Newtonian gravity appear to predict that negative mass would produce a repulsive gravitational field. In particular, Sir Hermann Bondi proposed in 1957 that negative gravitational mass, combined with negative inertial mass, would comply with the strong equivalence principle of general relativity theory and the Newtonian laws of conservation of linear momentum and energy. Bondi's proof yielded singularity free solutions for the relativity equations.[11] In July 1988, Robert L. Forward presented a paper at the AIAA/ASME/SAE/ASEE 24th Joint Propulsion Conference that proposed a Bondi negative gravitational mass propulsion system.[12]
Bondi pointed out that a negative mass will fall toward (and not away from) "normal" matter, since although the gravitational force is repulsive, a negative mass (according to Newton's law, F = ma) responds by accelerating in the direction opposite the force. Normal mass, on the other hand, will fall away from negative matter. He noted that two identical masses, one positive and one negative, placed near each other will therefore self-accelerate along the line between them, with the negative mass chasing after the positive mass.[11] Because the negative mass acquires negative kinetic energy, the total energy of the accelerating masses remains zero. Forward pointed out that the self-acceleration effect is due to the negative inertial mass, and could be induced without any gravitational force between the particles.[12]
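A toy one-dimensional integration makes Bondi's runaway pair vivid. In Newtonian gravity the acceleration of each body depends only on the other body's mass, a_i = G·m_j·(x_j − x_i)/|x_j − x_i|^3; the sketch below (arbitrary units, an illustration rather than a physical model) shows the pair accelerating together while their separation stays fixed:

    # Bondi's self-accelerating pair: +1 and -1 mass units on a line.
    G = 1.0
    m_pos, m_neg = 1.0, -1.0
    x_pos, x_neg = 0.0, 1.0    # negative mass starts on the right
    v_pos, v_neg = 0.0, 0.0
    dt = 0.01

    for _ in range(1000):
        r = x_neg - x_pos
        a_pos = G * m_neg * r / abs(r)**3                # positive mass pushed away
        a_neg = G * m_pos * (x_pos - x_neg) / abs(r)**3  # negative mass chases it
        v_pos += a_pos * dt
        v_neg += a_neg * dt
        x_pos += v_pos * dt
        x_neg += v_neg * dt

    print(x_neg - x_pos)   # separation: still 1.0
    print(v_pos, v_neg)    # both -10.0: the pair speeds up together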
The Standard Model of particle physics, which describes all presently known forms of matter, does not include negative mass. Although cosmological dark matter may consist of particles outside the Standard Model whose nature is unknown, their mass is ostensibly known – since they were postulated from their gravitational effects on surrounding objects, which implies their mass is positive. The proposed cosmological dark energy, on the other hand, is more complicated, since according to general relativity the effects of both its energy density and its negative pressure contribute to its gravitational effect.

Fifth force

Under general relativity, any form of energy couples with spacetime to create the geometries that cause gravity. A longstanding question was whether or not these same equations applied to antimatter. The issue was considered solved in 1960 with the development of CPT symmetry, which demonstrated that antimatter follows the same laws of physics as "normal" matter, and therefore has positive energy content and also causes (and reacts to) gravity like normal matter (see gravitational interaction of antimatter).
For much of the last quarter of the 20th century, the physics community was involved in attempts to produce a unified field theory, a single physical theory that explains the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. Scientists have made progress in unifying the three quantum forces, but gravity has remained "the problem" in every attempt. This has not stopped any number of such attempts from being made, however.
Generally these attempts tried to "quantize gravity" by positing a particle, the graviton, that carried gravity in the same way that photons (light) carry electromagnetism. Simple attempts along this direction all failed, however, leading to more complex examples that attempted to account for these problems. Two of these, supersymmetry and the relativity related supergravity, both required the existence of an extremely weak "fifth force" carried by a graviphoton, which coupled together several "loose ends" in quantum field theory, in an organized manner. As a side effect, both theories also all but required that antimatter be affected by this fifth force in a way similar to anti-gravity, dictating repulsion away from mass. Several experiments were carried out in the 1990s to measure this effect, but none yielded positive results.[13]
In 2013 CERN looked for an antigravity effect in an experiment designed to study the energy levels within antihydrogen. The antigravity measurement was just an "interesting sideshow" and was inconclusive.[14]

General-relativistic "warp drives"

There are solutions of the field equations of general relativity which describe "warp drives" (such as the Alcubierre metric) and stable, traversable wormholes. This by itself is not significant, since any spacetime geometry is a solution of the field equations for some configuration of the stress–energy tensor field (see exact solutions in general relativity). General relativity does not constrain the geometry of spacetime unless outside constraints are placed on the stress–energy tensor. Warp-drive and traversable-wormhole geometries are well-behaved in most areas, but require regions of exotic matter; thus they are excluded as solutions if the stress–energy tensor is limited to known forms of matter. Dark matter and dark energy are not currently understood well enough to make general statements regarding their applicability to a warp drive.

Breakthrough Propulsion Physics Program

During the close of the twentieth century, NASA provided funding for the Breakthrough Propulsion Physics Program (BPP) from 1996 through 2002. This program studied a number of "far out" designs for space propulsion that were not receiving funding through normal university or commercial channels. Anti-gravity-like concepts were investigated under the name "diametric drive". The work of the BPP program continues in the independent, non-NASA-affiliated Tau Zero Foundation.[15]

Empirical claims and commercial efforts

There have been a number of attempts to build anti-gravity devices, and a small number of reports of anti-gravity-like effects in the scientific literature. None of the examples that follow are accepted as reproducible examples of anti-gravity.

Gyroscopic devices

A "kinemassic field" generator from U.S. Patent 3,626,605: Method and apparatus for generating a secondary gravitational force field
 
Gyroscopes produce a force when twisted that operates "out of plane" and can appear to lift themselves against gravity. Although this force is well understood to be illusory, even under Newtonian models, it has nevertheless generated numerous claims of anti-gravity devices and any number of patented devices. None of these devices has ever been demonstrated to work under controlled conditions, and they have often become the subject of conspiracy theories as a result. A famous example is that of Professor Eric Laithwaite of Imperial College London, in a 1974 address to the Royal Institution.[16]
Another "rotating device" example is shown in a series of patents granted to Henry Wallace between 1968 and 1974. His devices consist of rapidly spinning disks of brass, a material made up largely of elements with a total half-integer nuclear spin. He claimed that by rapidly rotating a disk of such material, the nuclear spin became aligned, and as a result created a "gravitomagnetic" field in a fashion similar to the magnetic field created by the Barnett effect.[17][18][19] No independent testing or public demonstration of these devices is known.
In 1989, it was reported that weight decreases along the axis of a right-spinning gyroscope.[20] A test of this claim a year later yielded null results.[21] A recommendation to conduct further tests was made at a 1999 AIP conference.[22]

Thomas Townsend Brown's gravitator

In 1921, while still in high school, Thomas Townsend Brown found that a high-voltage Coolidge tube seemed to change mass depending on its orientation on a balance scale. Through the 1920s Brown developed this into devices that combined high voltages with materials of high dielectric constant (essentially large capacitors); he called such a device a "gravitator". Brown claimed to observers and in the media that his experiments were showing anti-gravity effects. He continued his work and produced a series of high-voltage devices in the following years, attempting to sell his ideas to aircraft companies and the military. He coined the names Biefeld–Brown effect and electrogravitics for his devices. Brown tested his asymmetrical capacitor devices in a vacuum, supposedly showing that the thrust was not a more down-to-earth electrohydrodynamic effect generated by high-voltage ion flow in air.
Electrogravitics is a popular topic in ufology, anti-gravity and free-energy circles, and among government conspiracy theorists and related websites, in books and publications claiming that the technology became highly classified in the early 1960s and that it is used to power UFOs and the B-2 bomber.[23] There are also studies and videos on the internet purported to show lifter-style capacitor devices working in a vacuum, and therefore not receiving propulsion from ion drift or ion wind generated in air.[23][24]
Follow-up studies on Brown's work and other claims have been conducted by R. L. Talley in a 1990 US Air Force study, NASA scientist Jonathan Campbell in a 2003 experiment,[25] and Martin Tajmar in a 2004 paper.[26] They found that no thrust could be observed in a vacuum, and that Brown's and other ion-lifter devices produce thrust along their axis regardless of the direction of gravity, consistent with electrohydrodynamic effects.

Gravitoelectric coupling

In 1992, the Russian researcher Eugene Podkletnov claimed to have discovered, whilst experimenting with superconductors, that a fast rotating superconductor reduces the gravitational effect.[27] Many studies have attempted to reproduce Podkletnov's experiment, always to negative results.[28][29][30][31]
In a series of papers published between 1991 and 1993, Ning Li and Douglas Torr of the University of Alabama in Huntsville proposed how a time-dependent magnetic field could cause the spins of the lattice ions in a superconductor to generate detectable gravitomagnetic and gravitoelectric fields.[32][33][34] In 1999, Li and her team appeared in Popular Mechanics, claiming to have constructed a working prototype to generate what she described as "AC Gravity." No further evidence of this prototype has been offered.[35][36]
Douglas Torr and Timir Datta were involved in the development of a "gravity generator" at the University of South Carolina.[37] According to a leaked document from the university's Office of Technology Transfer, confirmed to Wired reporter Charles Platt in 1998, the device would create a "force beam" in any desired direction, and the university planned to patent and license it. No further information about this university research project or the "gravity generator" device was ever made public.[38]

Göde Award

The Institute for Gravity Research of the Göde Scientific Foundation has tried to reproduce many of the different experiments which claim any "anti-gravity" effects. All attempts by this group to observe an anti-gravity effect by reproducing past experiments have been unsuccessful thus far. The foundation has offered a reward of one million euros for a reproducible anti-gravity experiment.[39]

Conventional effects that mimic anti-gravity effects

  • Magnetic levitation suspends an object against gravity by use of electromagnetic forces. While visually impressive, gravitation itself functions normally in such devices. Various alleged anti-gravity devices may in reality work by electromagnetism.
  • A tidal force causes objects to move along diverging paths near a massive body (such as a planet or star), producing effects that seem like repulsion or disruptive forces when observed locally. This is not anti-gravity. In Newtonian mechanics, the tidal force is the effect of the larger object's gravitational force being different at the differing locations of the diverging bodies. Equivalently, in Einsteinian gravity, the tidal force is the effect of the diverging bodies following different paths in the negatively curved spacetime around the larger body.
  • Large amounts of normal matter can be used to produce a gravitational field that compensates for the effects of another gravitational field, though the entire assembly will still be attracted to the source of the larger field. Physicist Robert L. Forward proposed using lumps of degenerate matter to locally compensate for the tidal forces near a neutron star.
  • Ionocraft, sometimes referred to as "Lifters", have been claimed to defy gravity, but in fact they use accelerated ions which have been stripped from the air around them to produce thrust. The thrust produced by one of these devices is not enough to lift its own power supply. Specifically, a special type of electrohydrodynamic thruster uses the Biefeld–Brown effect to hover.
  • Archimedes' principle, under which a body experiences an upthrust equal to the weight of the air it displaces, mimics the effects of 'antigravity' and is responsible for the confusion regarding whether superconductors produce gravitational effects. When a superconductor is first weighed at room temperature, it displaces a given volume of air and experiences a certain (small) upthrust. When the same superconductor is then cooled with liquid nitrogen, the cold nitrogen gas around it is significantly denser (approximately 7%) than room-temperature air, which produces a larger upthrust and hence an apparent reduction in measured weight.
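A back-of-envelope estimate of the size of this buoyancy artifact, using a hypothetical 10 cm³ sample and the ~7% density figure from the text above:

    # Apparent weight loss from the denser cold gas (illustrative values).
    rho_air = 1.20    # kg/m^3, room-temperature air (approximate)
    V = 10e-6         # m^3, i.e. 10 cm^3, hypothetical sample volume
    excess = 0.07     # cold boil-off gas ~7% denser than room air

    delta_m = excess * rho_air * V   # extra buoyancy, expressed as mass
    print(delta_m * 1e6)             # ~0.84 milligrams of apparent loss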

Dark energy

From Wikipedia, the free encyclopedia
Not to be confused with Dark flow, Dark fluid, or Dark matter.
In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe.[1] Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. According to the Planck mission team, and based on the standard model of cosmology, on a mass–energy equivalence basis, the observable universe contains 26.8% dark matter and 68.3% dark energy (for a total of 95.1%), with 4.9% ordinary matter.[2][3][4][5] Again on a mass–energy equivalence basis, the density of dark energy (6.91 × 10^−27 kg/m^3) is very low, much less than the density of ordinary matter or dark matter within galaxies. However, it comes to dominate the mass–energy of the universe because it is uniform across space.[6][7]
Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[8] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Scalar fields that do change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.
High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today.
Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations. Dark energy has been used as a crucial ingredient in a recent attempt to formulate a cyclic model for the universe.[9]


Nature of dark energy

Many things about the nature of dark energy remain matters of speculation. The evidence for dark energy is indirect but comes from three independent sources:
  • Distance measurements and their relation to redshift, which suggest the universe has expanded more in the last half of its life.[10]
  • The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature).
  • Measures of large-scale wave patterns of mass density in the universe, from which it can be inferred.
Dark energy is thought to be very homogeneous, not very dense and is not known to interact through any of the fundamental forces other than gravity. Since it is quite rarefied—roughly 10^−30 g/cm^3—it is unlikely to be detectable in laboratory experiments. Dark energy can have such a profound effect on the universe, making up 68% of universal density, only because it uniformly fills otherwise empty space. The two leading models are a cosmological constant and quintessence. Both models include the common characteristic that dark energy must have negative pressure.
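To see just how rarefied that is, consider the total dark energy contained in a volume the size of the Earth, using the density quoted above (a rough plug-in, nothing more):

    # Dark energy inside an Earth-sized volume.
    rho_de = 6.91e-27     # kg/m^3, Planck-based density quoted earlier
    V_earth = 1.083e21    # m^3, volume of the Earth
    print(rho_de * V_earth * 1e6)   # ~7.5 milligrams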

Effect of dark energy: a small constant negative pressure of vacuum


Diagram representing the accelerated expansion of the universe due to dark energy.
Independently of its actual nature, dark energy would need to have a strong negative pressure (acting repulsively) in order to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other things just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure throughout the universe causes an acceleration of the expansion if the universe is already expanding, or a deceleration of the contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion", which is a colorful but possibly confusing expression. In fact a negative pressure does not influence the gravitational interaction between masses—which remains attractive—but rather alters the overall evolution of the universe at the cosmological scale, typically resulting in the accelerating expansion of the universe despite the attraction among the masses present in the universe. The acceleration is simply a function of dark energy density. Dark energy is persistent: its density remains constant (experimentally, to within a factor of ten), i.e. it does not get diluted when space expands.
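The pressure condition can be stated compactly. For an FLRW universe, the standard acceleration equation (a textbook result, quoted here for reference) reads

    \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)

so the expansion accelerates (\ddot{a} > 0) only when p < −ρc²/3; a cosmological constant, with p = −ρc², comfortably satisfies this.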

Evidence of existence

Supernovae


A Type Ia supernova (bright spot on the bottom-left) near a galaxy
In 1998, published observations of Type Ia supernovae ("one-A") by the High-Z Supernova Search Team[11] followed in 1999 by the Supernova Cosmology Project[12] suggested that the expansion of the universe is accelerating.[13] The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt and Adam G. Riess for their leadership in the discovery.[14][15]
Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model.[16] Some argue that the only direct indication of the existence of dark energy is distance measurements and their associated redshifts; cosmic microwave background anisotropies and baryon acoustic oscillations merely show that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the locally measured Hubble constant.[17]
Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow the expansion history of the universe to be measured by looking at the relationship between the distance to an object and its redshift, which indicates how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, the absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme and consistent luminosity.
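In practice the distance follows from the distance modulus, m − M = 5·log10(d / 10 pc). A short sketch, taking the rough Type Ia absolute magnitude M ≈ −19.3 and an invented apparent magnitude (extinction and K-corrections ignored):

    def candle_distance_pc(m_app, M_abs=-19.3):
        """Luminosity distance in parsecs from the distance modulus."""
        return 10 ** ((m_app - M_abs + 5.0) / 5.0)

    print(candle_distance_pc(24.0))   # ~4.6e9 pc, a few billion parsecs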
Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter.[18]

Cosmic microwave background


Estimated distribution of matter and energy in the universe[19]
The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass/energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the CMB spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%.[16] The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter and 4.5% ordinary matter.[4] Work done in 2013 based on the Planck spacecraft observations of the CMB gave a more accurate estimate of 68.3% of dark energy, 26.8% of dark matter and 4.9% of ordinary matter.[20]

Large-scale structure

The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density.
A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown.[21][22] The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids roughly 150 Mpc in diameter, surrounded by galaxies, the voids were used as standard rulers to determine distances to galaxies as far as 2,000 Mpc (redshift 0.6). This allowed astronomers to determine more accurately the speeds of the galaxies from their redshift and distance; the data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10.[22] This provides a confirmation of cosmic acceleration independent of supernovae.

Late-time integrated Sachs-Wolfe effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe.[23] It was reported at high significance in 2008 by Ho et al.[24] and Giannantonio et al.[25]

Observational Hubble constant data

A new approach to testing dark energy through observational Hubble constant data (OHD) has gained significant attention in recent years.[26][27][28][29] Here the Hubble constant is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers".[30] In effect, this approach provides standard clocks in the universe. The core of the idea is measuring the differential age evolution of these cosmic chronometers as a function of redshift, which provides a direct estimate of the Hubble parameter H(z) = −1/(1+z) dz/dt ≈ −1/(1+z) Δz/Δt. The merit of this approach is clear: the reliance on a differential quantity, Δz/Δt, minimizes many common issues and systematic effects, and, as a direct measurement of the Hubble parameter rather than of its integral (as with supernovae and baryon acoustic oscillations, BAO), it carries more information and is computationally appealing. For these reasons, it has been widely used to examine the accelerated cosmic expansion and study properties of dark energy.
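A sketch of the estimator with two invented chronometer data points (ages in Gyr; only the unit-conversion constants are standard):

    # H(z) ~ -1/(1+z) * dz/dt from a pair of "cosmic chronometer" points.
    z1, t1 = 0.42, 9.10    # redshift, mean stellar age in Gyr (invented)
    z2, t2 = 0.38, 9.42    # lower redshift, slightly older population

    z_mid = 0.5 * (z1 + z2)
    H_per_gyr = -(z2 - z1) / (t2 - t1) / (1.0 + z_mid)   # in 1/Gyr

    GYR_S = 3.156e16        # seconds per Gyr
    KMS_MPC = 3.241e-20     # 1 km/s/Mpc expressed in 1/s
    print(H_per_gyr / GYR_S / KMS_MPC)   # ~87 km/s/Mpc at z ~ 0.4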

Theories of explanation

Cosmological constant

Main article: Cosmological constant
For more details on this topic, see Equation of state (cosmology).

Lambda, the letter that represents the cosmological constant
The simplest explanation for dark energy is that it is simply the "cost of having space": that is, a volume of space has some intrinsic, fundamental energy. This is the cosmological constant, sometimes called Lambda (hence Lambda-CDM model) after the Greek letter Λ, the symbol used to represent this quantity mathematically. Since energy and mass are related by E = mc^2, Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. In fact, most theories of particle physics predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). The cosmological constant is estimated by cosmologists to be on the order of 10^−29 g/cm^3, or about 10^−120 in reduced Planck units. Particle physics predicts a natural value of 1 in reduced Planck units, leading to a large discrepancy.
The cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate. The reason why a cosmological constant has negative pressure can be seen from classical thermodynamics: energy must be lost from inside a container to do work on the container. A change in volume dV requires work equal to a change of energy −P dV, where P is the pressure. But the amount of energy inside a container full of vacuum actually increases when the volume increases (dV is positive), because the energy is equal to ρV, where ρ (rho) is the energy density of the cosmological constant. Therefore, P is negative and, in fact, P = −ρ.
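The same argument in symbols, in units with c = 1 (a restatement of the reasoning above, nothing new):

    E = \rho V                  (vacuum energy in a volume V, with \rho constant)
    dE = -P \, dV               (first law, no heat exchange)
    \rho \, dV = -P \, dV  \;\Longrightarrow\;  P = -\rho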
A major outstanding problem is that most quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, more than 100 orders of magnitude too large.[8] This would need to be cancelled almost, but not exactly, by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero,[31] which does not help because supersymmetry must be broken. The present scientific consensus amounts to extrapolating the empirical evidence where it is relevant to predictions, and fine-tuning theories until a more elegant solution is found. Technically, this amounts to checking theories against macroscopic observations. Unfortunately, since the known error margin in the constant bears more on the fate of the universe than on its present state, many such "deeper" questions remain open.
In spite of its problems, the cosmological constant is in many respects the most economical solution to the problem of cosmic acceleration. One number successfully explains a multitude of observations. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

Quintessence

Main article: Quintessence (physics)
In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.
No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time.[32] Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.
The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called tracker behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.[33]
In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that dark energy models realizing this crossing require at least two degrees of freedom. This is the so-called quintom scenario.
Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Alternative ideas

Some alternatives to dark energy aim to explain the observational data by a more refined use of established theories, focusing, for example, on the gravitational effects of density inhomogeneities, or on consequences of electroweak symmetry breaking in the early universe. If we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration.[34][35][36][37] A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble.[38][39][40]
Another class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. An example of this type of theory is the theory of dark fluid. Another class of theories that unifies dark matter and dark energy comprises covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics account for what has been attributed to the presence of dark energy and dark matter.[41]
A 2011 paper in the journal Physical Review D by Christos Tsagas, a cosmologist at Aristotle University of Thessaloniki in Greece, argued that the accelerated expansion of the universe may be an illusion caused by our motion relative to the rest of the universe. The paper cites data showing that the 2.5-billion-light-year-wide region of space we are inside of is moving very quickly relative to everything around it. If the theory is confirmed, then dark energy would not exist (but the "dark flow" still might).[42][43]
Some theorists think that dark energy and cosmic acceleration are a failure of general relativity on very large scales, larger than superclusters. However, most attempts at modifying general relativity have turned out to be either equivalent to theories of quintessence or inconsistent with observations. Other ideas for dark energy have come from string theory, brane cosmology and the holographic principle, but these have not yet proved as compelling as quintessence and the cosmological constant.
On string theory, an article in the journal Nature described:[44]
String theories, popular with many particle physicists, make it possible, even desirable, to think that the observable universe is just one of 10^500 universes in a grander multiverse, says Leonard Susskind, a cosmologist at Stanford University in California. The vacuum energy will have different values in different universes, and in many or most it might indeed be vast. But it must be small in ours because it is only in such a universe that observers such as ourselves can evolve.
Paul Steinhardt in the same article criticizes string theory's explanation of dark energy stating "...Anthropics and randomness don't explain anything... I am disappointed with what most theorists are willing to accept".[44]
Another set of proposals is based on the possibility of a double metric tensor for space-time.[45][46] It has been argued that time reversed solutions in general relativity require such double metric for consistency, and that both dark matter and dark energy can be understood in terms of time reversed solutions of general relativity.[47]
It has been shown that if inertia is assumed to be due to the effect of horizons on Unruh radiation then this predicts galaxy rotation and a cosmic acceleration similar to that observed.[48]

Variable Dark Energy models


The equation of state of dark energy as a function of redshift, for four common models.[49]
A: CPL Model,
B: Jassal Model,
C: Barboza & Alcaniz Model,
D: Wetterich Model
In general, dark energy can be variable. Modern observational data have determined the present density of dark energy. Using baryon acoustic oscillations, we can investigate the effect of dark energy over the history of the universe and constrain the parameters of its equation of state. One proposed route toward answering the question of dark energy is to assume that it is variable, and several models have been proposed to that end. One of the most popular is the Chevallier–Polarski–Linder (CPL) model.[50][51] Other common models are Barboza & Alcaniz (2008),[52] Jassal et al. (2005),[53] and Wetterich (2004).[54]
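For reference, the CPL parametrization mentioned above takes the standard form (with scale factor a = 1/(1+z) and fit parameters w_0, w_a):

    w(a) = w_0 + w_a (1 - a)
    \quad\Longleftrightarrow\quad
    w(z) = w_0 + w_a \, \frac{z}{1+z}

Setting w_0 = −1 and w_a = 0 recovers the cosmological constant.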

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, the expansion is thought to have been decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases more quickly than that of dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
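That scaling fixes when acceleration begins. With matter diluting as a^−3 and the dark energy constant, acceleration starts once ρ_matter < 2ρ_Λ (from the requirement ρ + 3p < 0 in the acceleration equation). A quick plug-in of the round present-day density fractions used elsewhere in this article:

    # Redshift at which acceleration begins, for Omega_m = 0.3, Omega_L = 0.7.
    Omega_m, Omega_L = 0.3, 0.7
    a_acc = (Omega_m / (2.0 * Omega_L)) ** (1.0 / 3.0)
    print(1.0 / a_acc - 1.0)   # z ~ 0.67, broadly consistent with the estimate above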
If the acceleration continues indefinitely, the ultimate result will be that galaxies outside the local supercluster will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light.[55] This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.[56][57] However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future[58] because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away.[57]
As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely[59][60] (see Future of an expanding universe). The Earth, the Milky Way, and the Virgo Supercluster, however, would remain virtually undisturbed while the rest of the universe recedes and disappears from view. In this scenario, the local supercluster would ultimately suffer heat death, just as was thought for the flat, matter-dominated universe before measurements of cosmic acceleration.
There are some very speculative ideas about the future of the universe. One suggests that phantom energy causes divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch".[61] Some scenarios, such as the cyclic model, suggest this could be the case. It is also possible the universe may never have an end and continue in its present state forever (see The Second Law as a law of disorder). While these ideas are not supported by observations, they are not ruled out.
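The Big Rip scenario can be made quantitative. For a hypothetical constant equation of state w < −1 ("phantom" dark energy), the Friedmann equations give a scale factor of the form

    a(t) \propto (t_{\text{rip}} - t)^{2/(3(1+w))},

and since the exponent is negative for w < −1, a(t) diverges at the finite time t_rip, with the expansion rate blowing up and tearing apart bound structures along the way. For w ≥ −1 no such finite-time singularity occurs.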

History of discovery and previous speculation

The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution of the gravitational field equations that would lead to a static universe, effectively using dark energy to balance gravity.[62] Not only was the mechanism an inelegant example of fine-tuning, but it was also later realized that Einstein's static universe would be unstable: local inhomogeneities would ultimately lead to either runaway expansion or contraction. If the universe expands slightly, the expansion releases vacuum energy, which causes yet more expansion; likewise, a universe that contracts slightly will continue contracting. Such disturbances are inevitable, given the uneven distribution of matter throughout the universe. More importantly, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding, not static at all. Einstein reportedly referred to his failure to predict a dynamic universe, in contrast to a static one, as his greatest blunder.[63]
Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.
Nearly all inflation models predict that the total (matter + energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter alone, usually 95% cold dark matter and 5% ordinary matter (baryons). These models were successful at forming realistic galaxies and clusters, but problems appeared in the late 1980s: notably, the model required a value for the Hubble constant lower than preferred by observations, and it under-predicted the observed large-scale galaxy clustering. These difficulties sharpened after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s, including the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from the 1998 supernova observations of accelerated expansion by Riess et al.[11] and Perlmutter et al.,[12] after which the Lambda-CDM model became the leading model. Dark energy was soon supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the CMB, showing that the total (matter + energy) density is close to 100% of critical. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is only around 30% of critical. The large gap between these two values supports a smooth component of dark energy making up the remainder. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and to pin down the key parameters more accurately.
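For reference, the critical density is the total density for which the spatial geometry of the universe is flat,

    \rho_c = \frac{3H^2}{8\pi G} \approx 9 \times 10^{-27}\ \text{kg/m}^3

for an assumed H_0 ≈ 70 km/s/Mpc, equivalent to only a few hydrogen atoms per cubic metre.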
The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael Turner in 1998.[64]
As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including those of the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS show that the average behavior (i.e., the equation of state) of dark energy matches Einstein's cosmological constant to a precision of 10%.[65] Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years, including the period preceding cosmic acceleration.