Alarming data from China was met with a soothing hint about monetary
policy. But central banks cannot keep pumping cheap credit into a series of
asset bubbles
Like
children clinging to their parents, stock market traders turned to
their central banks last week as they sought protection from the
frightening economic figures coming out of China. Surely, they asked,
the central banks would ward off the approaching bogeymen, as they had
so many times since the 2008 crash.
The US Federal Reserve
came up with the goods. William Dudley, president of the bank’s New
York branch, hinted that the interest rate rise many had expected next
month was likely to be delayed.
A signal that borrowing costs would remain at rock bottom was all it
took. After Black Monday and Wobbly Tuesday, the markets recovered to
regain almost all their recent losses.
It was just as if they had said to themselves: who cares if China’s
economy is slowing; the “Greenspan put”, which so famously propped up US
stock markets during the 1990s and early 2000s with one interest rate
cut after another, is still in operation.
The meeting of the world’s most important central bankers in Jackson
Hole, Wyoming, this weekend only confirmed the need for Britain, Japan,
the eurozone and the US to keep monetary policy loose.
Yet the palliative offered by the Fed is akin to a parent soothing
fears with another round of ice-creams despite expanding waistlines and
warnings from the dentist and the doctor.
According to some City analysts, the stock markets are pumped
with so much cheap credit that a crash is just around the corner. And
they worry that when that crash comes, the central banks are all out of
moves to prevent the aftershocks from causing a broader collapse.
Since 2008 the Fed has pumped around $4.5 trillion into the financial system. The Bank of England
stopped at £375bn. The Bank of Japan is still adding to its post-crash
stimulus with around $700bn a year and the Frankfurt-based European
Central Bank will have matched its cousin in Tokyo by the end of the
year.
In each case, the central bank has adopted quantitative easing, which
involves buying government debt to drive up its price. A higher price
lowers the returns and encourages investors to go elsewhere in search of
gains. It has meant a big shift in the portfolios of fund managers in
favour of shares. Apart from a few blips due to the Greek crisis, stock
markets have boomed. This summer, the FTSE 100 soared past 2008 levels
to top its 1999 peak.
But China, which has borrowed heavily to keep its economy moving, is
running out of steam. Beijing has said it does not want to encourage
another borrowing boom. But to prevent a crash, it is doing just that.
In the last two weeks it has cut interest rates and loosened borrowing
limits. It has even invested directly in the market, buying the shares
of smaller companies.
So we face the shocking prospect of central bankers, in thrall to
stock market gyrations, making the world a more unstable place with
promises of yet more cheap credit.
There are a few alternative courses of action that Bank of England governor Mark Carney
could still propose. He could tell politicians that the only
sustainable way to get their economies moving is with hard cash from
taxed wealth and incomes. If that is too unpalatable, governments should
borrow directly to fund public infrastructure and productivity
improvements.
And if the government is too embarrassed to admit to voters that it
needs to borrow money, then the least central banks can do is sign deals
with high street banks to lend, rather than hoping they will take QE
funds and do something useful with them. Because the evidence is already
there for all to see that investors would prefer to pump the money into
the stock market and property, both of them inherently unstable and
prone to violent crashes.
Google feels the EU heat
“We have owned the internet,” Barack Obama crowed in a video
interview this February. “Our companies have created it, expanded it,
perfected it.” But from tax to privacy and now a string of antitrust
investigations, one of those companies, Google, is under attack on multiple fronts in Europe.
The US president argued that what has been presented as high-minded
intervention by regulators is in fact a commercially driven bid to
protect old-world technology companies from the Silicon Valley invaders.
Europe’s most powerful regulator, the Brussels competition policy chief Margrethe Vestager, has taken aim at Google Shopping,
its price-comparison service. In April, she launched a legal process
that could result in a big fine. On Thursday, the search giant filed a
150-page rebuttal.
First of all, Google says it has not choked off traffic to rival
shopping price sites – the traffic it sends to these kinds of sites has
increased by 227% over a decade (its own data shows). Sounds like a big
number. Until you look at the growth in Google’s own traffic over that
period. The number of searches worldwide now stands at somewhere near
1.2 trillion a year – an increase of 750% in a decade.
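One way to see why that caveat matters is to turn both percentages into growth multiples. This is an illustrative calculation only, treating worldwide search volume as a rough proxy for Google’s overall outbound traffic, which is not a claim made in the filing:

$$\frac{1 + 2.27}{1 + 7.5} = \frac{3.27}{8.5} \approx 0.38.$$

In other words, even with a 227% rise in absolute referrals, the comparison sites’ share of a traffic pool that grew 750% would have shrunk to roughly 38% of its original level.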
Google also wants its shopping service considered as part of a much
larger group that includes actual retailers, like Amazon, and
marketplaces, like eBay, where people also go to compare prices.
Perhaps, but these are very different businesses. The capital needed to
launch an online retailer, let alone one with a global presence and a
one-day delivery guarantee, is much larger than the funding required for
a price-comparison startup.
Google’s strongest point is that forcing a company to offer its
services to rivals is a drastic measure normally reserved for monopoly
utilities. Google is very useful, but it is not water or electricity.
Google Shopping is a very niche service.
The argument about fair price comparison is a sideshow given the much
bigger worries about privacy, security and tax that currently dog the
digital world. But a victory for regulators here could lead to more
assaults – on Google’s flight-comparison service and maps. Vestager has
also launched an investigation into Android, the group’s market-leading
phone software. Obama may be called on to cheer the home team again
before this fight is over.
Two sides to the national living wage?
George Osborne has already claimed credit for Sainsbury’s increasing the base pay of its shop floor staff by 4%. “Britain
deserves a pay rise so great Sainsbury’s staff will be paid at least
national living wage early with biggest increases in 10 years,” he tweeted last week.
It is indeed a credit to the chancellor that he has put pay on the
agenda for Britain’s supermarkets, the country’s largest private
employers, by announcing a national living wage in the budget last
month. However, Sainsbury’s announcement is just one side of the coin:
the other side is how it will finance the pay increase.
The Office for Budget Responsibility has estimated that the
chancellor’s national living wage will lead to 60,000 job losses as
companies fund the higher minimum pay by cutting jobs. Moody’s, the
credit rating agency, has warned retailers could close stores, increase
prices and employ more under-25s – who do not qualify for the national
living wage. Perks for staff, such as in-store discounts, could also be
at risk.
Sainsbury’s did not downgrade its profit forecasts at the same time
as announcing the pay increase, so it must have found the money from
somewhere – time will tell where.
Stocks Are Sending a Recession Warning
By Anthony Mirhaydari
The bad omens are building in the stock market.
Set aside the situation in China, where data released Tuesday showed
manufacturing activity dropped last month to a three-year low and
reached contractionary territory — the given reason for Tuesday’s market tumble.
Forget for a moment about the Federal Reserve, which seems committed to
raising interest rates this month for the first time since 2006. The
stock market itself is warning of big trouble.
The market collectively represents the opinions of countless individual participants, so it pays to pay attention to what is arguably the greatest future-discounting mechanism in human history.
The technical damage to stock prices has been severe. The S&P 500 has suffered its first "Death Cross" in four years: a drop of the 50-day moving average below the 200-day moving average, a sign of lost medium-term momentum. The long-term trend is at risk too, as the index closed Monday’s session below its 12-month moving average, a strong predictor of bear markets.
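For readers who don’t track these indicators, both signals are purely mechanical calculations on daily closing prices. A minimal sketch of how a Death Cross would be detected is below; the price series is synthetic and purely illustrative, not actual S&P 500 data.

```python
# Detecting a "Death Cross": the 50-day simple moving average of closing
# prices crossing below the 200-day average. Synthetic prices are used
# here so the example runs on its own; in practice you would feed in a
# real daily-close series.

def sma(values, window):
    """Trailing simple moving average; None until enough data exists."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - window:i + 1]) / window)
    return out

# A long, steady rally followed by a sharp slide.
closes = [1800 + 0.9 * d for d in range(300)] + \
         [2070 - 3.0 * d for d in range(80)]

sma50, sma200 = sma(closes, 50), sma(closes, 200)

for day in range(1, len(closes)):
    if sma200[day - 1] is None:
        continue
    crossed_down = sma50[day - 1] >= sma200[day - 1] and sma50[day] < sma200[day]
    if crossed_down:
        print(f"Death Cross on day {day}: "
              f"50-day {sma50[day]:.1f} < 200-day {sma200[day]:.1f}")
```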
Unless stocks mount a historic charge higher here, ending September 6 percent higher, it could be game over for the bull market. History isn't on the bulls' side.
August ended with more than a 5
percent loss on the S&P 500, the worst performance for the month in
17 years and down 7.5 percent from its July high. According to Jason
Goepfert at SentimenTrader, after August losses of this magnitude since
1928, September sported a positive return only 4 out of 13 times,
posting an average loss of 5.4 percent. When they rallied, stocks only
rose above August's close by an average of 1.4 percent. When they fell,
the drop averaged 8.3 percent.
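As a quick arithmetic check, the conditional figures Goepfert cites are consistent with the overall average he reports:

$$\frac{4 \times (+1.4\%) + 9 \times (-8.3\%)}{13} = \frac{5.6\% - 74.7\%}{13} \approx -5.3\%,$$

which matches the quoted average September loss of about 5.4 percent once rounding is allowed for.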
In his words, the data reveals a "terrible risk/reward ratio" in stocks right now.
The kicker is that the decline
we've already seen in stocks is setting off alarm bells in the
macroeconomic models created by Wall Street trading desks. Bank of
America Merrill Lynch economist Michael Hanson's model puts the
probability of recession at nearly 50 percent based on the 15 percent
annualized drop in stocks over the last six months.
Admittedly, the model flashed a
59 percent chance of recession during the 2011 market slide. But should
Goepfert's analysis come to fruition, the odds are likely to rise.
Moreover, the economy was saved
in 2011 by aggressive monetary policy efforts, including the start of
the Fed's "Operation Twist" maturity extension program, the expansion of
dollar liquidity swap lines to Europe that November and the European
Central Bank's three-year bank liquidity stimulus.
A repeat performance would be hard to pull off when, at this point,
market turmoil only appears set to forestall the start of policy
tightening. Citigroup rate strategist Jabaz Mathai admits a further 10
percent drop in stocks would mean the "Fed will most likely not hike, no
matter what the payrolls data is" for August when those numbers are
released on Sept. 4.
Goldman Sachs looks at it from a different angle, noting the recent rise in the CBOE Volatility Index (VIX),
known as Wall Street's "fear gauge." The current level is equal to the
median the VIX has been trading at over the last three recessions. VIX
levels this high for extended periods, in their analysis, "are rare
outside recessions."
Deutsche Bank's chief strategist
Binky Chadha recently wrote to clients that equity market corrections
of 10 percent or more are "rare outside recessions," with only 13
occurrences in the last 65 years in the context of a falling
unemployment rate.
On the flip side, should the
warnings prove false and the bulls manage to push stocks higher in the
weeks to come, the bad omens turn very positive indeed: Recoveries from
non-recession corrections average 8 percent one month later, 10 percent
three months later and 19 percent six months later.
The "energy crisis" hit like a locomotive in the 1970s. Today's
"energy revolution" didn't happen suddenly. It grew out millions of
innovations, processes, and decisions.
A crisis usually arrives amid blaring headlines and a wildfire of
worry. That was how the “energy crisis” of the late 20th century
seemed. The era of plentiful oil was over. Middle East producers enacted
embargoes. Gas lines stretched for blocks. In 1977, then-President
Jimmy Carter called the battle against energy shortages “the moral
equivalent of war.”
A genuine revolution often arrives quietly,
barely noticed because it unfolds gradually and cumulatively. That’s
today’s energy revolution.
Oil prices are tumbling. New
extraction procedures have made oil and natural gas abundant. But that
hasn’t slowed solar, wind, geothermal, and other alternative power
sources. Conservation hasn’t slowed either. LED lights and
less-voracious appliances are curbing consumption and forcing the
mothballing of carbon-spewing power plants.
And that is only the beginning. The next wave is batteries. As you’ll see in David Unger’s cover story,
better batteries will make solar and wind power effective when the sun
doesn’t shine and the winds don’t blow. As major undertakings such as
Elon Musk’s Tesla “gigafactory” improve lithium-ion batteries and
manufacture them at industrial scale, prices will decrease and use will
surge.
When houses, offices, and industrial plants
can produce and store energy sufficient for their needs, then power
plants, utility companies, and the electric grid – that 450,000-mile
network of high-voltage transmission lines strung across the US that is
perhaps the most complex and vulnerable installation on the planet –
become less important. There will still be a need for always-available,
industrial-scale electricity. But power consumption is already
diminishing year by year. Ahead lies a shakeout of the 7,300 power
plants in the US, especially the dirtier and less efficient ones.
The
energy crisis that gripped the world in the late 20th century was not
fought and won. It was worked on year by year by inventors and
improvers. Ideas were tried, tested, reconfigured, and enhanced. A Bell
Labs physicist came up with the first solar cell in 1941; an engineer at
General Electric invented the light-emitting diode in 1962; three
scientists from Oxford University conceived of the lithium-ion battery
in 1980. Today’s rooftop solar arrays, low-energy lightbulbs, and the
power packs that run our cellphones – and soon our houses and offices –
are the product of thousands of improvements layered atop those early
concepts.
Invention is important. Improvement is crucial. The
energy crisis of the late 20th century was a big problem. In the middle
of it, it did feel like the moral equivalent of war. But piece by piece,
the problem was worked out. Shortage turned into abundance.
There
are other crises today. Global climate change is perhaps the biggest of
them. The solution will involve millions of ideas, products, and
techniques that will improve year after year. A crisis doesn’t go away
by ignoring it. It is solved by working on it.
Even if you don’t keep up with developments in space propulsion technology, you’ve still probably heard about the EmDrive by now. You’ve probably seen headlines declaring it as the key to interstellar travel, and claims that it will drastically reduce trips across our solar system, making our dreams of people walking on other planets even more of a reality. There have even been claims that this highly controversial technology is the key to creating warp drives.
These are bold claims, and as the great cosmologist and astrophysicist Carl Sagan once said, “extraordinary claims require extraordinary evidence.” With that in mind, we thought it’d be helpful to break down what we know about the enigmatic EmDrive, and whether or not it is, in fact, the key to mankind exploring the stars. So without further ado, here’s absolutely everything you need to know about the world’s most puzzling propulsion device.
What is the EmDrive?
See, the EmDrive is a conundrum. First designed in 2001 by aerospace engineer Roger Shawyer, the technology can be summed up as a propellantless propulsion system, meaning that the engine doesn’t use fuel to cause a reaction. By removing the need for fuel, a craft would be substantially lighter, and therefore easier to move (and cheaper to make, theoretically). In addition, the hypothetical drive would also be able to reach extremely high speeds — we’re talking potentially getting humans to the outer reaches of the solar system in a matter of months.
The issue is that the entire concept of a reactionless drive is inconsistent with the conservation of momentum, which states that within a closed system, linear and angular momentum remain constant regardless of any changes that take place within that system. More plainly: unless an outside force is applied, an object’s momentum will not change. Reactionless drives are so named because they lack the “reaction” defined in Newton’s third law: “For every action there is an equal and opposite reaction.” This goes against our current fundamental understanding of physics, because an action (propulsion of a craft) without a reaction (ignition of fuel and expulsion of mass) should be impossible. For such a thing to occur, it would mean an as-yet-undefined phenomenon is taking place, or our understanding of physics is completely wrong.
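For concreteness, the objection can be stated in one line. In an isolated system the total momentum is fixed, so a craft can only change its own momentum by expelling something that carries momentum the other way:

$$\vec{p}_{\text{craft}} + \vec{p}_{\text{exhaust}} = \text{constant} \;\;\Rightarrow\;\; \Delta\vec{p}_{\text{craft}} = -\,\Delta\vec{p}_{\text{exhaust}}.$$

A sealed cavity that expels nothing has $\Delta\vec{p}_{\text{exhaust}} = 0$, which is why any net thrust from it would conflict with this law as currently understood.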
How does the EmDrive “work?”
Setting aside the potentially physics-breaking improbabilities of the technology, let’s break down in simple terms how the proposed drive is supposed to operate. The EmDrive is what is called an RF resonant cavity thruster, one of several hypothetical machines that use this model. These designs are said to work by having a magnetron push microwaves into a closed truncated cone; the microwaves then push against the short end of the cone and propel the craft forward. This is in contrast to the form of propulsion current spacecraft use, which instead burn large quantities of fuel and expel massive amounts of mass and energy to rocket the craft into the air. An often-used metaphor for the implausibility of this is to compare the particles pushing against the enclosure and producing thrust to the act of sitting in a car and pushing the steering wheel to move the car forward.
From the sections above, it becomes easy to see why many in the scientific community are wary of the EmDrive and RF resonant cavity thrusters altogether. But on the other hand, it raises a few questions: Why is there such an interest in the technology, and why do so many people wish to test it? What exactly are the claims being made about the drive that make it such an attractive idea? While everything from atmospheric temperature-controlling satellites to safer and more efficient automobiles has been drummed up as a potential application for the drive, the real draw of the technology — and the impetus for its creation in the first place — is the implications for space travel.
Spacecraft equipped with a reactionless drive could potentially make it to the Moon in just a few hours; to Mars in two to three months; and to Pluto within two years. These are extremely bold claims, but if the EmDrive does turn out to be a legitimate technology, they may not be all that outlandish after all. And with no need to pack several tons’ worth of fuel, spacecraft become cheaper and easier to produce, and far lighter. For NASA and other such organizations, including the numerous private space corporations like SpaceX, a lightweight, affordable spacecraft that can travel to distant parts of space quickly is something of a unicorn. Still, in order for that to become a reality, the science has to add up.
Shawyer is adamant that there is no need for pseudoscience or quantum theories to explain how the EmDrive works. Instead, he believes that current models of Newtonian physics offer an explanation, and he has written papers on the subject, one of which is currently being peer reviewed. He expects the paper to be published sometime this year. While Shawyer has been criticized by other scientists in the past for incorrect and inconsistent science, if the paper does indeed get published, it may begin to legitimize the EmDrive and spur more testing and research.
His insistence that the drive behaves within the laws of physics hasn’t prevented him from making bold assertions regarding the EmDrive, however. Shawyer has gone on record saying that the drive produces warp bubbles that allow it to move, and claiming that this is how NASA’s test results were likely achieved. Assertions such as these have garnered much interest online, but they have no clear supporting data and will (at the very least) require extensive testing and debate before they are taken seriously by the scientific community, the majority of which remains skeptical of Shawyer’s claims.
Colin Johnston of the Armagh Planetarium wrote an extensive critique of the EmDrive and the inconclusive findings of numerous tests. Similarly, Corey S. Powell of Discover wrote his own indictment of both Shawyer’s EmDrive and Fetta’s Cannae Drive, as well as the recent fervor over NASA’s findings. Both point out the need for greater discretion when reporting on such developments. The professor and mathematical physicist John C. Baez has expressed his exhaustion at the conceptual technology’s persistence in debates and discussions, calling the entire notion of a reactionless drive “baloney,” and his impassioned dismissal echoes the sentiments of many others. Elsewhere, however, Shawyer’s EmDrive has been met with enthusiasm, including from the website NASASpaceFlight.com, where the information about the most recent Eagleworks tests was first posted, and from New Scientist, which published a favorable and optimistic article on the EmDrive. (New Scientist has since stated that, despite its enduring excitement over the idea, it should have shown more tact when writing about the controversial subject.)
Clearly, the EmDrive and RF resonant cavity thruster technology have a lot to prove. There’s no denying that the technology is an exciting thought, and that the number of “successful” tests is intriguing, but one must keep in mind the physics preventing the EmDrive from gaining any traction, and the rather curious lack of peer-reviewed studies on the subject. If the EmDrive is so groundbreaking (and works), surely people like Shawyer would be clamoring for peer-reviewed verification. A demonstrably working EmDrive could open up exciting possibilities for both space and terrestrial travel — not to mention call into question our entire understanding of physics. However, until that comes to pass, the EmDrive will remain nothing more than science fiction.
Welcome to quantum reality
It’s official: the universe is weird. Our everyday experience tells
us that distant objects cannot influence each other, and don’t disappear
just because no one is looking at them. Even Albert Einstein was dead
against such ideas because they clashed so badly with our views of the
real world.
But it turns out we’re wrong – the quantum nature of reality means,
on some level, these things can and do actually happen. A groundbreaking
experiment puts the final nail in the coffin of our ordinary “local
realism” view of the universe, settling an argument that has raged
through physics for nearly a century.
Teams of physicists around the world have been racing to complete this experiment for decades. Now, a group led by Ronald Hanson
at Delft University of Technology in the Netherlands has finally
cracked it. “It’s a very nice and beautiful experiment, and one can only
congratulate the group for that,” says Anton Zeilinger, head of one of the rival teams at the University of Vienna, Austria. “Very well done.”
To understand what Hanson and his colleagues did, we have to go back
to the 1930s, when physicists were struggling to come to terms with the
strange predictions of the emerging science of quantum mechanics. The
theory suggested that particles could become entangled, so that
measuring one would instantly influence the measurement of the other,
even if they were separated by a great distance. Einstein dubbed this
“spooky action at a distance”, unhappy with the implication that
particles could apparently communicate faster than any signal could pass
between them.
What’s more, the theory also suggested that the properties of a
particle are only fixed when measured, and prior to that they exist in a
fuzzy cloud of probabilities.
Nonsense, said Einstein, who famously proclaimed that God does not
play dice with the universe. He and others were guided by the principle
of local realism, which broadly says that only nearby objects can
influence each other and that the universe is “real” – our observing it
doesn’t bring it into existence by crystallising vague probabilities.
They argued that quantum mechanics was incomplete, and that “hidden
variables” operating at some deeper layer of reality could explain the
theory’s apparent weirdness.
It wasn’t until the 1960s that the debate shifted further towards the quantum camp championed by Niels Bohr, thanks to John Bell, a physicist at CERN. He realised that there
was a limit to how connected the properties of two particles could be if
local realism was to be believed. So he formulated this insight into a
mathematical expression called an inequality. If tests showed that the
connection between particles exceeded the limit he set, local realism was toast.
“This is the magic of Bell’s inequality,” says Zeilinger’s colleague Johannes Kofler.
“It brought an almost purely philosophical thing, where no one knew how
to decide between two positions, down to a thing you could
experimentally test.”
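Bell’s insight is usually stated today in its CHSH form, given here as the standard textbook expression rather than the exact one used in this experiment. For detector settings $a, a'$ on one side and $b, b'$ on the other, with $E$ the measured correlation between outcomes:

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \text{ under local realism},$$

whereas quantum mechanics allows values up to $2\sqrt{2} \approx 2.83$. Measuring $|S| > 2$ is what “violating Bell’s inequality” means.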
And test they did. Experiments have been violating Bell’s inequality
for decades, and the majority of physicists now believe Einstein’s views
on local realism were wrong. But doubts remained. All prior experiments
were subject to a number of potential loopholes, leaving a gap that
could allow Einstein’s camp to come surging back.
“The notion of local realism is so ingrained into our daily thinking,
even as physicists, that it is very important to definitely close all
the loopholes,” says Zeilinger.
Loophole trade-off
A typical Bell test begins with a source of photons, which spits out
two at the same time and sends them in different directions to two
waiting detectors, operated by a hypothetical pair conventionally known
as Alice and Bob. The pair have independently chosen the settings on
their detectors so that only photons with certain properties can get
through. If the photons are entangled according to quantum mechanics,
they can influence each other and repeated tests will show a stronger
pattern between Alice and Bob’s measurements than local realism would
allow.
But what if Alice and Bob are passing unseen signals – perhaps
through Einstein’s deeper hidden layer of reality – that allow one
detector to communicate with the other? Then you couldn’t be sure that
the particles are truly influencing each other in their instant, spooky
quantum-mechanical way – instead, the detectors could be in cahoots,
altering their measurements. This is known as the locality loophole, and
it can be closed by moving the detectors far enough apart that there
isn’t enough time for a signal to cross over before the measurement is
complete. Previously Zeilinger and others have done just that, including
shooting photons between two Canary Islands 144 kilometres apart.
Close one loophole, though, and another opens. The Bell test relies
on building up a statistical picture through repeated experiments, so it
doesn’t work if your equipment doesn’t pick up enough photons. Other
experiments closed this detection loophole, but the problem gets worse
the further you separate the detectors, as photons can get lost on the
way. So moving the detectors apart to close the locality loophole begins
to widen the detection one.
“There’s a trade-off between these two things,” says Kofler. That
meant hard-core local realists always had a loophole to explain away
previous experiments – until now.
“Our experiment realizes the first Bell test that simultaneously
addressed both the detection loophole and the locality loophole,” writes
Hanson’s team in a paper detailing the study. Hanson declined to be interviewed because the work is currently under review for publication in a journal.
Entangled diamonds
In this set-up, Alice and Bob sit in two laboratories 1.3 kilometres
apart. Light takes 4.27 microseconds to travel this distance and their
measurement takes only 3.7 microseconds, so this is far enough to close
the locality loophole.
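The arithmetic behind that statement, using the figures above:

$$t_{\text{light}} = \frac{1.3\ \text{km}}{c} \approx \frac{1300\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 4.3\ \mu\text{s} > 3.7\ \mu\text{s},$$

so no signal travelling at or below the speed of light could pass between the labs before both measurements are finished.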
Each laboratory has a diamond that contains an electron with a
property called spin. The team hits the diamonds with randomly produced
microwave pulses. This makes them each emit a photon, which is entangled
with the electron’s spin. These photons are then sent to a third
location, C, in between Alice and Bob, where another detector clocks
their arrival time.
If photons arrive from Alice and Bob at exactly the same time, they
transfer their entanglement to the spins in each diamond. So the
electrons are entangled across the distance of the two labs – just what
we need for a Bell test. What’s more, the electrons’ spin is constantly
monitored, and the detectors are of high enough quality to close the
detector loophole.
But the downside is that the two photons arriving at C rarely
coincide – just a few per hour. The team took 245 measurements, so it
was a long wait. “This is really a very tough experiment,” says Kofler.
The result was clear: the labs detected more highly correlated spins
than local realism would allow. The weird world of quantum mechanics is
our world.
“If they’ve succeeded, then without any doubt they’ve done a remarkable experiment,” says Sandu Popescu
of the University of Bristol, UK. But he points out that most people
expected this result – “I can’t say everybody was holding their breath
to see what happens.” What’s important is that these kinds of
experiments drive the development of new quantum technology, he says.
One of the most important quantum technologies in use today is quantum cryptography. Data networks that use the weird properties of the quantum world to guarantee absolute secrecy are already springing up across the globe,
but the loopholes are potential bugs in the laws of physics that might
have allowed hackers through. “Bell tests are a security guarantee,”
says Kofler. You could say Hanson’s team just patched the universe.
Freedom of choice
There are still a few ways to quibble with the result. The experiment
was so tough that the p-value – a measure of statistical significance –
was relatively high for work in physics. Other sciences like biology normally accept a p-value below 5 per cent as a significant result, but physicists tend to insist on values millions of times smaller, which make it far less likely that a result is a statistical fluke. Hanson’s group reports a p-value of around 4 per cent: below the 5 per cent bar used elsewhere, but well short of the usual standard in physics.
That isn’t too concerning, says Zeilinger. “I expect they have
improved the experiment, and by the time it is published they’ll have
better data,” he says. “There is no doubt it will withstand scrutiny.”
And there is one remaining loophole for local realists to cling to,
but no experiment can ever rule it out. What if there is some kind of
link between the random microwave generators and the detectors? Then
Alice and Bob might think they’re free to choose the settings on their
equipment, but hidden variables could interfere with their choice and
thwart the Bell test.
Hanson’s team note this is a possibility, but assume it isn’t the
case. Zeilinger’s experiment attempts to deal with this freedom of
choice loophole by separating their random number generators and
detectors, while others have proposed using photons from distant quasars
to produce random numbers, resulting in billions of years of
separation.
None of this helps in the long run. Suppose the universe is somehow
entirely predetermined, the flutter of every photon carved in stone
since time immemorial. In that case, no one would ever have a choice
about anything. “The freedom of choice loophole will never be closed
fully,” says Kofler. As such, it’s not really worth experimentalists
worrying about – if the universe is predetermined, the complete lack of
free will means we’ve got bigger fish to fry.
So what would Einstein have made of this new result? Unfortunately he
died before Bell proposed his inequality, so we don’t know if
subsequent developments would have changed his mind, but he’d likely be fascinated by the lengths to which people have gone to prove him wrong. “I would
give a lot to know what his reaction would be,” says Zeilinger. “I
think he would be very impressed.”
Researchers from the U.S. Department of Energy’s (DOE) SLAC National Accelerator Laboratory and the
University of California, Los Angeles have demonstrated a new,
efficient way to accelerate positrons, the antimatter opposites of
electrons. The method may help boost the energy and shrink the size of
future linear particle colliders – powerful accelerators that could be
used to unravel the properties of nature’s fundamental building blocks.
The scientists had previously shown that boosting the energy of charged
particles by having them “surf” a wave of ionized gas, or plasma, works
well for electrons. While this method by itself could lead to smaller
accelerators, electrons are only half the equation for future colliders.
Now the researchers have hit another milestone by applying the
technique to positrons at SLAC’s Facility for Advanced Accelerator
Experimental Tests (FACET), a DOE Office of Science User Facility.
“Together with our previous achievement, the new study is a very
important step toward making smaller, less expensive next-generation
electron-positron colliders,” said SLAC’s Mark Hogan, co-author of the
study published today in Nature. “FACET is the only place in the world
where we can accelerate positrons and electrons with this method.”
Image: Simulation of high-energy positron acceleration in an ionized gas, or plasma – a new method that could help power next-generation particle colliders. The image shows the formation of a high-density plasma (green/orange) around a positron beam moving from the bottom right to the top left; plasma electrons pass by the positron beam on wave-like trajectories (lines). (W. An/UCLA)
Image: Future particle colliders will require highly efficient acceleration methods for both electrons and positrons. Plasma wakefield acceleration of both particle types, as shown in this simulation, could lead to smaller and more powerful colliders than today’s machines. (F. Tsung/W. An/UCLA/SLAC National Accelerator Laboratory)
Researchers study matter’s fundamental components and the
forces between them by smashing highly energetic particle beams into one
another. Collisions between electrons and positrons are especially
appealing, because unlike the protons being collided at CERN’s Large
Hadron Collider – where the Higgs boson was discovered in 2012 – these
particles aren’t made of smaller constituent parts.
“These collisions are simpler and easier to study,” said SLAC’s Michael
Peskin, a theoretical physicist not involved in the study. “Also, new,
exotic particles would be produced at roughly the same rate as known
particles; at the LHC they are a billion times more rare.”
However, current technology to build electron-positron colliders for
next-generation experiments would require accelerators that are tens of
kilometers long. Plasma wakefield acceleration is one way researchers
hope to build shorter, more economical accelerators.
Previous work showed that the method works efficiently for electrons:
When one of FACET’s tightly focused bundles of electrons enters an
ionized gas, it creates a plasma “wake” that researchers use to
accelerate a trailing second electron bunch.
Abstract
Electrical breakdown sets a limit on the kinetic energy that
particles in a conventional radio-frequency accelerator can reach. New
accelerator concepts must be developed to achieve higher energies and to
make future particle colliders more compact and affordable. The plasma
wakefield accelerator (PWFA) embodies one such concept, in which the
electric field of a plasma wake excited by a bunch of charged particles
(such as electrons) is used to accelerate a trailing bunch of particles.
To apply plasma acceleration to electron–positron colliders, it is
imperative that both the electrons and their antimatter counterpart, the
positrons, are efficiently accelerated at high fields using plasmas.
Although substantial progress has recently been reported on high-field,
high-efficiency acceleration of electrons in a PWFA powered by an
electron bunch, such an electron-driven wake is unsuitable for the
acceleration and focusing of a positron bunch. Here we demonstrate a new
regime of PWFAs where particles in the front of a single positron bunch
transfer their energy to a substantial number of those in the rear of
the same bunch by exciting a wakefield in the plasma. In the process,
the accelerating field is altered—‘self-loaded’—so that about a billion
positrons gain five gigaelectronvolts of energy with a narrow energy
spread over a distance of just 1.3 meters. They extract about 30 per
cent of the wake’s energy and form a spectrally distinct bunch with a
root-mean-square energy spread as low as 1.8 per cent. This ability to
transfer energy efficiently from the front to the rear within a single
positron bunch makes the PWFA scheme very attractive as an energy
booster to an electron–positron collider.
Plasma acceleration
Plasma acceleration is a technique for accelerating charged particles, such as electrons, positrons and ions, using an electric field associated with an electron plasma wave or other high-gradient plasma structures (like shock and sheath fields). The plasma acceleration structures are created either with ultra-short laser pulses or with energetic particle beams that are matched to the plasma parameters. These techniques offer a way to build high-performance particle accelerators of much smaller size than conventional devices. The basic concepts of plasma acceleration and its possibilities were originally conceived by Toshiki Tajima and Prof. John M. Dawson of UCLA in 1979.[1] The initial experimental designs for "wakefield" acceleration were conceived at UCLA by the group of Prof. Chan Joshi.[2] Current experimental devices show accelerating gradients several orders of magnitude higher than those of conventional particle accelerators.
Plasma accelerators hold immense promise for affordable and compact accelerators for applications ranging from high-energy physics to medicine and industry. Medical applications include betatron and free-electron light sources for diagnostics or radiation therapy, and proton sources for hadron therapy.
Plasma accelerators generally use wakefields generated by plasma
density waves. However, plasma accelerators can operate in many
different regimes depending upon the characteristics of the plasmas
used.
For example, an experimental laser plasma accelerator at Lawrence Berkeley National Laboratory accelerates electrons to 1 GeV over about 3.3 cm (5.4×10^20 g_n),[3] whereas the conventional SLAC accelerator (the highest-energy electron accelerator) requires 64 m to reach the same energy. Similarly, an energy gain of more than 40 GeV was achieved using the SLAC SLC beam (42 GeV) in just 85 cm of plasma wakefield accelerator (8.9×10^20 g_n).[4]
Once fully developed, the technology could replace many of the
traditional RF accelerators currently found in particle colliders,
hospitals and research facilities.
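Expressed as average accelerating gradients, a rough comparison using the figures above gives:

$$\frac{1\ \text{GeV}}{3.3\ \text{cm}} \approx 30\ \text{GV/m} \qquad \text{versus} \qquad \frac{1\ \text{GeV}}{64\ \text{m}} \approx 16\ \text{MV/m},$$

so the laser-plasma stage sustains a gradient roughly 2,000 times higher than the conventional SLAC linac over that energy range.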
The Texas Petawatt laser facility at the University of Texas at Austin accelerated electrons to 2 GeV over about 2 cm (1.6×10^21 g_n).[5] This record was broken (by more than a factor of two) in 2014 by scientists at the BELLA laser center at the Lawrence Berkeley National Laboratory, when they produced electron beams of up to 4.25 GeV.[6]
In late 2014, researchers from SLAC National Accelerator Laboratory
using the Facility for Advanced Accelerator Experimental Tests (FACET)
published proof of the viability of plasma acceleration technology. It
was shown to be able to achieve 400 to 500 times higher energy transfer
compared to a general linear accelerator design.[7][8]
A plasma consists of a fluid of positively and negatively charged particles, generally created by heating or photo-ionizing (direct, tunneling, multi-photon or barrier-suppression ionization) a dilute gas. Under normal conditions the plasma is macroscopically neutral (or quasi-neutral), an equal mix of electrons and ions in equilibrium. However, if a strong enough external electric or electromagnetic field is applied, the plasma electrons, which are very light compared with the background ions (by a factor of at least 1836), separate spatially from the massive ions, creating a charge imbalance in the perturbed region. A particle injected into such a plasma would be accelerated by the charge-separation field, but since the magnitude of this separation is generally similar to that of the external field, apparently nothing is gained compared with a conventional system that simply applies the field directly to the particle. But the plasma medium acts as the most efficient transformer currently known of the transverse field of an electromagnetic wave into the longitudinal fields of a plasma wave. In existing accelerator technology, appropriately designed materials are used to convert transversely propagating, extremely intense fields into longitudinal fields from which the particles can get a kick. This is achieved using two approaches: standing-wave structures (such as resonant cavities) or traveling-wave structures (such as disc-loaded waveguides). The limitation of materials interacting with ever higher fields, however, is that they are eventually destroyed through ionization and breakdown. Plasma accelerator science provides a way to generate, sustain and exploit the highest fields ever produced in the laboratory.
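The natural time-scale of that electron response is set by the plasma frequency, quoted here as the standard textbook expression for orientation rather than taken from this article:

$$\omega_p = \sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}},$$

where $n_e$ is the electron density, $e$ the elementary charge, $m_e$ the electron mass and $\varepsilon_0$ the permittivity of free space. Denser plasmas respond faster and support shorter, steeper waves.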
What makes the system useful is the possibility of introducing waves of very high charge separation that propagate through the plasma, much like the traveling wave in a conventional accelerator. The accelerator thereby phase-locks a particle bunch onto a wave, and this loaded space-charge wave accelerates the bunch to higher velocities while preserving its properties. Currently, plasma wakes are excited by appropriately shaped laser pulses or electron bunches. Plasma electrons are driven out and away from the center of the wake by the ponderomotive force or by the electrostatic fields of the exciting beam (electron or laser). Plasma ions are too massive to move significantly and are assumed to be stationary on the time-scale of the plasma electrons' response to the exciting fields. As the exciting fields pass through the plasma, the plasma electrons experience a strong attractive force back towards the center of the wake from the positive plasma ions, which remain in place there (as a chamber, bubble or column), just as they were in the unexcited plasma. This forms a full wake with extremely high longitudinal (accelerating) and transverse (focusing) electric fields. The positive charge of the ions in the charge-separation region then creates a huge gradient between the back of the wake, where there are many electrons, and the middle of the wake, where there are mostly ions. Any electrons in between these two regions will be accelerated (this is the self-injection mechanism). In external bunch-injection schemes, electrons are strategically injected to arrive at the evacuated region during the maximum excursion or expulsion of the plasma electrons.
A beam-driven wake can be created by sending a relativistic proton or
electron bunch into an appropriate plasma or gas. In some cases, the
gas can be ionized by the electron bunch, so that the electron bunch
both creates the plasma and the wake. This requires an electron bunch
with relatively high charge and thus strong fields. The high fields of
the electron bunch then push the plasma electrons out from the center,
creating the wake.
Similar to a beam-driven wake, a laser pulse can be used to excite the plasma wake. As the pulse travels through the plasma, the electric field of the light separates the electrons from the ions in the same way that an external field would.
If the fields are strong enough, all of the ionized plasma electrons
can be removed from the center of the wake: this is known as the
"blowout regime". Although the particles are not moving very quickly
during this period, macroscopically it appears that a "bubble" of charge
is moving through the plasma at close to the speed of light. The bubble is the region cleared of electrons, which is thus positively charged, followed by the region where the electrons fall back into the center, which is thus negatively charged. This leads to a small region of very strong potential gradient following the laser pulse.
In the linear regime, plasma electrons aren't completely removed from
the center of the wake. In this case, the linear plasma wave equation
can be applied. However, the wake appears very similar to the blowout
regime, and the physics of acceleration is the same.
Image: Wake created by an electron beam in a plasma
It is this "wakefield" that is used for particle acceleration. A
particle injected into the plasma near the high-density area will
experience an acceleration toward (or away) from it, an acceleration
that continues as the wakefield travels through the column, until the
particle eventually reaches the speed of the wakefield. Even higher
energies can be reached by injecting the particle to travel across the
face of the wakefield, much like a surfer
can travel at speeds much higher than the wave they surf on by
traveling across it. Accelerators designed to take advantage of this
technique have been referred to colloquially as "surfatron"s.
Comparison with RF acceleration
The advantage of plasma acceleration is that its acceleration field
can be much stronger than that of conventional radio-frequency (RF) accelerators. In RF accelerators, the field has an upper limit determined by the threshold for dielectric breakdown
of the acceleration tube. This limits the amount of acceleration over
any given area, requiring very long accelerators to reach high energies.
In contrast, the maximum field in a plasma is limited by mechanical qualities and turbulence, but is generally several orders of magnitude stronger than in RF accelerators. It is hoped that compact particle accelerators can be created based on plasma acceleration techniques, or that accelerators for much higher energies can be built, if long plasma accelerators sustaining an accelerating field of 10 GV/m prove realizable.
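To illustrate what a sustained 10 GV/m gradient would mean (illustrative numbers, not from this article): reaching 1 TeV would take

$$L = \frac{1\ \text{TeV}}{10\ \text{GV/m}} = 100\ \text{m},$$

compared with tens of kilometres at the few-tens-of-MV/m gradients typical of conventional RF machines.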
Plasma acceleration is categorized into several types according to how the electron plasma wave is formed:
plasma wakefield acceleration (PWFA): The electron plasma wave is formed by an electron bunch.
laser wakefield acceleration (LWFA): A laser pulse is introduced to form an electron plasma wave.
laser beat-wave acceleration (LBWA): The electron plasma wave arises from the beating of two laser pulses of different frequencies. The "Surfatron" is an improvement on this technique.[9]
self-modulated laser wakefield acceleration (SMLWFA): The electron plasma wave is formed by a laser pulse modulated by the stimulated Raman forward scattering instability.
The first experimental demonstration of wakefield acceleration, which
was performed with PWFA, was reported by a research group at Argonne National Laboratory in 1988.[10]
Formula
The acceleration gradient for a linear plasma wave is E = c·√(m_e·n_e/ε₀), where c is the speed of light in vacuum, m_e is the electron mass, n_e is the plasma electron density (in particles per cubic metre) and ε₀ is the permittivity of free space.
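A quick numerical check of that expression; the density value is simply a typical figure used in the plasma-acceleration literature, not one quoted in this article:

```python
import math

# Accelerating gradient of a linear plasma wave: E = c * sqrt(m_e * n_e / eps0)
C = 2.998e8        # speed of light, m/s
M_E = 9.109e-31    # electron mass, kg
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def linear_wave_gradient(n_e_per_m3):
    """Return the accelerating gradient in V/m for electron density n_e (m^-3)."""
    return C * math.sqrt(M_E * n_e_per_m3 / EPS0)

n_e = 1e18 * 1e6   # 10^18 electrons per cm^3, converted to m^-3
print(f"{linear_wave_gradient(n_e) / 1e9:.0f} GV/m")  # roughly 96 GV/m
```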
A Dielectric Wall Accelerator (DWA) is a compact linear particle accelerator concept designed and patented[1] in the late 1990s that works by inducing a travelling electromagnetic wave in a tube constructed mostly from dielectric material. The main conceptual differences from a conventional disc-loaded linac system are the dielectric wall itself and the coupler construction.
Possible uses of this concept include its application in external beam radiotherapy (EBRT) using protons or ions.
An external alternating-current power supply provides an electromagnetic wave that is transmitted to the accelerator tube through a waveguide. The power supply is switched on for only a very short time (pulsed operation).[2] Electromagnetic induction creates a traveling electric field, which accelerates charged particles. The traveling wave overlaps with the position of the charged particles, accelerating them as they pass through the tube's vacuum channel.[2]
The field inside the tube is negative just ahead of the proton and
positive just behind the proton. Because protons are positively charged,
they accelerate toward the negative and away from the positive. The
power supply switches the polarity of the sections, so they stay
synchronized with the passing proton.
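The synchronisation idea can be illustrated with a toy, non-relativistic model: the tube is split into short sections whose polarity flips whenever the proton crosses a boundary, so the local field always points forward. The section length, injection energy and time step below are arbitrary choices, and the 100 MV/m gradient is the figure quoted later in this article; this is a sketch of the principle, not of the real pulsed Blumlein drive.

```python
# Toy 1-D model of a dielectric wall accelerator's polarity switching.
# Non-relativistic; ignores the pulsed nature of the real device.

Q = 1.602e-19      # proton charge, C
M = 1.673e-27      # proton mass, kg
E0 = 100e6         # field magnitude, V/m (expected DWA gradient)
SECTION = 0.01     # m, length of one independently switched section
LENGTH = 0.30      # m, total tube length
DT = 1e-12         # s, integration time step

x, v = 0.0, 1.0e6  # injected at roughly 5 keV
stack_sign = +1    # polarity state of the whole stack
prev_section = 0

while x < LENGTH:
    section = int(x // SECTION)
    if section != prev_section:
        # The supply flips the stack polarity at each boundary crossing,
        # so the field in the section containing the proton stays forward.
        stack_sign *= -1
        prev_section = section
    local_sign = stack_sign * (-1) ** section   # sections alternate polarity
    v += local_sign * Q * E0 / M * DT
    x += v * DT

energy_mev = 0.5 * M * v * v / Q / 1e6
print(f"exit energy after {LENGTH} m: {energy_mev:.1f} MeV")  # ~ q*E0*L = 30 MeV
```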
Construction
The accelerator tube is made from sheets of fused silica, only 250 µm
thick. After polishing, the sheets are coated with 0.5 µm of chromium
and 2.5 µm of gold. About 80 layers of the sheets are stacked together,
and then heated in a brazing furnace, where they fuse together. The
stacked assembly is then machined into a hollow cylinder. Fused silica
is pure transparent quartz glass, a dielectric, which is why the machine
is called a "dielectric wall accelerator."
One of the assembled modules of the accelerator is shown in the patent sketch. The module is about 3 cm long, and the beam travels upward. The dielectric wall is shown as item number 81. It is surrounded by a pulse-forming device called a Blumlein.
In figure 8A, the power supply charges the Blumlein. In figure 8B,
silicon carbide switches surrounding the Blumlein close, shorting out
the edge of the Blumlein. The energy stored in the Blumlein rushes
toward the dielectric wall as a high voltage pulse.
Usage in Proton Therapy
Image: The dose produced by a native and by a modified proton beam in passing through tissue, compared to the absorption of a photon beam
Particle beams such as protons and heavier ions offer improved dose distributions compared with the system known as intensity-modulated radiation therapy (IMRT).[3] IMRT uses indirect ionization of water in the cell to produce free radicals, which react chemically with the cell, destroying single strands of the double-stranded DNA.
Protons are many times heavier than electrons, which makes them easier to control and allows them to target a tumor more precisely. Protons are charged particles and are accelerated to a predetermined energy level so that, thanks to the Bragg peak, the beam delivers its energy within 1–2 mm of the intended point inside the tumor and then stops. IMRT photons, by contrast, primarily ionize single-strand DNA nucleotides indirectly via the free-radical mechanism to disrupt cell function. Tumors are also notorious for having a poor blood supply, a condition known as hypoxia, which can require the use of drugs to oxygenate the tumor; and because a fast-growing tumor has a poor blood supply, any drugs directed at it have difficulty reaching it.
Advantages and Limitations
The DWA addresses the main issues with current proton therapy systems: cost and size. Depending on the desired final beam energy, conventional medical accelerator solutions (cyclotrons and small synchrotrons) can carry large costs and space requirements, which could be circumvented by DWAs. The cost estimate for a DWA is about 20 million US dollars.
DWAs are expected to reach acceleration gradients around 100 MV/m.[2]
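At that gradient, the lengths needed for clinical proton energies are modest. As an illustrative calculation (70–250 MeV is the range usually quoted for proton therapy, not a figure from this article):

$$L \approx \frac{250\ \text{MeV}}{100\ \text{MV/m}} = 2.5\ \text{m},$$

which is why a DWA is hoped to fit within a treatment room rather than needing a separate accelerator vault.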
The system is a spin-off of a DOE device used to inspect nuclear weapons. It requires several new advances because of the high energies involved, such as high-gradient insulators,[4] wide band-gap photoconductive switches (about 4,000 of which are needed) and symmetric Blumleins with a typical width of 1 mm.