Tuesday, December 31, 2013

Toba catastrophe theory

From Wikipedia, the free encyclopedia


Toba catastrophe theory
Illustration of what the eruption might have looked like from approximately 26 miles (42 km) above Pulau Simeulue.
Volcano Toba supervolcano
Date 69,000–77,000 years ago
Type Ultra Plinian
Location Sumatra, Indonesia
Coordinates 2.6845°N 98.8756°E
VEI 8.3
Impact Most recent supereruption; plunged Earth into 6 years of volcanic winter, possibly causing a bottleneck in human evolution and significant changes to regional topography.[1]

Lake Toba is the resulting crater lake.
The Toba supereruption (Youngest Toba Tuff or simply YTT[2]) was a supervolcanic eruption that occurred sometime between 69,000 and 77,000 years ago at the site of present-day Lake Toba (Sumatra, Indonesia). It is recognized as one of the Earth's largest known eruptions. The related catastrophe hypothesis holds that this event caused a global volcanic winter of 6–10 years and possibly a 1,000-year-long cooling episode.
The Toba event [3][4] is the most closely studied super-eruption.[5] In 1993, science journalist Ann Gibbons suggested a link between the eruption and a bottleneck in human evolution, and Michael R. Rampino of New York University and Stephen Self of the University of Hawaii at Manoa gave support to the idea. In 1998, the bottleneck theory was further developed by Stanley H. Ambrose of the University of Illinois at Urbana-Champaign.

Supereruption

The Toba eruption or Toba event occurred at the present location of Lake Toba about 73,000±4,000 years[6][7] ago. This eruption was the last of the three major eruptions of Toba in the last 1 million years.[8] It had an estimated Volcanic Explosivity Index of 8 (described as "mega-colossal"), or magnitude ≥ M8; it made a sizable contribution to the 100 × 30 km caldera complex.[9] Dense-rock equivalent (DRE) estimates of eruptive volume for the eruption vary between 2,000 km3 and 3,000 km3 – the most common DRE estimate is 2,800 km3 (about 7×10¹⁵ kg) of erupted magma, of which 800 km3 was deposited as ash fall.[10] Its erupted mass was 100 times greater than that of the largest volcanic eruption in recent history, the 1815 eruption of Mount Tambora in Indonesia, which caused the 1816 "Year Without a Summer" in the northern hemisphere.[11]
The Toba eruption took place in Indonesia and deposited an ash layer approximately 15 centimetres thick over the whole of South Asia. A blanket of volcanic ash was also deposited over the Indian Ocean, and the Arabian and South China Sea.[12] Deep-sea cores retrieved from the South China Sea have extended the known reach of the eruption, suggesting that the 2,800 km3 calculation of the erupted mass is a minimum value or an underestimate.[13]

Volcanic winter and cooling

The Toba eruption apparently coincided with the onset of the last glacial period. Michael R. Rampino and Stephen Self argue that the eruption caused a "brief, dramatic cooling or 'volcanic winter'", which resulted in a drop of the global mean surface temperature by 3–5 °C and accelerated the transition from warm to cold temperatures of the last glacial cycle.[14] Evidence from Greenland ice cores indicates a 1,000-year period of low δ18O and increased dust deposition immediately following the eruption. The eruption may have caused this 1,000-year period of cooler temperatures (stadial), two centuries of which could be accounted for by the persistence of the Toba stratospheric loading.[15] Rampino and Self believe that global cooling was already underway at the time of the eruption, but that the process was slow; YTT "may have provided the extra 'kick' that caused the climate system to switch from warm to cold states".[16] Although Clive Oppenheimer rejects the hypothesis that the eruption triggered the last glaciation,[17] he agrees that it may have been responsible for a millennium of cool climate prior to the Dansgaard-Oeschger event.[18]
According to Alan Robock,[19] who has also published nuclear winter papers, the Toba eruption did not precipitate the last glacial period. However, assuming an emission of six billion tons of sulphur dioxide, his computer simulations produced a maximum global cooling of approximately 15 °C for three years after the eruption, with cooling that would have persisted for decades and been devastating to life. As the saturated adiabatic lapse rate is 4.9 °C/1,000 m for temperatures above freezing,[20] the tree line and the snow line were around 3,000 m (9,900 ft) lower at this time. The climate recovered over a few decades, and Robock found no evidence that the 1,000-year cold period seen in Greenland ice core records had resulted from the Toba eruption. In contrast, Oppenheimer believes that estimates of a drop in surface temperature by 3–5 °C are probably too high, and he suggests that temperatures dropped only by 1 °C.[21] Robock has criticized Oppenheimer's analysis, arguing that it is based on simplistic temperature-forcing relationships.[22]
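As a back-of-the-envelope check (a sketch using only the figures quoted above, not taken from Robock's paper), the roughly 3,000 m drop in the tree line follows directly from dividing the simulated cooling by the lapse rate:

```python
# Dividing the simulated peak cooling by the saturated adiabatic lapse
# rate gives the altitude shift at which pre-eruption temperatures
# would again be found.

COOLING_C = 15.0                     # peak global cooling in Robock's simulations
LAPSE_RATE_C_PER_M = 4.9 / 1000.0    # saturated adiabatic lapse rate, degC per metre

altitude_drop_m = COOLING_C / LAPSE_RATE_C_PER_M
print(f"Implied tree-line/snow-line drop: {altitude_drop_m:.0f} m")  # about 3,000 m
```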
Despite these different estimates, scientists agree that a supereruption of the scale at Toba must have led to very extensive ash-fall layers and injection of noxious gases into the atmosphere, with worldwide effects on climate and weather.[23] In addition, the Greenland ice core data display an abrupt climate change around this time,[24] but there is no consensus that the eruption directly generated the 1,000-year cold period seen in Greenland or triggered the last glaciation.[25]
Archaeologists who in 2013 found a microscopic layer of glassy volcanic ash in sediments of Lake Malawi, and definitively linked the ash to the 75,000-year-old Toba super-eruption, found no trace near the ash layer of the change in fossil composition that would be expected following a severe volcanic winter. This result led them to conclude that the largest known volcanic eruption in the history of the human species did not significantly alter the climate of East Africa.[26][27]

Genetic bottleneck theory

The Toba eruption has been linked to a genetic bottleneck in human evolution about 50,000 years ago,[28][29] which may have resulted from a severe reduction in the size of the total human population due to the effects of the eruption on the global climate.[30]
According to the genetic bottleneck theory, between 50,000 and 100,000 years ago human populations sharply decreased to 3,000–10,000 surviving individuals.[31][32] The theory is supported by genetic evidence suggesting that today's humans are descended from a very small population of between 1,000 and 10,000 breeding pairs that existed about 70,000 years ago.[33]
Proponents of the genetic bottleneck theory suggest that the Toba eruption resulted in a global ecological disaster, including destruction of vegetation along with severe drought in the tropical rainforest belt and in monsoonal regions. For example, a 10-year volcanic winter triggered by the eruption could have largely destroyed the food sources of humans and caused a severe reduction in population sizes.[22] These environmental changes may have generated population bottlenecks in many species, including hominids;[34] this in turn may have accelerated differentiation from within the smaller human population. Therefore, the genetic differences among modern humans may reflect changes within the last 70,000 years, rather than gradual differentiation over millions of years.[35]
Other research has cast doubt on the genetic bottleneck theory. For example, ancient stone tools found in southern India above and below a thick layer of Toba ash were very similar across these layers, suggesting that the dust clouds from the eruption did not wipe out this local population.[36][37][38] Additional archaeological evidence from southern and northern India likewise shows no sign that the eruption affected local populations, leading the authors of the study to conclude that "many forms of life survived the supereruption, contrary to other research which has suggested significant animal extinctions and genetic bottlenecks".[39] However, pollen analysis has suggested prolonged deforestation in South Asia, and some researchers have proposed that the Toba eruption forced humans to adopt new adaptive strategies, which may have permitted them to replace Neanderthals and "other archaic human species".[40] This has been challenged by evidence for the presence of Neanderthals in Europe and Homo floresiensis in Southeast Asia, who survived the eruption by 50,000 and 60,000 years, respectively.[41]
Additional caveats to the Toba-induced bottleneck theory include difficulties in estimating the global and regional climatic impacts of the eruption and lack of conclusive evidence for the eruption preceding the bottleneck.[42] Furthermore, genetic analysis of Alu sequences across the entire human genome has shown that the effective human population size was less than 26,000 at 1.2 million years ago; possible explanations for the low population size of human ancestors may include repeated population bottlenecks or periodic replacement events from competing Homo subspecies.[43]

Genetic bottlenecks in humans

The Toba catastrophe theory suggests that a bottleneck of the human population occurred c. 70,000 years ago, reducing the total human population to c. 15,000 individuals[44] when Toba erupted and triggered a major environmental change, including a volcanic winter. The theory is based on geological evidence for sudden climate change at that time and for coalescence of some genes (including mitochondrial DNA, Y-chromosome and some nuclear genes)[45] as well as the relatively low level of genetic variation among present-day humans.[44] For example, according to one hypothesis, human mitochondrial DNA (which is maternally inherited) and Y chromosome DNA (paternally inherited) coalesce at around 140,000 and 60,000 years ago, respectively. This suggests that the female line ancestry of all present-day humans traces back to a single female (Mitochondrial Eve) at around 140,000 years ago, and the male line to a single male (Y-chromosomal Adam) at 60,000 to 90,000 years ago.[46]
However, such coalescence is genetically expected and does not necessarily indicate a population bottleneck because mitochondrial DNA and Y-chromosome DNA are only a small part of the human genome, and are atypical in that they are inherited exclusively through the mother or through the father, respectively. Most genes are inherited randomly from either the father or mother, thus cannot be traced to either matrilineal or patrilineal ancestry.[47] Other genes display coalescence points from 2 million to 60,000 years ago, thus casting doubt on the existence of recent and strong bottlenecks.[44][48]
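The point that coalescence to a single ancestor is expected even without any population crash can be illustrated with a toy Wright-Fisher simulation (a sketch for illustration only; the population size and random seed are arbitrary choices, not values from the cited studies):

```python
import random

# In a population of CONSTANT size, a single maternal lineage still
# eventually becomes the ancestor of everyone, purely by chance --
# so coalescence of mitochondrial DNA to one "Eve" does not by itself
# imply a bottleneck.

def generations_to_coalescence(pop_size, rng):
    """Simulate maternal inheritance until all lineages share one ancestor."""
    lineages = list(range(pop_size))   # initially distinct maternal lineages
    generations = 0
    while len(set(lineages)) > 1:
        # each individual in the next generation picks a mother at random
        lineages = [lineages[rng.randrange(pop_size)] for _ in range(pop_size)]
        generations += 1
    return generations

rng = random.Random(42)
n = 500
g = generations_to_coalescence(n, rng)
print(f"Constant population of {n}: lineages coalesced after {g} generations")
```

The expected coalescence time is on the order of twice the population size, with no shrinkage of the population at any point.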
Other possible explanations for limited genetic variation among today's humans include a transplanting model or "long bottleneck", rather than a catastrophic environmental change.[49] This would be consistent with suggestions that in sub-Saharan Africa human populations dropped to as low as 2,000 individuals for perhaps as long as 100,000 years, before numbers began to increase in the Late Stone Age.[50]
TMRCAs of loci, Y chromosome, and mitogenomes compared to their probability distributions, assuming that the human population expanded 75 kya from a population of 11,000 individuals
Single-locus studies are limited by the large randomness of the fixation process; studies that take this randomness into account have estimated the effective human population size at 11,000–12,000 individuals.[51][52]

Genetic bottlenecks in other mammals

Some evidence points to genetic bottlenecks in other animals in the wake of the Toba eruption: the populations of the Eastern African chimpanzee,[53] Bornean orangutan,[54] central Indian macaque,[55] cheetah and tiger[56] all recovered from very low numbers around 70,000–55,000 years ago, and the nuclear gene pools of eastern and western lowland gorillas separated around the same time.[57]

Migration after Toba

The exact geographic distribution of human populations at the time of the eruption is not known, and surviving populations may have lived in Africa and subsequently migrated to other parts of the world. Analyses of mitochondrial DNA have estimated that the major migration from Africa occurred 60,000–70,000 years ago,[58] consistent with dating of the Toba eruption to around 66,000–76,000 years ago.
However, recent archeological finds have suggested that a human population may have survived in Jwalapuram, Southern India.[59] Moreover, it has also been suggested that nearby hominid populations, such as Homo floresiensis on Flores, survived because they lived upwind of Toba.[60]

See also

Supervolcano

From Wikipedia, the free encyclopedia


A supervolcano is any volcano capable of producing a volcanic eruption with an ejecta volume greater than 1,000 km3 (240 cu mi), thousands of times larger than normal volcanic eruptions.[1] Supervolcanoes can form when magma from a hotspot rises into the crust but is unable to break through it; pressure then builds in a large and growing magma pool until the crust can no longer contain it (as at the Yellowstone Caldera). They can also form at convergent plate boundaries (for example, Toba).
Although there are only a handful of Quaternary supervolcanoes, supervolcanic eruptions typically cover huge areas with lava and volcanic ash and cause a long-lasting change to weather (such as the triggering of a small ice age) sufficient to threaten species with extinction.

Terminology

The origin of the term "supervolcano" is linked to an early 20th-century scientific debate about the geological history and features of the Three Sisters volcanic region of Oregon, U.S.A. In 1925, Edwin T. Hodge suggested that a very large volcano, which he named Mount Multnomah, had existed in that region. He believed that several peaks in the Three Sisters area were the remnants left after Mount Multnomah had been largely destroyed by violent volcanic explosions, similar to Mount Mazama.[2] In 1948, the possible existence of Mount Multnomah was ignored by volcanologist Howel Williams in his book The Ancient Volcanoes of Oregon. The book was reviewed in 1949 by another volcano scientist, F. M. Byers Jr.[3] In the review, Byers refers to Mount Multnomah as a supervolcano.[4] Although Hodge's suggestion that Mount Multnomah is a supervolcano was rejected long ago, the term "supervolcano" was popularised by the BBC popular science television program Horizon in 2000 to refer to eruptions that produce extremely large amounts of ejecta.[5][6]
Volcanologists and geologists do not refer to "supervolcanoes" in their scientific work, since this is a blanket term that can be applied to a number of different geological conditions. Since 2000, however, the term has been used by professionals when presenting to the public. The term megacaldera is sometimes used for caldera supervolcanoes, such as the Blake River Megacaldera Complex in the Abitibi greenstone belt of Ontario and Quebec, Canada. Eruptions that rate VEI 8 are termed "super eruptions".[citation needed]
Though there is no well-defined minimum explosive size for a "supervolcano", there are at least two types of volcanic eruption that have been identified as supervolcanoes: large igneous provinces and massive eruptions.[citation needed]

Large igneous provinces

Large igneous provinces (LIP) such as Iceland, the Siberian Traps, Deccan Traps, and the Ontong Java Plateau are extensive regions of basalts on a continental scale resulting from flood basalt eruptions. When created, these regions often occupy several thousand square kilometres and have volumes on the order of millions of cubic kilometers. The lavas are normally laid down over several million years, and they release large amounts of gases. The Réunion hotspot produced the Deccan Traps about 66 million years ago, coincident with the Cretaceous–Paleogene extinction event. The scientific consensus is that a meteor impact was the cause of the extinction event, but the volcanic activity may have caused environmental stresses on extant species up to the Cretaceous–Paleogene boundary.[citation needed] Additionally, the largest flood basalt event (the Siberian Traps) occurred around 250 million years ago and was coincident with the largest mass extinction in history, the Permian–Triassic extinction event, though it remains unknown whether it was completely responsible for that extinction.
Such outpourings are not explosive, though fire fountains may occur. Many volcanologists consider that Iceland may be a LIP that is currently being formed. The last major outpouring occurred in 1783–84 from the Laki fissure, which is approximately 40 km (25 mi) long. An estimated 14 km3 (3.4 cu mi) of basaltic lava was poured out during the eruption.
The Ontong Java Plateau now has an area of about 2,000,000 km2 (770,000 sq mi), and the province was at least 50% larger before the Manihiki and Hikurangi Plateaus broke away.

Massive explosive eruptions

Volcanic eruptions are classified using the Volcanic Explosivity Index, or VEI.
VEI 8 eruptions are colossal events that throw out at least 1,000 km3 (240 cu mi) Dense Rock Equivalent (DRE) of ejecta.
VEI 7 events eject at least 100 cubic kilometres (24 cu mi) DRE.
VEI 7 or 8 eruptions are so powerful that they often form circular calderas rather than cones, because the downward withdrawal of magma causes the overlying mass to collapse into the emptying magma chamber beneath.
One of the classic calderas is at Glen Coe in the Grampian Mountains of Scotland. First described by Clough et al. (1909),[7] its geology and volcanic succession have recently been re-analysed in the light of new discoveries.[8] There is an accompanying 1:25,000 solid geology map.
By way of comparison, the 1980 Mount St. Helens eruption was a VEI-5 with 1.2 km3 of ejecta.
Both Mount Pinatubo in 1991 and Krakatoa in 1883 were VEI-6 with 10 km3 (2.4 cu mi) and 25 km3 (6.0 cu mi) DRE, respectively. The death toll recorded by the Dutch authorities in 1883 was 36,417, although some sources put the estimate at more than 120,000 deaths.
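At these sizes the scale steps by roughly a factor of ten in ejecta volume per index point, so the examples above can be checked with a short sketch (an illustrative approximation of the VEI thresholds for large eruptions, not an official formula):

```python
import math

# For eruptions of at least 1 km3 DRE, the VEI thresholds quoted in the
# text step by powers of ten: VEI 5 >= 1 km3, VEI 6 >= 10 km3,
# VEI 7 >= 100 km3, VEI 8 >= 1,000 km3. The index can therefore be read
# off the base-10 logarithm of the volume.
def vei_for_large_eruption(dre_km3: float) -> int:
    """Approximate VEI for eruptions of at least 1 km3 DRE."""
    if dre_km3 < 1:
        raise ValueError("approximation applies only to eruptions >= 1 km3 DRE")
    return min(8, 5 + int(math.log10(dre_km3)))

# Volumes taken from the comparisons in the text
for name, volume_km3 in [("Mount St. Helens 1980", 1.2),
                         ("Pinatubo 1991", 10),
                         ("Krakatoa 1883", 25),
                         ("Toba YTT", 2800)]:
    print(f"{name}: VEI {vei_for_large_eruption(volume_km3)}")
```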

Known supereruptions

Cross-section through Long Valley Caldera
Location of Yellowstone hotspot over time (numbers indicate millions of years before the present).

Monday, December 30, 2013

Brainlike Computers, Learning From Experience


Erin Lubin/The New York Times
Kwabena Boahen holding a biologically inspired processor attached to a robotic arm in a laboratory at Stanford University.
PALO ALTO, Calif. — Computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.
The first commercial version of the new kind of computer chip is scheduled to be released in 2014. Not only can it automate tasks that now require painstaking programming — for example, moving a robot’s arm smoothly and efficiently — but it can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.
The new computing approach, already in use by some large technology companies, is based on the biological nervous system, specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.
In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.
Designers say the computing style can clear the way for robots that can safely walk and drive in the physical world, though a thinking or conscious computer, a staple of science fiction, is still far off on the digital horizon.
“We’re moving from engineering computing systems to something that has many of the characteristics of biological computing,” said Larry Smarr, an astrophysicist who directs the California Institute for Telecommunications and Information Technology, one of many research centers devoted to developing these new kinds of computer circuits.
Conventional computers are limited by what they have been programmed to do. Computer vision systems, for example, only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation.
But last year, Google researchers were able to get a machine-learning algorithm, known as a neural network, to perform an identification task without supervision. The network scanned a database of 10 million images, and in doing so trained itself to recognize cats.
In June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately.
The new approach, used in both hardware and software, is being driven by the explosion of scientific knowledge about the brain. Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, said that is also its limitation, as scientists are far from fully understanding how brains function.
“We have no clue,” he said. “I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.”
Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of 1s and 0s. They generally store that information separately in what is known, colloquially, as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.
The data — for instance, temperatures for a climate model or letters for word processing — are shuttled in and out of the processor’s short-term memory while the computer carries out the programmed action. The result is then moved to its main memory.
The new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.
They are not “programmed.” Rather the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” That generates a signal that travels to other components and, in reaction, changes the neural network, in essence programming the next actions much the same way that information alters human thoughts and actions.
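A minimal software sketch can make this concrete. The following toy leaky integrate-and-fire neuron illustrates the general principle only (it is not any vendor's actual chip design, and all names and constants here are invented): weighted input spikes accumulate, the neuron "spikes" when a threshold is crossed, and the synapses that contributed are strengthened.

```python
import random

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron with a simple Hebbian-style update."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9, lr=0.05):
        self.weights = [random.uniform(0.1, 0.3) for _ in range(n_inputs)]
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak   # fraction of potential retained each time step
        self.lr = lr       # learning rate for weight updates

    def step(self, spikes_in):
        """spikes_in: list of 0/1 input spikes. Returns 1 if the neuron fires."""
        self.potential = self.potential * self.leak + sum(
            w * s for w, s in zip(self.weights, spikes_in))
        if self.potential >= self.threshold:
            # fire, reset, and strengthen the synapses that were active
            self.potential = 0.0
            self.weights = [w + self.lr * s
                            for w, s in zip(self.weights, spikes_in)]
            return 1
        return 0

random.seed(0)
neuron = LIFNeuron(n_inputs=4)
pattern = [1, 1, 0, 1]          # a recurring input pattern
fires = sum(neuron.step(pattern) for _ in range(50))
print(f"spikes over 50 steps: {fires}")
```

Nothing here is a stored program: repeated exposure to the pattern shifts the weights, so the neuron responds more readily to inputs it has "learned", which is the sense in which such chips adapt as data flows in.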
“Instead of bringing data to computation as we do today, we can now bring computation to data,” said Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort. “Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.”
The new computers, which are still based on silicon chips, will not replace today’s computers, but will augment them, at least for now. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the giant centralized computers that make up the cloud. Modern computers already consist of a variety of coprocessors that perform specialized tasks, like producing graphics on your cellphone and converting visual, audio and other data for your laptop.
One great advantage of the new approach is its ability to tolerate glitches. Traditional computers are precise, but they cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever changing, allowing the system to continuously adapt and work around failures to complete tasks.
Traditional computers are also remarkably energy inefficient, especially when compared to actual brains, which the new neurons are built to mimic.
I.B.M. announced last year that it had built a supercomputer simulation of the brain that encompassed roughly 10 billion neurons — more than 10 percent of a human brain. It ran about 1,500 times more slowly than an actual brain. Further, it required several megawatts of power, compared with just 20 watts of power used by the biological brain.
Running the program, known as Compass, which attempts to simulate a brain, at the speed of a human brain would require a flow of electricity in a conventional computer that is equivalent to what is needed to power both San Francisco and New York, Dr. Modha said.
I.B.M. and Qualcomm, as well as the Stanford research team, have already designed neuromorphic processors, and Qualcomm has said that it is coming out in 2014 with a commercial version, which is expected to be used largely for further development. Moreover, many universities are now focused on this new style of computing. This fall the National Science Foundation financed the Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.
The largest class on campus this fall at Stanford was a graduate level machine-learning course covering both statistical and biological approaches, taught by the computer scientist Andrew Ng. More than 760 students enrolled. “That reflects the zeitgeist,” said Terry Sejnowski, a computational neuroscientist at the Salk Institute, who pioneered early biologically inspired algorithms. “Everyone knows there is something big happening, and they’re trying to find out what it is.”

Sunday, December 29, 2013

8 cm PAW 600

From Wikipedia, the free encyclopedia


8 cm Panzerabwehrwerfer 600
PAW 600 at the Aberdeen Proving Ground.
Type Anti-tank gun
Place of origin Nazi Germany
Service history
Used by Nazi Germany
Wars World War II
Production history
Designer Rheinmetall
Designed 1943-44
Produced Dec 1944-Mar 1945
Number built 260
Specifications
Weight 640 kg (1,400 lb)
Length 2.95 m (9 ft 8 in)
Crew 6

Caliber 81.4 mm (3.20 in)
Breech vertical block
Recoil hydropneumatic
Carriage split trail
Elevation -6° to +32°
Traverse 55°
Muzzle velocity 520 m/s (1,706 ft/s)
Effective range 750 m (820 yd) (anti-tank)
Maximum range 6,200 m (6,800 yd) (high explosive)
The PAW 600 (Panzerabwehrwerfer 600, officially designated 8H63) was a lightweight anti-tank gun that used the high-low pressure system to fire hollow-charge warheads. It was issued to the Wehrmacht in small numbers in 1945; only about 250 were produced before the war's end, and none were reported to have seen combat.

Background

By 1943, the German army faced various problems with its existing anti-tank gun designs. It had started the war with the 3.7 cm PaK 36, which had the advantage of being very light at 328 kg, so that it could be moved a reasonable distance by hand using only its own crew. By 1941, this gun was inadequate; it could not deal with the heaviest armoured Soviet and British tanks. Its replacement, the 5 cm PaK 38, offered better performance (though still only marginal against the new threat) but, at 1,000 kg, was at the absolute limit of what the gun's own crew could effectively move into and out of a firing position by hand. The next gun, the 7.5 cm PaK 40, was a very effective tank-killer but, at 1,425 kg, was no longer suitable for use by the infantry. A much larger crew and a vehicle were required to move this gun any distance at all, and often just to displace it from its firing position. Many were lost intact simply because they were overrun before their crews could move them. As the guns got bigger to deal with the latest tank technology and became too heavy for tactical employment, they also became more expensive: the PaK 36 cost RM 5,730 and 900 work-hours, while a PaK 40 cost RM 12,000 and took 2,000 work-hours to build. The situation was so bad that, by May 1944, the 14th (Panzerjäger) Kompanie of infantry regiments was having its heavy anti-tank guns removed and replaced by the Panzerschreck rocket launcher. But with an effective range of only 150 meters, this weapon did not provide the depth of fire required for the regiment's anti-tank defense. The only other alternative for a light anti-tank gun had been recoilless weapons, but the German Army was less than enthusiastic because this type of weapon has many shortcomings, particularly a high demand for propellant.

Design and development

In 1943, a specification was issued for a lightweight anti-tank gun that used less propellant than a rocket or recoilless weapon, yet was sufficiently accurate to hit a 1-meter square target at 750 meters range. Rheinmetall-Borsig proposed a design to meet this requirement using the new high-low pressure ballistic principle, also known as the Hoch-Niederdruck system. In this system, the high pressure caused by the combustion of the propellant was confined to the relatively heavy breech section and did not act directly on the projectile; it was instead allowed to bleed gradually into the barrel at a controlled rate and lower pressure to propel the projectile. Thus the barrel could be exceptionally light in a weapon that still had the advantages which accrue from high pressure. The carriage too could be very light, although initial prototype carriages proved to be too light and had to be redesigned. The resulting PAW 600 (later redesignated 8H63) gun weighed about 600 kg, less than half that of the 7.5 cm PaK 40, while having comparable armor penetration out to its full effective anti-tank range of 750 meters.
Unlike previous anti-tank guns, which relied on firing steel projectiles at high velocities to penetrate heavy armor, the 8H63 was designed to fire shaped charge ammunition (called also hollow-charge ammunition, high explosive anti-tank, or HEAT). Because shaped charge warheads perform best when no spin is imparted on the projectile, the 8H63 was a smoothbore design. To simplify development and manufacture, the projectiles used were based on the widely used 8 cm Granatwerfer 34 mortar (actual caliber 81.4 mm). This allowed the use of existing tooling in the manufacture of ammunition, which reduced the costs. The cartridge case was developed from the 10.5 cm leFH 18 howitzer.
The standard shaped-charge projectile, designated 8 cm W Gr Patr H1 4462, weighed 2.70 kg. The propelling charge was 360 g of Digl B1 P (compared to a 3.8 kg propelling charge in a PaK 40) and, with a muzzle velocity of 520 m/s, this gave an effective range of 750 meters against a tank-sized target. Armor penetration was 140 mm of vertical armor, comparable to the 7.5 cm PaK 40 firing the rare and expensive tungsten-cored PzGr 40 shot.
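A quick arithmetic check, using only the charge weights quoted above (and assuming the two figures describe comparable anti-tank rounds), shows the scale of the propellant savings:

```python
# Propellant per shot, from the figures in the text.
paw_charge_kg = 0.360    # 8H63 shaped-charge round
pak40_charge_kg = 3.8    # 7.5 cm PaK 40 round

ratio = pak40_charge_kg / paw_charge_kg
print(f"The PaK 40 used about {ratio:.1f}x more propellant per shot")
```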

Other uses and rounds

Because the ammunition was developed from the standard infantry mortar, any type of round developed for the mortar could have been readily adapted for the 8H63, including high-explosive (HE), smoke and illuminating rounds. An HE round, the 8 cm W Gr Patr 5071, with a 4.46 kg projectile and a total round weight of 8.30 kg, was developed. It could employ three different charge increments for maximum ranges of 3,400 meters at 220 m/s, 5,600 meters at 320 m/s, or 6,200 meters at 420 m/s, about three times farther than the mortar, and with the possibility of direct or indirect fire.
This performance brings up another interesting feature of this gun. Traditional high-velocity anti-tank guns were very inefficient when employed as field artillery firing explosive rounds in support of the infantry: the thick projectile walls necessary to withstand high velocities left room for only a small explosive payload, and the amount of propellant used was wasteful. The guns also fired at low trajectories (+22 degrees for a PaK 40), which limited their utility. For this reason, the German Army had always employed infantry guns, such as the 7.5 cm leichtes Infanteriegeschütz 18, at the regimental level to provide fire support under the direct and immediate control of the infantry. This meant every infantry regiment had an infantry gun company for use against unarmored targets and a tank destroyer (anti-tank gun) company for use against armored targets. The 8H63, firing an explosive round, had lethality almost comparable to the 7.5 cm infantry gun with greater range. Its multi-charge cartridge, 55-degree traverse (fine for anti-tank defence) and +32-degree maximum elevation could have allowed the merger of the infantry gun and anti-tank gun categories, with resulting savings in production, logistics, and precious manpower.
The 8H63 was to be organized under the new 1945 Table of Organization and Equipment (TO&E) in anti-tank companies of 12 guns with 104 men, replacing the anti-tank and infantry gun companies (300+ men) of previous organizations.

Production

Some 260 guns and 34,800 rounds of ammunition were completed from December 1944, with 81 guns delivered to the troops in January 1945 and 155 listed on hand on March 1, 1945. Plans had called for the production of 1,000 guns per month, along with 4,000,000 anti-tank and 800,000 explosive shells per month. Production models either had the purpose-built light carriage or used surplus PAK 38 carriages with PAK 40 muzzle brakes, the latter being slightly heavier.

Further development perspectives

Several self-propelled models were proposed in 1945, but the war ended before prototypes could be built. Had the war in Europe carried on longer, the 8H63 would likely have replaced the towed PAK 40 and the various 7.5 cm infantry guns in production.
Krupp was also developing an enlarged 10 cm design, known as the 10 cm PAW 1000 or 10H64, at the end of the war, but it never reached production. It would have increased armor penetration to 200 mm and effective range against tanks to 1,000 meters, in a gun weighing about 1,000 kg.

Nomenclature

The Panzerabwehrwerfer 600 ("anti-tank thrower 600") designation was used by Rheinmetall during the design phase. The service designation was 8H63, in accordance with the new designation system introduced during the last year of the war.
In 1944–45, the Germans changed their system of artillery designations from the old "year" system. Each weapon was to have a number showing its caliber group, a letter denoting its ammunition group, and two final digits taken from the weapon's drawing number. In this case, 8 denoted the 81.4 mm caliber using the H group of ammunition. The shells were all to be designated H with a four-digit number, the first three digits being the drawing number and the last the shell's category, from the following list:
1 – high explosive
2 – hollow charge anti-tank
3 – armor-piercing
4 – high explosive, high capacity
5 – smoke
6 – gas
7 – incendiary
8 – leaflet
9 – practice
10 – proof projectile
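The scheme above can be illustrated with a small decoder for the four-digit shell number. The helper name below is invented for illustration; the category table is the one listed above, and, consistent with it, the two rounds named earlier in this article decode correctly: 4462 ends in 2 (hollow charge anti-tank) and 5071 ends in 1 (high explosive):

```python
# Decoder for the 1944-45 shell designation scheme described above:
# an 'H' prefix plus a four-digit number, where the first three digits
# are the drawing number and the last digit is the shell category.
SHELL_CATEGORIES = {
    1: "high explosive",
    2: "hollow charge anti-tank",
    3: "armor-piercing",
    4: "high explosive, high capacity",
    5: "smoke",
    6: "gas",
    7: "incendiary",
    8: "leaflet",
    9: "practice",
    0: "proof projectile",  # listed as "10" in the table above
}

def decode_shell(designation: str) -> tuple:
    """Split a designation like 'H 4462' into (drawing number, category)."""
    digits = designation.strip().lstrip("H").strip()
    drawing, category = int(digits[:3]), int(digits[3])
    return drawing, SHELL_CATEGORIES[category]

print(decode_shell("H 4462"))  # prints (446, 'hollow charge anti-tank')
print(decode_shell("H 5071"))  # prints (507, 'high explosive')
```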

References

  • Gander, Terry and Chamberlain, Peter. Weapons of the Third Reich: An Encyclopedic Survey of All Small Arms, Artillery and Special Weapons of the German Land Forces 1939–1945. New York: Doubleday, 1979. ISBN 0-385-15090-3.
  • Fleischer, Wolfgang and Eiermann, Richard. German Anti-Tank (Panzerjäger) Troops in WWII. Atglen, PA: Schiffer Military Publishing, 2004. ISBN 0-7643-2096-3.
  • Hogg, Ian V. German Artillery of World War Two. 2nd corrected edition. Mechanicsburg, PA: Stackpole Books, 1997. ISBN 1-85367-480-X.

High–low system design of cannon and antitank launcher


The High-Low system, also referred to as the "High-Low Pressure system", the "High-Low Propulsion System", and the "High-Low Projection System", is a design of cannon and antitank launcher that uses a smaller high-pressure chamber to contain the propellant. When the propellant is ignited, the high-pressure gases bleed out through vents (or ports) into a much larger low-pressure chamber, which pushes the projectile forward at reduced pressure. With the High-Low system a weapon can be designed with reduced or negligible recoil, and the weight of the weapon and its ammunition can be significantly reduced. Manufacturing cost and production time are drastically lower than for a standard cannon or other small-arms weapon system firing a projectile of the same size and weight. The system also uses propellant far more efficiently than earlier recoilless weapons, in which most of the propellant is expended to the rear of the weapon to counter the recoil of the projectile being fired.[1]

Origin

In the final years of World War II, Nazi Germany researched and developed low-cost antitank weapons. Large antitank cannon firing high-velocity projectiles were the best option, but they were expensive to produce, required a well-trained crew, and lacked mobility on the battlefield once emplaced. Antitank rocket launchers and recoilless rifles, while much lighter and simpler to manufacture, gave away the gunner's position and were not as accurate as antitank cannon. Recoilless rifles also used a huge amount of propellant to fire the projectile, with estimates ranging from only one-fifth to one-ninth of the propellant gases being used to push the projectile forward.[notes 1] The German military asked for an antitank weapon with performance in between that of the standard high-velocity cannon and the cheaper rocket and recoilless infantry antitank weapons. It also stipulated that any solution had to use propellant more efficiently, as German war industry had reached its maximum cannon-propellant production capacity.[2]
In 1944, the German firm Rheinmetall-Borsig came up with a completely new concept for propelling a projectile from a cannon which, while not recoilless, greatly reduced recoil and drastically cut manufacturing cost. The concept was called the Hoch-und-Niederdruck System, which roughly translates to "High-Low Pressure System". With this system, only the very back of the cannon's breech had to be reinforced against the high firing pressures.
Rheinmetall designed an antitank cannon using the High-Low Pressure System that fired a standard general-purpose HE 8.1-cm mortar bomb modified to function as an antitank round with a shaped charge.[notes 2] Normally a mortar bomb cannot be fired from a cannon, because its thin walls cannot endure the high stresses of firing. The 8.1-cm round was therefore mounted on a rod fixed to a round steel plate pierced by eight holes, with a shear pin holding the round to the rod. The round and plate were fitted at the mouth of a cut-down cannon shell casing containing two propellant bags. On firing, pressure built up in the shell casing, which, along with the reinforced breech, acted as the high-pressure chamber; the gases then bled out through the holes in the steel plate at half the pressure into the thinner-walled barrel, which acted as the low-pressure chamber. Unlike a standard cannon, in which the propellant "kicks" the projectile out of the barrel with an almost instant acceleration to maximum velocity, the Rheinmetall design "shoved" the projectile down the barrel with a steadily increasing velocity. There was recoil, but nowhere near that of the 5-cm and 7.5-cm antitank cannons then in German service, which required a heavily constructed carriage, a heavy and complex hydraulic recoil mechanism, and a muzzle brake to contain the massive recoil on firing. The Rheinmetall solution required only a lightweight recoil unit and muzzle brake. The only major drawback was its maximum range of 750 meters in direct fire against tanks, but this was offset by an armor penetration of 140 mm and the absence of a telltale back-blast. The Germans ordered the Rheinmetall gun into production, designating it the 8-cm Panzerabwehrwerfer 600 (PAW 600).[notes 3] Only about 250 were produced before the war's end, and none were reported to have seen combat.[2]
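The pressure behavior described above can be sketched with a toy two-chamber model: gas generated in the small high-pressure chamber vents into the much larger low-pressure chamber, whose gentler pressure curve "shoves" rather than "kicks" the projectile. All chamber sizes, burn rates and the vent constant below are invented for illustration and are not Rheinmetall's figures:

```python
# Toy two-chamber model of the Hoch-und-Niederdruck principle.
# All numbers are invented for illustration only.

HIGH_VOL, LOW_VOL = 1.0, 8.0   # relative chamber volumes
VENT_RATE = 0.15               # fraction of pressure difference vented per step
BURN = [40.0] * 10             # pressure added to the high chamber per step

high = low = 0.0               # current chamber pressures (arbitrary units)
peak_high = peak_low = 0.0
for step in range(60):
    if step < len(BURN):
        high += BURN[step]                 # propellant burning in high chamber
    flow = VENT_RATE * max(high - low, 0)  # gas bleeds through the vent holes
    high -= flow
    low += flow * HIGH_VOL / LOW_VOL       # same gas expands into larger volume
    peak_high = max(peak_high, high)
    peak_low = max(peak_low, low)

print(f"peak high-chamber pressure: {peak_high:.0f}")
print(f"peak low-chamber pressure:  {peak_low:.0f}")
# Only the small high-pressure chamber (the reinforced breech and casing)
# must withstand the peak; the barrel sees a much lower, more gradual rise.
```

The qualitative point is the one the text makes: the breech end must be built for the full pressure, while the barrel, carriage and recoil gear only need to handle the far lower pressure of the large chamber.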

Further development

The Allies captured and examined the PAW 600, but initially showed little interest in the system the Germans had developed. The first example of a High-Low System developed after World War II was the British Limbo antisubmarine weapon, which launched depth charge-like projectiles. The Limbo was a development of the World War II Squid, which, while effective, was limited to a fixed range of 275 meters. The Limbo, by opening and closing vents that varied the pressure of the gases on firing, allowed the range to be varied between 336 meters and almost 1,000 meters.[3]

M79 40-mm grenade launcher


Cross Section of 40mm HEDP Round

Inside view of a spent casing for a 40mm grenade, showing the internal pressure chamber for the high-low pressure system.
The best-known use of the High-Low System was by the U.S. Army with the introduction of the M79 grenade launcher shortly before the Vietnam War. The M79 fired a 40-mm shell containing a standard fragmentation grenade with a modified fuze. The cartridge casing contained a heavy cup-shaped high-pressure chamber in its base. On firing, the propellant builds up pressure until it breaks through the copper cover, venting into the low-pressure chamber. The U.S. Army referred to its high-low system as the High-Low Propulsion System.[4] Along with a heavy rubber pad on the M79's butt stock, the High-Low system kept recoil forces manageable for the infantryman firing the weapon.
The M79 was later replaced by the M203, which mounts under the barrel of an assault rifle.[5] The U.S. Army later developed a higher-velocity 40-mm round using the High-Low Propulsion System for heavier machine gun-type grenade launchers mounted on vehicles and helicopters. Today the 40-mm grenade family is extremely popular and in use by armies worldwide, with variants produced in countries other than the U.S.; one reputable reference publication in 1994 needed almost a dozen pages to list all the variants and nations producing 40-mm grenade ammunition based on the U.S. Army's development of the 1960s.[6]

Russian developments

Shortly after the Vietnam War ended, the Soviet Union introduced a 40-mm grenade launcher that used the High-Low principle, but with a twist on the original design. The GP-25 40-mm grenade launcher fits under an assault rifle and fires a caseless, muzzle-loaded projectile. Instead of a case, the high-pressure chamber, pierced by ten vent holes, is located at the rear of the projectile itself, and the launcher barrel acts as the low-pressure chamber.[7] The expanding propellant gases also force the drive band to engage the launcher's rifling grooves, much like the muzzle-loading rifled Parrott cannon of the American Civil War.
While little documentation exists, in the 1950s the Soviet Army developed a 73-mm cannon for wheeled armored reconnaissance vehicles that fired a munition very similar in operation to the original World War II German concept. It was never introduced into service, however; instead the Russians developed a low-velocity 73-mm cannon that fired a rocket projectile ejected by a small charge in the normal fashion.[2]

Swedish use

External images:
  • Pansarskott m/68 "Miniman" – Pskott m/68, from a Swedish Army manual
  • Miniman high-low launch system, located behind the 74-mm HEAT projectile
Besides the previously mentioned family of popular 40-mm grenades, the only other major use of a High-Low System was by the Swedish firm FFV in their 1960s-era Miniman one-man infantry antitank weapon. The Miniman was simpler and cheaper than anything imagined by designers in World War II. Inside what looks like a rocket-launcher tube is a HEAT projectile attached by a break-away bolt to an aluminum-alloy tube with ports drilled in it, which acts as a kind of high-pressure chamber; the launch tube in which it is mounted acts as the low-pressure chamber. When the propellant is ignited in the aluminum tube, gases escape through the ports and build up in the launch tube to the point of almost causing recoil. The break-away bolt then snaps, allowing the projectile to move forward. Unlike other High-Low Systems, gases are allowed to escape to the rear of the launch tube, achieving a totally recoilless effect.[8]