
The School of CAMIDRCS

(Coalition Against Mysticism in Defence of Reason, Commonsense and Science)


A Search Strategy for Detecting Extraterrestrial Intelligence

by Prof. Carl Sagan

 

Suppose we have arranged a meeting at an unspecified place in New York City with a stranger we have never met and about whom we know nothing – a rather foolish arrangement, but one that is useful for our purposes. We are looking for him, and he is looking for us. What is our search strategy? We probably would not stand for a week on the corner of Seventy-eighth Street and Madison Avenue.

Instead, we would recall that there are a number of widely known landmarks in New York City – as well known to the stranger as to us. He knows we know them, we know he knows we know them, and so on. We then shuttle among these landmarks: The Statue of Liberty, the Empire State Building, Grand Central Station, Radio City Music Hall, Lincoln Center, the United Nations, Times Square, and just conceivably, City Hall. We might even indulge ourselves in a few less likely possibilities, such as Yankee Stadium or the Manhattan entrance to the Staten Island Ferry. But there are not an infinite number of possibilities. There are not millions of possibilities; there are only a few dozen possibilities, and in time we can cover them all.

The situation is just the same in the frequency-search strategy for interstellar radio communication. In the absence of any prior contact, how do we know precisely where to search? How do we know which frequency or “station” to tune in on? There are at least millions of possible frequencies with reasonable radio bandpasses. But a civilization interested in communicating with us shares with us a common knowledge about radio astronomy and about our Galaxy. They know, for example, that the most abundant atom in the universe, hydrogen, characteristically emits at a frequency of 1,420 Megahertz. They know we know it. They know we know they know it. And so on. There are a few other abundant interstellar molecules, such as water or ammonia, which have their own characteristic frequencies of emission and absorption. Some of these lie in a region of the galactic radio spectrum where there is less background noise than others. This is also shared information. Students of this problem have come up with a short list of possibly a dozen frequencies that seem to be the obvious ones to examine. It is even conceivable that water-based life will communicate at water frequencies, ammonia-based life at ammonia frequencies, etc.
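The "short list" is built from well-measured spectral lines. As a small illustration (mine, not Sagan's), the sketch below converts a few of these frequencies into wavelengths via λ = c/f; the particular line list is illustrative, not exhaustive.

```python
# A minimal sketch (not from the original text): converting a few well-known
# interstellar line frequencies into wavelengths via lambda = c / f.

C = 299_792_458.0  # speed of light, m/s

candidate_lines_mhz = {
    "neutral hydrogen (H I)": 1420.406,   # the hydrogen line discussed above
    "hydroxyl (OH)": 1667.359,            # one of several OH lines near 1.6 GHz
    "water (H2O) maser": 22235.08,        # emission line of interstellar water
    "ammonia (NH3)": 23694.5,             # one of the NH3 inversion lines
}

for name, f_mhz in candidate_lines_mhz.items():
    wavelength_cm = C / (f_mhz * 1e6) * 100
    print(f"{name}: {f_mhz:>9.1f} MHz  ->  {wavelength_cm:6.2f} cm")
```

The hydrogen line comes out at about 21 cm, which is why it is often called the 21-centimeter line.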

There appears to be a fair chance that advanced extraterrestrial civilizations are sending radio signals our way, and that we have the technology to receive such signals. How should a search for these signals be organized? Existing radio telescopes, even very small ones, would be adequate for a preliminary search. Indeed, the ongoing search at the Gorky Radiophysical Institute, in the Soviet Union, involves telescopes and instrumentation that are quite modest by contemporary standards.

The amiable and capable president of the Soviet Academy of Sciences, M. V. Keldysh, once told me, with a twinkle in his eye, that “when extraterrestrial intelligence is discovered, then it will become an important scientific problem.” A leading American physicist has argued forcefully with me that the best method to search for extraterrestrial intelligence is simply to do ordinary astronomy; if the discovery is to be made, it will be made serendipitously. But it seems to me that we can do something to enhance the likelihood of success in such a search, and that the ordinary pursuit of radio astronomy is not quite the same as an explicit search of certain stars, frequencies, bandpasses, and time constants for extraterrestrial intelligence.

But there are enormous numbers of stars to investigate, and many possible frequencies. A reasonable search program will almost certainly be a very long one. Such a search, using a large telescope full time, should take at least decades, by conservative estimates. The radio observers in such an enterprise, no matter how enthusiastic they may be about the search for extraterrestrial intelligence, would very likely become bored after many years of unsuccessful searching. A radio astronomer, like any other scientist, is interested in working on problems that have a high probability of more immediate results.

The ideal strategy would involve a large telescope that could devote something like half time to the search for extraterrestrial intelligent radio signals and about half time to the study of more conventional radioastronomical objectives, such as planets, radio stars, pulsars, interstellar molecules, and quasars. The difficulty in using several existing radio observatories, each for, say, 1 percent of their time, is that these activities would have to be pursued for many centuries to have a reasonable probability of success. Since the time on existing radio telescopes is mainly spoken for, larger allocations of time seem unlikely.
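The contrast between "decades" and "centuries" is straightforward arithmetic. The sketch below uses assumed figures for the number of targets and the dwell time per target (they are not Sagan's numbers), so the absolute durations are only illustrative; the point is the factor of fifty between half-time and one-percent-time use of a telescope.

```python
# A rough back-of-the-envelope sketch (assumed numbers, not Sagan's):
# how long a targeted search takes as a function of the fraction of
# telescope time devoted to it.

def search_duration_years(n_targets, hours_per_target, time_fraction):
    """Calendar time needed to observe every target once."""
    total_hours = n_targets * hours_per_target / time_fraction
    return total_hours / (24 * 365)

N_TARGETS = 1_000_000        # "at least millions" of stars, per the text
HOURS_PER_TARGET = 0.1       # assumed: ~6 minutes per star per setting

for fraction in (0.5, 0.01):
    years = search_duration_years(N_TARGETS, HOURS_PER_TARGET, fraction)
    print(f"telescope fraction {fraction:4.0%}: ~{years:,.0f} years")
```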

A wide variety of objects obviously should be examined: G-type stars, like our own; M-type stars, which are older; and exotic objects, which may be black holes or possible manifestations of astroengineering activities. The number of stars and other objects in our own Milky Way Galaxy is about two hundred billion, and the number that we must examine to have a fair chance of detecting such signals seems to be at least millions.

There is an alternative strategy to searching painfully each of millions of stars for the signals from a civilization not much more advanced than our own. We might examine an entire galaxy all at once for signals from civilizations much more advanced than ours. A small radio telescope can point at the nearest spiral galaxy to our own, the great galaxy M31 in the constellation Andromeda, and simultaneously observe some two hundred billion stars. Even if many of these stars were broadcasting with a technology only slightly in advance of our own, we would not pick them up. But if only a few are broadcasting with the power of a much more advanced civilization, we would detect them easily. In addition to examining nearby stars only slightly in advance of us, it therefore makes sense to examine, simultaneously, many stars in neighboring galaxies, only a few of which may have civilizations greatly in advance of our technology.
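The trade-off Sagan describes follows from the inverse-square law. The sketch below (with an assumed 100-light-year distance for a "nearby" target, my figure) compares the transmitter power needed to deliver the same flux at Earth from M31 and from a nearby star.

```python
# A minimal inverse-square sketch (assumptions mine, not Sagan's):
# received flux ~ P / (4 * pi * d^2), so equal flux from a more distant
# transmitter requires power scaled by the square of the distance ratio.

LY_NEARBY_STAR = 100          # assumed distance of a "nearby" target, light-years
LY_ANDROMEDA = 2_500_000      # approximate distance to M31, light-years

power_ratio = (LY_ANDROMEDA / LY_NEARBY_STAR) ** 2
print(f"A transmitter in M31 needs ~{power_ratio:.1e} times more power")
# ~6e8: only civilizations far beyond our level would be detectable there,
# but a single pointing at M31 covers ~2e11 stars at once.
```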

 

We have been describing a search for signals beamed in our general direction by civilizations interested in communicating with us. We ourselves are not beaming signals in the direction of some specific other star or stars. If all civilizations listened and none transmitted, we would each reach the erroneous conclusion that the Galaxy was unpopulated, except by ourselves. Accordingly, it has been proposed – as an alternative and much more expensive enterprise – that we also “eavesdrop”; that is, tune in on the signals that a civilization uses for its own purposes, such as domestic radio and television transmission, radar surveillance systems, and the like. A large radio telescope devoting half time to a rigorous search for intelligent extraterrestrial signals beamed our way would cost tens of millions of dollars (or rubles) to construct and operate. An array of large radio telescopes, designed to eavesdrop to a distance of some hundreds of light-years, would cost many billions of dollars.

 

In addition, the chance of success in eavesdropping may be slight. One hundred years ago we had no domestic radio and television signals leaking out into space. One hundred years from now the development of tight beam transmission by satellites and cable television and new technologies may mean that again no radio and television signals would be leaking into space. It may be that such signals are detectable only for a few hundred years in the multibillion-year history of a planet. The eavesdropping enterprise, in addition to being expensive, may also have a very small probability of success.

 

The situation we find ourselves in is rather curious. There is at least a fair probability that there are many civilizations beaming signals our way. We have the technology to detect these signals out to immense distances – to the other side of the Galaxy. Except for a few back-burner efforts in the United States and the Soviet Union, we – that is, mankind – are not carrying out the search for extraterrestrial intelligence. Such an enterprise is sufficiently exciting and, at last, sufficiently respectable that there would be little difficulty in staffing a radio observatory designed for this purpose with devoted, capable, and innovative scientists. The only obstacle appears to be money.

 

While not small change, some tens of millions of dollars (or rubles) is, nevertheless, an amount of money well within the reach of wealthy individuals and foundations. In fact, there is in astronomy a long and proud history of observatories funded by private individuals and foundations: The Lick Observatory, on Mount Hamilton, California, by Mr. Lick (who wanted to build a pyramid, but settled for an observatory – in the base of which he is buried); the Yerkes Observatory in Williams Bay, Wisconsin, by Mr. Yerkes; the Lowell Observatory in Flagstaff, Arizona, by Mr. Lowell; and the Mount Wilson and Mount Palomar Observatories in Southern California, by a foundation established by Mr. Carnegie. Government money will probably be forthcoming for such an enterprise eventually. After all, it costs about the same as the replacement costs of U.S. aircraft shot down over Vietnam in Christmas week, 1972. But a radio telescope designed for communication with extraterrestrial intelligence and an attached institute of exobiology would make a very fitting personal memorial for someone.

—The Cosmic Connection, 1973, p. 163


What Happened Before the Big Bang?

by Michio Kaku

Not only would a quantum theory of gravity resolve what happens at the center of a black hole; it would also resolve what happened before the Big Bang.

At present, there is conclusive evidence that a cataclysmic explosion occurred roughly 15 billion years ago which sent the galaxies in the universe hurtling in all directions. Decades ago, physicist George Gamow and his colleagues predicted that the “echo” or afterglow of the Big Bang should be filling up the universe even today, radiating at a temperature just above absolute zero. It wasn’t until 1992, however, that the Cosmic Background Explorer (COBE) satellite finally picked up this “echo” of the Big Bang. Physicists were elated to find that hundreds of data points perfectly matched the prediction of the theory. The COBE satellite detected the presence of a background microwave radiation, with a temperature of 3 degrees above absolute zero, which fills up the entire universe.
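A quick calculation (mine, not in Kaku's text) shows why an afterglow at roughly 3 kelvin shows up as microwave radiation: Wien's displacement law gives the peak wavelength of a blackbody at that temperature.

```python
# A small sketch (not in the original text): Wien's displacement law,
# lambda_peak = b / T, for the measured background temperature.

WIEN_B = 2.897771955e-3   # Wien displacement constant, m*K
T_CMB = 2.73              # background temperature, kelvin

lambda_peak_mm = WIEN_B / T_CMB * 1000
print(f"Peak wavelength of a {T_CMB} K blackbody: {lambda_peak_mm:.2f} mm")
# ~1.06 mm, i.e. squarely in the microwave band.
```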

Although the Big Bang theory is on solid experimental grounds, the frustrating feature of Einstein’s theory is that it says nothing about what happened before the Big Bang or why there was this cosmic explosion. In fact, Einstein’s theory says that the universe was originally a pinpoint that had infinite density, which is physically impossible.

Infinite singularities are not allowed in nature, so ultimately a quantum theory of gravity should give us a clue as to where the Big Bang came from.

The superstring theory, being a completely finite theory, gives us deeper insight into the era before the Big Bang. The theory states that at the instant of creation, the universe was actually an infinitesimal ten-dimensional bubble. But this bubble (somewhat like a soap bubble) split into six- and four-dimensional bubbles. The six-dimensional universe suddenly collapsed, thereby expanding the four-dimensional universe into the standard Big Bang.

Furthermore, this excitement about quantizing gravity is fueling a new branch of physics called “quantum cosmology,” which tries to apply the quantum theory to the universe at large. At first, quantum cosmology sounds like a contradiction in terms. The “quantum” deals with the very small, while “cosmology” deals with the very large, the universe itself. However, at the instant of creation, the universe was very small, so quantum effects dominated that early moment in time.

Quantum cosmology is based on the simple idea that we should treat the universe as a quantum object, in the same way that we treat the electron as a quantum object. In the quantum theory, we treat the electron as existing in several energy states at the same time. The electron is free to move between different orbits or energy states. This, in turn, gives us modern chemistry. Thus, according to Heisenberg’s Uncertainty Principle, you never know precisely where the electron is. The electron thus exists in several “parallel states” simultaneously.
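As a small numerical illustration of the Uncertainty Principle just mentioned (my example, not Kaku's): confining an electron to a region the size of an atom already forces a large spread in its velocity, which is why it cannot sit in one sharp orbit.

```python
# A minimal sketch of the Heisenberg relation delta_x * delta_p >= hbar / 2.
# The confinement length is an assumed, illustrative value.

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg

delta_x = 1e-10             # assumed confinement: roughly one atomic diameter, m
delta_p = HBAR / (2 * delta_x)
delta_v = delta_p / M_ELECTRON
print(f"Minimum velocity uncertainty: ~{delta_v:.1e} m/s")
# ~6e5 m/s: the electron is spread over many states rather than one sharp path.
```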

Now consider the universe to be similar to an electron. If we quantize the universe, the universe must now exist simultaneously in several “parallel universes.” Once we quantize the universe, we are necessarily led to believe that the universe can exist in parallel quantum states. When applied to a universe, it gives us the “multiverse.”

—Visions, Michio Kaku, 1998, p. 350

Beyond 2100: Our Place Among the Stars

by Prof. Michio Kaku

The fate of humanity ultimately must lie in the stars. This is not wishful thinking on the part of hopeless visionaries; it is mandated by the laws of quantum physics. Eventually, physics tells us, the earth must die.

Since it is inevitable that the earth will be destroyed sometime in the future, the space program may ultimately be our only salvation as a species. At some point in the distant future, either we stay on the planet and die with it or we leave and migrate to the stars.

Carl Sagan has written that human life is too precious to be restricted to one planet. Just as animal species increase their survivability by dispersing and migrating to different regions, humanity must eventually explore other worlds, if only out of self-interest. It is our fate to reach for the stars.

The upper limit for the existence of the earth is about 5 billion years, when the sun exhausts its hydrogen fuel and mutates into a red giant star. At that time, the atmosphere of the sun will expand enormously until it reaches the orbit of Mars. On earth, the oceans will gradually boil, the mountains will melt, the sky will be on fire, and the earth will be burnt into a cinder.

The poets have long asked whether the earth will die in fire or ice. The laws of quantum physics dictate the answer: the earth will die in fire. But even before that ultimate time 5 billion years from now when the sun exhausts its fuel, humanity will face a series of environmental disasters which could threaten its existence, such as cosmic collisions, new ice ages, and supernova explosions.

 

Cosmic Collisions

 

The earth lies within a cosmic shooting gallery filled with thousands of NEOs (Near Earth Objects) that could wipe out life on earth. Some are lurking in space undetected. In 1991, NASA estimated that there are 1,000 to 4,000 asteroids that cross the earth’s orbit which are greater than half a mile across and which could inflict enormous destruction on human civilization. The astronomers at the University of Arizona estimate that there are 500,000 near earth asteroids greater than a hundred meters across, and 100 million earth-crossing asteroids about ten meters across.

Surprisingly enough, every year, on average, there is an asteroid impact creating about 100 kilotons of explosive force. (Fortunately, these asteroids usually break up high in the atmosphere and rarely hit the earth’s surface.)

In June 1996, a close call with an NEO took place, when asteroid 1996JA1, about one-third of a mile across, came within 280,000 miles of the earth, or a bit farther than the moon. It would have hit the earth with the force of about 10,000 megatons of explosive power (greater than the combined U.S./Russian nuclear weapons stockpile).

There were several deeply unsettling facts concerning both the 1993 asteroid and 1996JA1. First, they were undetected, suddenly appearing almost out of nowhere. Second, they were discovered not by any government-sponsored monitoring organization (there is none) but by mere accident. (Two students at the University of Arizona stumbled across 1996JA1.)

An asteroid only a kilometer across would create cosmic havoc by impacting on the earth. Astronomer Tom Gehrels of the University of Arizona estimates it would have the energy of a million Hiroshima bombs. If it “hit on the West Coast,” he adds, “the East Coast would go down in an earthquake; all your buildings in New York would collapse.” The shock wave would flatten much of the United States. If it hit the oceans, the tidal wave it created could be a mile high, enough to flood most coastal cities on earth. On land, the dust and dirt of an asteroid impact sent into the atmosphere would cut off the sun and cause temperatures to plunge on earth.
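Gehrels's "million Hiroshima bombs" figure is easy to check to within an order of magnitude. The sketch below uses assumed values for the asteroid's density and impact speed, so it is an estimate of my own, not his calculation.

```python
# A back-of-the-envelope sketch (assumed values): kinetic energy of a 1 km
# stony asteroid at a typical encounter speed, expressed in Hiroshima bombs.

import math

DIAMETER_M = 1_000            # 1 km asteroid
DENSITY = 3_000               # assumed rocky density, kg/m^3
SPEED = 20_000                # assumed impact speed, m/s (~20 km/s)
HIROSHIMA_J = 6.3e13          # ~15 kilotons of TNT in joules

volume = math.pi / 6 * DIAMETER_M ** 3        # sphere volume
mass = DENSITY * volume
energy = 0.5 * mass * SPEED ** 2
print(f"Impact energy: ~{energy:.1e} J  (~{energy / HIROSHIMA_J:.1e} Hiroshima bombs)")
# A few million Hiroshima bombs -- the same order of magnitude as the estimate above.
```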

The most recent giant impact took place in Siberia, on June 30, 1908, near the Tunguska River, when a meteor or comet about fifty yards across exploded in midair, flattening up to 1,000 square miles of forest, as if a giant hand had come down from the sky. The tremors were recorded as far away as London.

About 15,000 years ago, a meteor hit Arizona, carving out the famous Barringer Crater, creating a hole almost three-quarters of a mile across. It was caused by an iron meteor about the size of a ten-story building.

And 64.9 million years ago (according to radioactive dating) the dinosaurs may have been killed off by the comet or meteor that hit the Yucatán in Mexico, gouging out an enormous crater about 180 miles across, making it the largest object to hit the earth in the last billion years.

One conclusion from all this is that a future meteor or comet impact which could threaten human civilization is inevitable. Furthermore, on the basis of previous incidents, we can even give a rough estimate of the time scale on which to expect another collision. Extrapolating from Newton’s laws of motion, there are 400 earth-crossing asteroids greater than one kilometer across which definitely will hit the earth at some time in the future.

Within the next 300 years, we therefore expect to see another Tunguska-sized impact, which could wipe out an entire city. On the scale of thousands of years, we expect to see another Barringer type of impact, which can destroy a region. And on the scale of millions of years, we expect to see another impact that may threaten human existence.

Unfortunately, NEOs have a high “giggle factor.” As a result, NASA has allocated only $1 million per year to identify these planet-killing objects. Most of the work locating these NEOs is performed by a handful of amateurs.

—Visions, Michio Kaku, 1998, p. 316

What Is a Space Suit?

by Negash Alamin

First, it is important to realize that many types of space suit have been developed throughout history, by different countries, for different purposes and different missions, from the suit worn by Russian cosmonaut Yuri Gagarin on the first crewed spaceflight in 1961 to the suits in use today. They are indispensable equipment in space exploration and investigation: protective attire that allows astronauts to carry out scientific experiments and repair machinery outside the International Space Station (ISS).

Temperatures in outer space fluctuate between 121 degrees centigrade and -232 degrees centigrade. Outer space is also a near-vacuum, with essentially no pressure, in which the unprotected human body cannot survive. A space suit must therefore perform several functions: provide stable atmospheric pressure; regulate temperature and provide cooling; allow efficient mobility for work; circulate oxygen and remove carbon dioxide; protect against UV radiation; and protect against micro-meteorites, which travel at tremendous speeds and could puncture a garment that is not resistant enough.

The need for space suits arose when travel at high altitudes became possible and the threat posed by low atmospheric pressure and temperature extremes became apparent. At high altitudes the air becomes thinner, and the pressure that keeps our body fluids inside our bodies becomes weaker. At an altitude of 10,667 m (about 35,000 feet) the atmospheric pressure is about 3.5 pounds per square inch, compared with 14.7 pounds per square inch at sea level. In addition, without sufficient pure oxygen an astronaut can lose consciousness within a minute.
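Those two pressure figures follow from the standard, idealized model of the lower atmosphere. The short sketch below, using textbook constants, reproduces both numbers; it is an approximation of my own, not a measurement from the article.

```python
# A sketch of the standard-atmosphere formula for the troposphere
# (an idealized model, shown only to reproduce the figures above).

P0 = 14.696        # sea-level pressure, psi
T0 = 288.15        # sea-level temperature, K
LAPSE = 0.0065     # temperature lapse rate, K/m
EXPONENT = 5.2561  # g*M / (R*L) for dry air

def pressure_psi(altitude_m):
    return P0 * (1 - LAPSE * altitude_m / T0) ** EXPONENT

print(f"Pressure at 10,667 m: ~{pressure_psi(10_667):.1f} psi")   # ~3.5 psi
print(f"Pressure at sea level: {pressure_psi(0):.1f} psi")        # 14.7 psi
```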

Modern space suits are self-contained miniature spacecraft, carrying everything necessary to keep the wearer alive and to aid in exploration. They even allow an astronaut to move about in space independently of the ISS module, using a system called SAFER (Simplified Aid For EVA Rescue), a backpack unit fitted with small thruster jets that the astronaut controls, much like the one used by George Clooney's character in the film Gravity.

In addition to keeping the scientist and explorer alive, these suits also make possible spacewalks, or extravehicular activity (EVA). Astronauts on the International Space Station (ISS) go out on spacewalks to fix equipment, conduct experiments, repair satellites, and perform other tasks. The first spacewalk was performed by Alexei Leonov; Ed White followed a few months later. In addition to the suit, astronauts use safety tethers attached to themselves and to their spacecraft, an extra safety measure that keeps them from floating off into space.

The details of every suit developed by Russian and American engineers over the years are too intricate to cover in this article, but a few notable types of space suit include:

  • SK-1 (worn by Yuri Gagarin in 1961)
  • Navy Mark IV (used for Project Mercury)
  • Gemini suits
  • Apollo Block I A1C
  • Apollo A7L (worn during the Apollo 11 moon landing)
  • Orlan suits (the current Russian suits)
  • Advanced Crew Escape Suit (used during ascent and re-entry)
  • EMU (Extravehicular Mobility Unit, used for EVA in Earth orbit)

Thus, in general, suits have differed according to the model, the purpose, and the era in which they were developed. Space suits have not stagnated in design, capability, or purpose; they continue to be improved as time goes on and as private companies such as SpaceX join space exploration.


Parker Solar Probe to launch in 2018

by Negash Alamin

Formerly called Solar Probe Plus, and now named the Parker Solar Probe after Eugene N. Parker, professor at the University of Chicago, the probe is designed to travel directly through the Sun's outer atmosphere, the source of the solar wind, passing within about 6.4 million km (about 4 million miles) of the Sun's surface. The data collected will augment our understanding of space weather and of stellar function, that is, how stars work.
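For context, here is a small unit-conversion sketch (approximate figures of my own, not NASA's exact mission values) expressing that closest approach in other units.

```python
# A small unit-conversion sketch (approximate figures) to put the probe's
# closest approach in context.

CLOSEST_APPROACH_KM = 6.44e6      # ~4 million miles from the Sun's surface
SUN_RADIUS_KM = 695_700           # nominal solar radius, km
AU_KM = 149_597_870.7             # one astronomical unit, km

print(f"~{CLOSEST_APPROACH_KM / 1.609e6:.1f} million miles")
print(f"~{CLOSEST_APPROACH_KM / SUN_RADIUS_KM:.1f} solar radii above the surface")
print(f"~{CLOSEST_APPROACH_KM / AU_KM:.3f} AU from the Sun")
```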

According to NASA, the probe is scheduled to launch on July 31, 2018 from Cape Canaveral Air Force Station, Florida. It was designed at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland. The probe will be exposed to more severe heat from our star than any previous spacecraft, and so it carries a heat shield, or thermal protection system, made of 11.43-centimeter-thick carbon composite material that will reach about 1,371 degrees Celsius while in operation.

The Sun is a ball of hot plasma about 1.39 million kilometers across, and it has its own atmosphere, the heliosphere, which encompasses the whole solar system. Remarkably, the Sun accounts for 99.8% of the entire mass of our solar system. The heliosphere is generated by the Sun's constant outward expulsion of the solar wind and can reach some 20 billion miles from the Sun. The environment inside our solar system is thus different from the environment outside it, in what is called interstellar space. Generally, the solar wind consists of ionized particles from the Sun's outer layer, the corona, together with magnetic fields; it is ejected away from the Sun in all directions at extreme, supersonic speeds.

The temperature inside the star is about 15 million degrees Celsius, and the energy from this interior rises to the outer layers of the Sun by convection; this convection current is responsible for the magnetic fields of the star. The hot gas in the corona remains entangled in the Sun's magnetic fields, forming the loop-like structures visible in images and videos of solar flares, before bursting free to become the solar wind.

As the solar wind approaches interstellar space its velocity drops; this boundary is called the termination shock. The wind then passes through a region called the heliosheath and finally reaches the heliopause. Beyond our heliosphere, that is, beyond the heliopause, interstellar space, the space between the stars of our galaxy, is filled with plasma and radiation from other stars at levels higher than those found inside the Sun's heliosphere.

In short, instruments like the Parker Solar Probe will help us understand our astonishing and still mysterious solar environment better and more closely than ever before, which will aid us in planning other missions and, most importantly, in answering several fundamental questions about our physical reality.


Bionics

by Prof. Michio Kaku

 

Is it possible to interface directly with the brain, to harness its fantastic capability?

Scientists are proceeding to explore this possibility with remarkable speed. The first step in attempting to exploit the human brain is to show that individual neurons can grow and thrive on silicon chips. Then the next step would be to connect silicon chips directly to a living neuron inside an animal, such as a worm.

One then has to show that human neurons can be connected to a silicon chip. Last (and this is by far the most difficult part), in order to interface directly with the brain, scientists would have to decode the millions of neurons which make up our spinal cord.

In 1995, a big step was taken by a team of biophysicists led by Peter Fromherz at the Max Planck Institute of Biochemistry just outside Munich. They announced that they had successfully created a juncture between a living leech neuron and a silicon chip. In a dramatic breakthrough, scientists have been able to weld “hardware” with “wetware.” Their remarkable research has demonstrated that a neuron can fire and send a signal to a silicon chip, and that a silicon chip can make the neuron fire. Their methods should work for human neurons as well.

 

Of course, neurons are frustratingly thin and delicate, much thinner than a human hair. And the voltages used in experiments would often damage or kill the neurons. To solve the first problem, Fromherz used the neurons from leech ganglia (nerve bundles), which are quite large, about 50 microns across (half the diameter of a human hair). To solve the voltage problem, he brought the leech neurons, using microscopes and computer-controlled micromanipulators, to within 30 microns of a transistor on a chip.

By doing so, he was able to induce signals across this 30 micron gap without exchanging any charges whatsoever. (For example, if you vigorously rub a balloon and place it next to running water, the stream of water will bend away from the balloon without ever touching it. Likewise, the neuron never touches the silicon.)

This has paved the way to developing silicon chips that can control the firing of neurons at will, which in turn could control muscle movements. So far, Fromherz has been able to make as many as sixteen contact points between a chip and a single neuron. His next step is to use the neurons from the hippocampus of rat brains. Although they are much thinner than leech neurons, they live for months, while leech neurons last only for a matter of weeks.

Another step in trying to grow neurons on silicon was achieved in 1996. Richard Potember at Johns Hopkins University succeeded in coaxing the neurons of baby rats to grow on a silicon surface which was painted with certain peptides. These neurons sprouted dendrites and axons, just like ordinary neurons.

The ultimate aim of his group is to grow neurons so their axons and dendrites follow predetermined paths that can create “living circuits” on the silicon surface. If successful, it might allow neurons to conform to the architecture of a logic circuit in a chip. The doctors at the Harvard Medical School’s Massachusetts Eye and Ear Infirmary have already begun taking the next step: getting a team together to build the “bionic eye.” The group expects to conduct human studies with computer chips implanted into the human eye within five years. If successful, they may be able to restore vision for the blind in the twenty-first century.

“We have developed the electronics, we have learned how to put a device into the eye without hurting the eye, and we have demonstrated that the materials are biocompatible,” says Joseph Rizzo. They are designing an implant consisting of two chips, one of which contains a solar panel. Light striking the solar panel will start up a laser beam, which then hits the second panel and sends a message down the wire to the brain. A bionic eye would be of enormous help for the blind who have a damaged retina but whose connection to the brain is still intact. Ten million Americans, for example, suffer from macular degeneration, the most common form of blindness among the elderly. Retinitis pigmentosa, an inherited form of blindness, affects another 1.2 million.

Already, studies have shown that damaged cones and rods in animal retinas can be electrically stimulated, creating signals in the visual cortex of the animal’s brain. This means that, in principle, it may one day be possible to connect directly to the brain artificial eyes which have greater visual acuity and versatility than our own eye. Our eye is essentially the eye of an ape; it can see only certain colors that apes can see, and cannot see colors which are visible to other animals (for example, bees see ultraviolet radiation from the sun, which is used in their search for flowers). But an artificial eye could be constructed with superhuman capabilities, such as telescopic and microscopic vision, or the ability to see infrared and ultraviolet radiation. Thus at some point it may be possible to develop artificial eyesight that exceeds the capability of normal eyesight.

In the world beyond 2020, we may be able to connect silicon microprocessors with artificial arms, legs, and eyes directly to the human nervous system, which would be of enormous help in aiding people with disabilities. But although it may be possible to connect the human body to a powerful mechanical arm, the stunts we saw on the TV show The Six Million Dollar Man would place intolerable stresses on our skeletal system, rendering most superhuman feats impossible. To have superhuman strength would require superhuman skeletal systems that can absorb the shock and stress of such feats.

—Visions, Michio Kaku, 1999, p. 112

From 2020 to 2050: Growing New Organs

by Prof. Michio Kaku

 

But even if age genes do exist and we can alter them, will we suffer the curse of Tithonus, who was doomed to live forever in a decrepit body? It is not clear that altering our age genes will reinvigorate our bodies. What is the use of living forever if we lack the mind and body to enjoy it?

A recent series of experiments shows that it may one day be possible to “grow” new organs in our body to replace worn-out organs. A number of animals, such as lizards and amphibians, are able to regenerate a lost leg, arm, or tail. Mammals, unfortunately, do not possess this ability, but the cells of our bodies, in principle, have, locked in their DNA, the genetic information to regenerate entire organs.

In the past, organ transplants in humans have faced a long list of problems, the most severe being rejection by our immune system. But, using bioengineering, scientists can now grow strains of a rare type of cell, called the “universal donor cells,” which do not trip our immune system into attacking them. This has made possible a promising new technology which can “grow” organ parts, as demonstrated by Joseph P. Vacanti of the Children’s Hospital in Boston and Robert S. Langer of MIT.

To grow organs, scientists first construct a complex plastic “scaffolding” which forms the outlines of the organ to be grown. Then these especially bioengineered cells are introduced into the scaffolding. As the cells grow into tissue, the scaffolding gradually dissolves, leaving healthy new tissue grown to proper specifications. What is remarkable is that the cells have the ability to grow and assume the correct position and function without a “foreman” to guide them. The “program” which enables them to assemble complete organs is apparently contained within their genes.

This technology has already been proven in growing artificial heart valves for lambs, using a biodegradable polymer, polyglycolic acid, as the scaffolding. The cells which seeded the scaffolding were taken from the animals’ blood vessels. The cells “took” to the scaffolding like children to a jungle gym.

In the past few years, this approach has been used to grow layers of human skin for use in skin grafts for burn patients. Skin cells grown on polymer substrates have been grafted onto burn patients, as well as onto the feet of diabetic patients, which must often be amputated for lack of circulation. This may eventually revolutionize the treatment of people with severe skin problems. As Marie Burk of Advanced Tissue Sciences says: “We can grow about six football fields from one neonatal foreskin.”

Human organs such as an ear have actually been grown inside animals as well. The scientists at MIT and the University of Massachusetts recently were able to overcome the rejection problem and (painlessly) grow a human ear inside a mouse. The scaffolding of a life-sized human ear was made of a porous, biodegradable polymer and then tucked under the skin of a specially bred mouse whose immune system was suppressed. The scaffolding was then seeded with human cartilage cells, which were then nourished by the blood of the mouse. Once the scaffolding dissolved, the mouse produced a human ear. Eventually, scientists should be able to grow this ear without the aid of the mouse. This could open up an entirely new area of “tissue engineering.”

Already other experiments have been done which show that noses can also be generated. Scientists have used computer-aided contour mapping to create the scaffolding and cartilage cells to seed the scaffolding.

Now that the technology has been shown to be effective on a small scale, the next step will be to grow entire organs, such as kidneys. Walter Gilbert predicts that within about ten years, growing organs like livers may become commonplace. One day, it may be possible to replace breasts removed in mastectomies with tissue grown from one’s own body.

Recently, a series of breakthroughs were made to grow bone, which is important since bone injuries are common among the elderly and there are more than two million serious fractures and cartilage injuries per year in the United States. Using molecular biology, scientists have isolated twenty different proteins which control bone growth. In many cases, both the genes and the proteins for bone growth have been identified. These proteins, called bone morphogenic proteins (BMP), instruct certain undifferentiated cells to become bone. In one experiment, twelve dental patients with severe bone loss in the upper jaw were successfully treated with BMP-2. (Normally, doctors would have to harvest bone from the patient’s own hip, a complicated procedure which requires surgery.)

The ultimate goal of this technology would be to grow a complex organ, such as the hand. Although this may still be decades away, it is within the realm of possibility. The step-by-step outline of such a complex process has already been mapped out.

First, the biodegradable scaffolding for the hand must be constructed, down to the microscopic details of the ligaments, muscles, and nerves. Then bioengineered cells which grow various forms of tissue would have to be introduced. As the cells grow, the scaffolding would gradually dissolve. Since blood is not yet circulating, mechanical pumps would have to provide nutrients and remove wastes during the growing process. Next, the nerve tissue would have to be grown. (Nerve cells are notoriously difficult to regenerate. However, in 1996 it was demonstrated that the severed nerve cells in mice’s spinal cords can actually regenerate across the cut.) Last, surgeons would have to connect the nerves, blood vessels, and lymph system. It is estimated that the time needed to grow such a complex organ as the hand may be as little as six months.

In the future, we may therefore expect to see a wide variety of human replacement parts becoming commercially available from now to 2020, but only those which do not involve more than just a few types of tissue or cells, such as skin, bone, valves, the ear, the nose, and perhaps even organs like livers and kidneys. Either they will be grown from scaffolding, or else from embryonic cells.

From the period 2020 to 2050, we may expect more complex organs and body parts containing a wide variety of tissue cells to be duplicated in the laboratory. These include, for example, hands, hearts, and other complex internal organs. Beyond 2050, perhaps every organ in the body will be replaceable, except the brain.

Of course, extending our life span is only one of many ancient dreams. Yet another, even more ambitious one is to control life itself, to make new organisms that have never before walked the earth. In this area, scientists are rapidly approaching the ability to create new life forms.

In summary, we may see ageing research growing in several phases. First, hormones and anti-oxidants may be able to retard the ageing process, but not stop it. Second, rapid advances in genetic research may unlock the secret of cell ageing itself. For example, in 1998, a breakthrough was made when telomerase, mentioned in Chapter 8, was shown to stop ageing in human skin cells in a petri dish. In this phase, many more age genes may be isolated and shown to control the rate of cell ageing. And lastly, human organ replacement may become standard therapy in the 21st century.

—Visions, Michio Kaku, 1999, p. 217

Time Travel

by Prof. Michio Kaku

Not only was Einstein aware of the strange behavior of the Einstein-Rosen bridge, he also realized that his equations allowed for time travel. Because space and time are so intricately related, any wormhole that connects two distant regions of space can also connect two time eras.

To understand time travel, consider first that Newton thought that time was like an arrow. Once fired, it traveled in a straight line, never deviating from its path. Time never strayed in its uniform march throughout the heavens. One second on the earth equaled one second on the moon or Mars.

However, Einstein introduced the idea that time was more like a river. It meanders through the universe, speeding up and slowing as it encounters the gravitational field of a passing star or planet. One second on the earth is different from one second on the moon or Mars. (In fact, a clock on the moon beats slightly faster than a clock on earth.)
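That parenthetical claim can be estimated with a short calculation. The sketch below (my illustration, not Kaku's) keeps only the gravitational part of the effect, using the surface potential GM/R of each body; orbital motion would modify the number somewhat.

```python
# A minimal sketch of gravitational time dilation only (velocity effects
# ignored): fractional rate difference ~ (Phi_earth - Phi_moon) / c^2.

G = 6.674e-11   # gravitational constant, SI units
C = 2.998e8     # speed of light, m/s

def surface_potential(mass_kg, radius_m):
    """Magnitude of the surface gravitational potential GM/R."""
    return G * mass_kg / radius_m

phi_earth = surface_potential(5.972e24, 6.371e6)
phi_moon = surface_potential(7.342e22, 1.737e6)

fractional = (phi_earth - phi_moon) / C ** 2
print(f"Moon clock runs faster by ~{fractional:.1e} (fractional rate)")
print(f"That is roughly {fractional * 86_400 * 1e6:.0f} microseconds per day")
```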

The new wrinkle on all of this which is generating intense interest is that the river of time can have whirlpools that close in on themselves or can fork into two rivers. In 1949, for example, mathematician Kurt Gödel, Einstein’s colleague at the Institute for Advanced Study at Princeton, showed that if the universe were filled with a rotating fluid or gas, then anyone walking in such a universe could eventually come back to the original spot, but displaced backward in time. Time travel in the Gödel universe would be a fact of life.

Einstein was deeply troubled by the Einstein-Rosen bridge and the Gödel time machine, for they suggested there might be a flaw in his theory of gravity. Finally, he concluded that both could be eliminated on physical grounds: anyone falling into the Einstein-Rosen bridge would be killed, and the universe does not rotate; it expands, as in the Big Bang theory. Mathematically, wormholes and time machines were perfectly consistent. But physically, they were impossible.

However, after Einstein’s death, so many solutions of Einstein’s equations have been discovered that allow for time machines and wormholes that physicists are now taking them seriously. In addition to the rotating universe of Gödel and the spinning black hole of Kerr, other configurations that allow for time travel include an infinite rotating cylinder, colliding cosmic strings, and negative energy.

Time machines, of course, pose all sorts of delicate issues involving cause and effect, i.e., time paradoxes. For example, if a hunter goes back in time to hunt dinosaurs and accidentally steps on a rodent-like creature who happens to be the direct ancestor of all humans, does the hunter disappear? If you go back in time and shoot your parents before you are born, your existence is an impossibility.

Another paradox occurs when you fulfill your past. Let’s say that you are a young inventor struggling to build a time machine. Suddenly, an elderly man appears before you and offers the secret of time travel.

He gives you the blueprints for a time machine on one condition: that when you become old, you will go back in time and give yourself the secret of time travel. Then the question is: where did the secret of time travel come from? The answer to all of these paradoxes ultimately may lie in the quantum theory.

 

 Problems with Wormholes and Time Machines 

 

Although time machines and wormholes are allowed by Einstein’s theory, this does not mean that they can be built. Several major hurdles would have to be crossed to build such a device. First, the energy scale at which these space-time anomalies can occur is far beyond anything attainable on earth. The amount of energy is on the order of the Planck energy, or 10 to the power of 19 billion electron volts, roughly a quadrillion times the energy of the now-canceled Superconducting Supercollider.
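The quoted figure can be reproduced directly from the fundamental constants. The short sketch below (mine, not from the book) computes the Planck energy E_P = sqrt(ħc⁵/G).

```python
# A sketch computing the Planck energy from fundamental constants,
# to show where the figure quoted above comes from.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
EV = 1.602176634e-19     # joules per electron volt

planck_energy_j = math.sqrt(HBAR * C ** 5 / G)
planck_energy_gev = planck_energy_j / EV / 1e9

print(f"Planck energy: ~{planck_energy_j:.2e} J  (~{planck_energy_gev:.1e} GeV)")
# ~1.2e19 GeV, i.e. "10 to the power of 19 billion electron volts".
```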

In other words, wormholes and time machines might be built by advanced Type I or more likely Type II civilizations, which can manipulate energy billions of times larger than what we can generate today. (Thinking about this, I could imagine how Newton must have felt three centuries ago. He could calculate how fast you had to leap to reach the moon. One had to attain an escape velocity of 25,000 miles per hour. But what kind of vehicles did Newton have back in the 1600s? Horses and carriages. Such a velocity must have seemed beyond imagination.

The situation is similar today. We physicists can calculate that all these distortions of space and time occur if you attain the Planck energy. But what do we have today? “Horses and carriages” called hydrogen bombs and rockets, far too puny to reach the Planck energy.)

Another possibility is to use “negative matter” (which is different from antimatter). This strange form of matter has never been seen. If enough negative matter could be concentrated in one place, then conceivably one might be able to open up a hole in space. Traditionally, negative energy and negative mass were thought to be physically impossible. But recently the quantum theory has shown that negative energy is, in fact, possible. The quantum theory states that if we take two parallel uncharged metal plates separated by a space, the vacuum between them is not empty, but is actually frothing with virtual electron-antielectron annihilations. The net effect of all this quantum activity in the vacuum is to create the “Casimir effect” i.e., a net attraction between these uncharged plates. Such an attraction has been experimentally measured. If one can somehow magnify the Casimir effect, then one can conceivably create a crude time machine.
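As a rough illustration of how strong this vacuum attraction becomes at small separations, here is a minimal sketch (mine, not Kaku's) of the ideal parallel-plate Casimir pressure, P = π²ħc/(240d⁴); real plates and geometries differ in detail.

```python
# A minimal sketch of the ideal Casimir pressure between two perfectly
# conducting parallel plates, P = pi^2 * hbar * c / (240 * d^4), showing
# how quickly the attraction grows as the gap d shrinks.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_pressure(gap_m):
    return math.pi ** 2 * HBAR * C / (240 * gap_m ** 4)

for gap in (1e-6, 1e-7, 1e-8):   # from 1 micron down to 10 nanometres
    print(f"gap {gap * 1e9:6.0f} nm -> pressure ~{casimir_pressure(gap):.2e} Pa")
# ~1e-3 Pa at 1 micron, but ~1e5 Pa (roughly atmospheric) at 10 nm.
```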

In one proposal, a wormhole could connect two sets of Casimir plates. If someone were to fall between one set of Casimir plates, he would be instantly transported to the other set. If the plates were displaced in space, then the system could be used as a warp drive system. If the plates were displaced in time, then the system would act as a time machine.

But the last hurdle faced by these theories is perhaps the most important: they may not be physically stable. It is believed by some physicists that quantum forces acting on the wormhole may destabilize it, so that the opening closes up. Or the radiation coming from the wormhole as we enter it may be so great that it either kills us or closes up the wormhole. The problem is that Einstein’s equations become useless at the instant when we enter the wormhole. Quantum effects overwhelm gravity.

To resolve this delicate question of quantum corrections to the wormhole takes us to an entirely new realm. Ultimately, solving the problem of warp drive, time machines, and quantum gravity may involve solving the “theory of everything.” So in order to determine whether wormholes are really stable, and to resolve the paradoxes of time machines, one must factor in the quantum theory. This requires an understanding of the four fundamental forces.

—Visions, Michio Kaku, 1999, p. 342

 

 

Why Does Saturn Have Rings?

by Negash Alamin

Saturn has a fascinating visage. It is not the only planet with rings, but its rings are the brightest and the biggest, larger than those of Uranus, Jupiter, and Neptune. Galileo was the first person to observe its rings, in 1610. The rings are made of ice and rocks of different sizes. Their formation has something to do with Saturn's moons: the rings may have developed from the broken pieces of moons, comets, or asteroids. On close examination the planet appears to have seven main rings, designated by letters of the alphabet. The rings revolve around the planet at high velocity, and closer examination reveals that the bigger rings are made of smaller rings called ringlets.

Saturn is much bigger than our tiny home planet. The spacecraft sent to study it, Cassini, launched in 1997, has provided much insight into the planet. The project was a collaboration between the European Space Agency, the Italian Space Agency, and NASA to send a probe to study the planet and its system. It was the fourth spacecraft to visit Saturn and was named after the astronomers Giovanni Cassini and Christiaan Huygens.

 

[Figure: ring sizes]

The densest parts of Saturn's ring structure are the A and B rings, which are separated by the Cassini Division (a gap about 4,800 km wide between rings A and B, discovered by G. Cassini), together with the C ring. These rings are denser and contain larger particles. Other rings include the D ring, which extends in toward the planet and is therefore the innermost ring. The D, G, and E rings are fainter because of the small size of the particles that make them up. Note that the naming of these rings is not exclusive; there are several potentially confusing numerical names, even for a single ring. Scientists divide the D ring, for example, into three ringlets, D73, D72, and D68, with D68 the closest to the planet. We need to follow the latest research on these structures, and the names given could change as the ring system evolves.

In general, Saturn's ring system is the largest and most conspicuous, with a thickness of about 1 km or less and a horizontal extent of about 282,000 km, which means that you could line up about 22 Earths across it; that is how big the ring structure is. As noted above, the rings are named alphabetically in order of their discovery, giving us the main rings C, B, and A from the inside of the planet outward. There may be 500 to 1,000 rings in the system, with gaps inside it where tiny moons orbit and keep those gaps open. There is also what has been called the F ring, a feature just outside the outermost main ring, A, and beyond that there are additional fainter rings called G and E. Their structure is related to and intertwined with the gravitation of Saturn's moons and other forces that are still under study.
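A quick arithmetic check of the "22 Earths" comparison (my calculation, not the author's):

```python
# A quick check of the "22 Earths" figure quoted above.

RING_SPAN_KM = 282_000       # radial extent quoted for the main ring system
EARTH_DIAMETER_KM = 12_742   # mean diameter of the Earth

print(f"{RING_SPAN_KM / EARTH_DIAMETER_KM:.1f} Earth diameters fit across it")
# ~22.1 -- consistent with the figure in the text.
```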

October 6, 2017

