Satellites Essay

The satellite is probably the most useful invention since the wheel. Satellites let you talk with someone across the nation or close a business deal through video communication. Almost everything today is heading toward the use of satellites, starting with telephones: AT&T has operated communications satellites since the early 1960s. TVs and radios are also turning to satellites, and RCA and Sony have released satellite dishes for radio and television services. New technology even allows the military to use satellites as weapons.

One proposed "ion cannon" concept is a satellite that could fire a particle beam anywhere on Earth; claims that such a weapon exists, or that it could trigger earthquakes, remain speculation. Satellites can also be used for image enhancement, which in principle lets you zoom in on someone's nose hairs all the way from space. Robert Goddard was one of the most important pioneers behind the satellite. He was born on October 5, 1882, and earned his master's and doctoral degrees in physics at Clark University. He conducted research on improving solid-propellant rockets, but he is best known for firing the world's first successful liquid-propellant rocket on March 16, 1926.

This was a simple pressure-fed rocket that burned gasoline and liquid oxygen. It traveled only 56 m (184 ft), but it proved to the world that the principle was valid. Goddard died on August 10, 1945. Goddard did not work in isolation; his experiments built on the ideas of a Russian theorist named Konstantin Tsiolkovsky. Tsiolkovsky was born on September 17, 1857. As a child he educated himself, eventually becoming a high-school mathematics teacher in the small town of Kaluga, 145 km (90 mi) south of Moscow. In his early years Tsiolkovsky caught scarlet fever and lost most of his hearing.

Together, the theoretical work of the Russian Konstantin Tsiolkovsky and the experimental work of the American Robert Goddard confirmed that a satellite could be launched by means of a rocket. I chose the satellite to research because so many things, such as computers, TVs and telephones, rely on satellites, and I thought it would be a good idea to learn how they work and the history behind them before we come to depend on them even more. I also picked the satellite because my life would be different without it. For instance, the Internet, or World Wide Web, would run very slowly or would cease to exist altogether.

We wouldn't be able to talk to people across the world, because telephone wires would have to stretch across the Atlantic, and even if they did, the reception would be horrible. We wouldn't know what the weather will be like on Earth, or what the stars and planets are like in space. We wouldn't be able to watch live television premieres across the country, because all of those are carried via satellite. A satellite is a secondary object that revolves in a closed orbit around a planet or the sun; an artificial satellite is placed in orbit around the Earth for scientific research, Earth applications, or military reconnaissance.

All artificial satellites share certain features in common. These include radar for altitude measurements, sensors such as optical devices in observation satellites, receivers and transmitters in communication satellites, and stable radio-signal sources in navigation satellites. Solar cells generate power from the sun, and storage batteries carry the load during the periods when the Earth blocks the satellite from the sun; the batteries are in turn recharged by the solar cells. The Russians launched Sputnik 1 on October 4, 1957, the first satellite ever placed in space.
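The interplay between solar cells and storage batteries can be sketched with a quick back-of-the-envelope calculation. The orbit altitude, the 500 W load, and the cylindrical-shadow approximation below are illustrative assumptions of mine, not figures from this essay:

```python
import math

# Illustrative assumptions: a 500 km circular low Earth orbit, 500 W load.
MU_EARTH = 3.986e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6       # Earth's mean radius, m
altitude = 500e3        # orbit altitude, m (assumed)
load_w = 500.0          # spacecraft power draw, W (assumed)

r = R_EARTH + altitude
period_s = 2 * math.pi * math.sqrt(r**3 / MU_EARTH)

# Cylindrical-shadow approximation: the orbit arc inside Earth's shadow
# subtends a half-angle beta with sin(beta) = R_earth / r.
beta = math.asin(R_EARTH / r)
eclipse_s = (beta / math.pi) * period_s

battery_wh = load_w * eclipse_s / 3600  # energy the batteries must supply

print(f"orbital period : {period_s / 60:.1f} min")
print(f"eclipse time   : {eclipse_s / 60:.1f} min per orbit")
print(f"battery energy : {battery_wh:.0f} Wh per eclipse")
```

For this assumed orbit the satellite spends over a third of each roughly 95-minute revolution in shadow, which is why the batteries matter so much.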

The United States followed by launching Explorer 1 on January 31, 1958. In the years that followed, more than 3,500 satellites were launched, as counted by the end of 1986. One physicist remarked that "if you added up all the radio waves sent and received by satellites, it wouldn't equal the energy of a snowflake hitting the ground." Early satellites were built and tested on the ground, then placed atop a rocket and launched into space, where they were released into orbit. The rocket would then become space junk, and the owner of the satellite would lose a tremendous amount of money.

Now that NASA has developed the space shuttle, several satellites can be launched from a single shuttle mission, and the shuttle can then land to be reused, saving money. The shuttle can also retrieve a satellite from orbit and bring it down to Earth for repair or disposal. Once a satellite is released from the shuttle, its antenna receives a signal from Earth that activates its rockets to move it into orbit. Once in orbit, the antenna receives another signal telling the satellite to deploy its solar panels.

Then the control center on Earth uploads a program to the satellite telling it to use its sensors to maintain a stable orbit. The satellite then picks a target point on Earth and stays above that point for the remainder of its life. Once a satellite shuts down, the uploaded program tells it to fold up its solar panels and remain in orbit. Some time after shutdown, a space shuttle can pick up the satellite for repairs or replacement of its cells.
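A satellite that stays above one point on Earth is in a geostationary orbit, and its altitude follows directly from Kepler's third law. A minimal sketch (the constants are standard values, not from this essay):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6           # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.1      # seconds for one rotation of the Earth

# Kepler's third law: T^2 = 4*pi^2 * a^3 / mu, solved for the orbit radius a.
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000

print(f"geostationary altitude is about {altitude_km:,.0f} km")
```

The result, roughly 36,000 km above the equator, is where communications satellites of the kind described here are parked.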

As you can see, the satellite is a very complicated piece of technology, but its capabilities are endless. By the end of the year 2000, there will be an estimated 7,000 satellites in orbit – roughly one satellite for every 36,000 people. Satellites are becoming more and more useful as technology advances. Computers are turning toward the Internet, telephones are turning toward video communication, and televisions are looking for better cable services. So as long as satellites orbit the Earth, you might as well take advantage of them now, before it's too late.

Uranus – mysterious blue-green planet

At 2,870,990,000 km (19.218 AU) from the Sun, Uranus hangs on the wall of space as a mysterious blue-green planet. With a mass of 8.683e25 kg and a diameter of 51,118 km at the equator, Uranus is the third largest planet in our solar system. It has been described as a planet that was slugged a few billion years ago by a large onrushing object, knocked down (never to get up), and now proceeds to roll around an 84-year orbit on its belly. For the strangest of the Jovian planets, the description is accurate.
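The mass and diameter quoted above are enough to recover Uranus's bulk density, which shows why it is grouped with the low-density giants rather than the rocky planets:

```python
import math

mass_kg = 8.683e25          # mass of Uranus, from the text
diameter_m = 51_118e3       # equatorial diameter, from the text

# Treat the planet as a sphere of the equatorial radius (a slight
# overestimate of volume, since Uranus is flattened at the poles).
radius = diameter_m / 2
volume = 4 / 3 * math.pi * radius**3
density = mass_kg / volume   # kg/m^3

print(f"bulk density is about {density:.0f} kg/m^3")
```

The answer, around 1,250 kg/m^3, is barely denser than water, consistent with the ice-and-rock composition discussed below.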

Uranus has a 17-hour, 14-minute day and takes 84 years to make its way around the sun, with an axis tilted at around 98 degrees and retrograde rotation. Stranger still is the fact that Uranus' axis is almost parallel to the ecliptic, hence the expression "on its belly." Uranus is so far away that scientists knew comparatively little about it before NASA's Voyager 2 undertook its historic first encounter with the planet. The spacecraft flew closely past distant Uranus, coming within 81,500 kilometers (50,600 miles) of Uranus's cloudtops on Jan. 24, 1986.

Voyager 2 radioed back thousands of images and vast amounts of other scientific data about Uranus, its moons, rings, atmosphere, interior and magnetic environment. However, while Voyager revealed much about the gas giant, many questions remain to be answered. The history of the planet's discovery was itself a first: Uranus was the first planet to be discovered with a telescope. The circumstances surrounding the discovery are befitting of such an odd planet. The earliest recorded sighting of Uranus was in 1690 by John Flamsteed, but the object was catalogued as just another star.

On March 13, 1781 Uranus was sighted again by amateur astronomer William Herschel and thought to be a comet or nebulous star. In 1784, Jean-Dominique Cassini, director of the Paris Observatory and a prominent professional astronomer, made the following comment: 'A discovery so unexpected could only have singular circumstances, for it was not due to an astronomer and the marvelous telescope was not the work of an optician; it is Mr. Herschel, a [German] musician, to whom we owe the knowledge of this seventh principal planet.' (Hunt, 35) Four years passed before Uranus was recognized as a new planet, the first to be discovered in 'modern' times.

The discovery poses an interesting question, however. Why Herschel, and not someone like Cassini – the director of a prominent observatory? It was no accident that Herschel discovered the first new planet. He had more than a passing fancy for the telescope: by purchasing the materials and even grinding the mirrors himself, he built telescopes (mainly reflectors) of exceptional quality for the period. That quality afforded Herschel better observing conditions than his contemporaries, and the result was a changed view of astronomy.

A new planet had been discovered, and our view of the solar system would never be the same again. The atmosphere and geology of this first new planet are fascinating. Uranus is primarily composed of rock and various ices, with only about 15% hydrogen and a little helium – in contrast to the compositions of Jupiter and Saturn, which are mostly hydrogen. Uranus' average temperature is around 60 Kelvin (-350 degrees Fahrenheit), and the atmosphere is 83% hydrogen, 15% helium and 2% methane. The blue color we see results from the absorption of red light by methane in the upper atmosphere.

There may be colored bands like Jupiter’s but they are hidden from view by the overlaying methane layer. Just below the clouds visible to earthbound observers are enormous quantities of ammonia, hydrogen sulfide, and water. Still deeper inside Uranus, under the crushing weight of the overlying atmosphere, is an invisible rocky surface – discovered only by its subtle tugs on the planet’s moons. A big Earth-sized planet is hiding down there, swathed in an immense blanket of air. Like the other gas giants, Uranus has bands of clouds that blow around rapidly.

However, they are extremely faint, visible only with radical image enhancement of the Voyager 2 pictures. Recent observations made with the Hubble Space Telescope show larger and more pronounced streaks, and in the past two years the speculation has been that the difference is due to seasonal effects. The speed of the winds on Uranus is changing, and while that is not exciting for a person inhabiting the Earth and used to its changeable weather, the news is noteworthy for a gas giant: the winds of Jupiter and Saturn have remained constant over time.

The winds of Uranus blow at velocities of 40 to 160 meters per second (90 to 360 mph), whereas on Earth jet streams in the atmosphere blow at only about 50 meters per second (110 mph). Astronomers are excited that these observations could foreshadow dramatic atmospheric changes in the future. Compared with recent pictures from space, black-and-white drawings of Uranus rendered by visual astronomers in the early 1900s depict a vastly different planet, decorated with bright, broad bands, and even the hint of something that might be a great spot.

Significantly, those drawings were made at a time when Uranus was between its solstice and its equinox, the same phase the planet is approaching now. There is more to the puzzling features of Uranus than changing winds. Data from Voyager 2 indicate that Uranus' magnetic field is not centered on the midpoint of the planet and is tilted at nearly 60 degrees with respect to the axis of rotation. The magnetic field of Uranus – roughly comparable to Earth's – is not produced by an iron core as in other planets.

The magnetic field source is unknown; the electrically conductive, super-pressurized ocean of water and ammonia once thought to lie between the core and the atmosphere now appears to be nonexistent. The magnetic fields of Earth and other planets are believed to arise from electrical currents produced in their molten cores, but if Uranus possessed one, it would be too small and too deep for it to create such a magnetic field. As with Mercury, Earth, Jupiter and Saturn, there is a magnetic tail extending millions of miles behind Uranus.

Voyager measured the tail to extend at least 10 million kilometers (6.2 million miles) behind the planet. The extreme tilt of the magnetic axis, combined with the tilt of the rotational axis, causes the field lines in this cylindrical magnetic tail to be wound into a corkscrew shape that spins behind the planet like a lawn sprinkler. The exotic magnetosphere of Uranus is contrasted by the planet's rather mundane ring system. Like the other gas planets, Uranus has rings.

They are very dark in color like Jupiter’s, but more like Saturn’s rings in size and composition with both fine dust and large particles ranging up to 10 meters in diameter. There are 11 known rings, all relatively faint, the brightest of which is known as the Epsilon ring. The Uranian rings were the first after Saturn’s to be discovered – which was of considerable importance since we now know that rings are a more common feature of planets than first thought, and not a peculiarity of Saturn alone.

All nine of the previously known rings of Uranus were photographed and measured by Voyager 2, as were other new rings and ringlets in the Uranian system. These observations showed that while Uranus’s rings shared similarities with the systems of Jupiter and Saturn, they are also distinctly different. Radio measurements from Voyager 2 showed the outermost ring, the epsilon, to be composed mostly of ice boulders several feet across. However, a very tenuous distribution of fine dust also seems to be spread throughout the ring system.

Incomplete rings and the varying opacity in several of the main rings lead scientists to believe that the ring system may be relatively young and did not form at the same time as Uranus. The particles that make up the rings may be remnants of a moon that was broken apart by a high-velocity impact or torn up by gravitational effects. To date, two new rings have been positively identified. The first, 1986 U1R, was detected between the outermost of the previously known rings – epsilon and delta – at a distance of 50,000 kilometers (31,000 miles) from Uranus's center.

It is a narrow ring like the others. The second, designated 1986 U2R, is a broad region of material perhaps 3,000 kilometers (1,900 miles) across and just 39,000 kilometers (24,000 miles) from the planet’s center. The number of known rings may eventually grow as a result of observations by the Voyager 2 photopolarimeter instrument. The sensor revealed what may be a large number of narrow rings – or possibly incomplete rings or ring arcs – as small as 50 meters (160 feet) in width. The individual ring particles are not very reflective, which explains why some have remained unseen.

At least one ring, the epsilon, was found to be gray, an unusual color. This ring is surprisingly deficient in particles smaller than roughly the size of a beach ball, whereas the average ring also contains relatively dust-sized particles. This may be due to drag from the planet's extended hydrogen atmosphere, which may siphon smaller particles and dust out of the ring. The sharp edge of the epsilon ring indicates that the ring is less than 150 meters (500 feet) thick and that particles near the outer edge are less than 30 meters (100 feet) in diameter.

Important clues to Uranus’s ring structure may come from the discovery that two small moons – Cordelia and Ophelia – straddle the epsilon ring. This finding hints that small moonlets may be responsible for confining or deflecting material into rings and keeping it from escaping into space. Astronomers expected to find 18 such satellites, but only two were photographed. The satellites of Uranus form two distinct classes: the 10 small very dark inner ones discovered by Voyager 2 and the five large outer ones.

They all have nearly circular orbits in the plane of Uranus’ equator (and hence at a large angle to the plane of the ecliptic). Voyager 2 obtained clear, high-resolution images of each of the five large moons of Uranus known before the encounter: Miranda, Ariel, Umbriel, Titania and Oberon. The two largest, Titania and Oberon, are about 1,600 kilometers (1,000 miles) in diameter, roughly half the size of Earth’s Moon. The smallest, Miranda, is only 500 kilometers (300 miles) across, or just one-seventh the lunar size.

The 10 new moons discovered by Voyager bring the total number of known Uranian satellites to 15. The largest of the newly detected moons, named Puck, is about 150 kilometers (about 90 miles) in diameter, larger than most asteroids. Preliminary analysis shows that the five large moons are ice-rock conglomerates like the satellites of Saturn. The large Uranian moons appear, in fact, to be about 50 percent water ice, 20 percent carbon and nitrogen-based materials, and 30 percent rock. Their surfaces, almost uniformly dark gray in color, display varying degrees of geologic history.

Very ancient, heavily cratered surfaces are apparent on some of the moons, while others show strong evidence of internal geologic activity. Titania, for example, is marked by huge fault systems and canyons that indicate an active geologic history. These features may be the result of tectonic movement in its crust. Ariel has the brightest and possibly the geologically youngest surface in the Uranian moon system. It is largely devoid of craters greater than 50 kilometers (30 miles) in diameter, indicating that low-velocity material within the Uranian system itself peppered the surface, helping to obliterate larger, older craters.

Ariel also appears to have undergone a period of intense activity that led to many fault valleys and what appear to be extensive flows of icy material. Where many of the larger valleys intersect, their surfaces are smooth; this could indicate that the valley floors have been covered with younger icy flows. Umbriel is ancient and dark and appears to have undergone little geologic activity; large craters pockmark its surface. The darkness of Umbriel's surface may be due to a coating of dust and small debris somehow created nearby and confined to the vicinity of that moon's orbit.

The outermost of the moons discovered before Voyager, Oberon, also has an old, heavily cratered surface with little evidence of internal activity, other than some unknown dark material apparently covering the floors of many craters. Miranda, innermost of the five large moons, is one of the strangest bodies yet observed in the solar system. Voyager images, which showed some areas of the moon at resolutions of a kilometer or less, reveal huge fault canyons as deep as 20 kilometers (12 miles), terraced layers and a mixture of old and young surfaces.

The younger regions may have been produced by incomplete differentiation of the moon, a process in which upwelling of lighter material surfaced in limited areas. Alternatively, Miranda may be a conglomerate of material from an earlier time when the moon was fractured into pieces by a violent impact. Given Miranda's small size and low temperature (-187 degrees Celsius, or about -305 degrees Fahrenheit), the degree and diversity of the tectonic activity on this moon have surprised scientists. It is believed that an additional heat source, such as tidal heating caused by the gravitational tug of Uranus, must have been involved.
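Miranda's quoted surface temperature of -187 degrees Celsius is easy to double-check against the Fahrenheit scale:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

miranda_c = -187                 # Miranda's surface temperature, from the text
miranda_f = c_to_f(miranda_c)
print(f"{miranda_c} C is about {miranda_f:.0f} F")
```

The conversion gives roughly -305 degrees Fahrenheit, cold enough that water ice behaves like rigid rock, which is part of why Miranda's tectonics are so puzzling.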

In addition, some means must have mobilized the flow of icy material at low temperatures. The Voyager 2 flyby has made a wealth of information available on the satellites, ring system, atmosphere and geology of Uranus. It might even appear that our knowledge of the gas giant is nearly complete. Yet Uranus remains a stubbornly mysterious planet that jealously guards its secrets. Many questions taunt the talented astronomers of our day, and there is much yet to be learned about the celestial blue-green oddball that hangs on its side in outer space.

Why is its axis so unusually tilted? Was it due to a massive collision? Why does Uranus have so much less hydrogen and helium than Jupiter and Saturn? Is it simply because it's smaller, or because it is farther from the Sun? What causes the unusual magnetic field? And just how many times has Miranda been blown to bits only to coalesce again? These questions and many more remain to encourage the continued study of such a planet. Perhaps in a few years, and a couple of space probes later, we will have answers that are more concrete.

Mining in Space

On December 10, 1986 the Greater New York Section of the American Institute of Aeronautics and Astronautics (AIAA) and the engineering section of the New York Academy of Sciences jointly presented a program on mining the planets. The speakers were Greg Maryniak of the Space Studies Institute (SSI) and Dr. Carl Peterson of the Mining and Excavation Research Institute of M.I.T. Maryniak spoke first and began by commenting that the quintessential predicament of space flight is that everything launched from Earth must be accelerated to orbital velocity.

Related to this is the fact that the traditional way to create things in space has been to manufacture them on Earth and then launch them into orbit aboard large rockets. The difficulty with this approach is the huge cost per pound of boosting anything out of this planet's gravity well. Furthermore, Maryniak noted, since (at least in the near to medium term) the space program must depend upon the government for most of its funding, this economic drawback necessarily translates into a political problem.

Maryniak continued by noting that the early settlers in North America did not attempt to transport across the Atlantic everything then needed to sustain them in the New World. Rather they brought their tools with them and constructed their habitats from local materials. Hence, he suggested that the solution to the dilemma to which he referred required not so much a shift in technology as a shift in thinking. Space, he argued, should be considered not as a vacuum, totally devoid of everything. Rather, it should be regarded as an ocean, that is, a hostile environment but one having resources.

Among the resources of space, he suggested, are readily available solar power and potential surface mines on the Moon and, later, other celestial bodies as well. The Moon, Maryniak stated, contains many useful materials. Moreover, it is twenty-two times easier to accelerate a payload to lunar escape velocity than it is to accelerate the identical mass out of the Earth's gravity well. As a practical matter the advantage in terms of the energy required is even greater because of the absence of a lunar atmosphere.
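The "twenty-two times easier" figure can be checked from the two escape velocities, since kinetic energy per kilogram scales with the square of velocity. The velocities below are standard textbook values, not numbers from the talk:

```python
V_ESC_EARTH = 11.19  # km/s, escape velocity from Earth's surface
V_ESC_MOON = 2.38    # km/s, escape velocity from the Moon's surface

# Kinetic energy per unit mass is v^2 / 2, so the energy ratio is (v1/v2)^2.
energy_ratio = (V_ESC_EARTH / V_ESC_MOON) ** 2
print(f"escape-energy ratio is about {energy_ratio:.0f} to 1")
```

The ratio comes out to roughly 22 to 1, matching Maryniak's figure before even accounting for the lack of lunar atmospheric drag.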

Among other things, this permits the use of devices such as electromagnetic accelerators (mass drivers) to launch payloads from the Moon's surface. Even raw Lunar soil is useful as shielding for space stations and other space habitats. At present, he noted, exposure to radiation will prevent anyone from spending a total of more than six months out of his or her entire lifetime on the space station. At the other end of the scale, Lunar soil can be processed into its constituent materials. Intermediate steps are also of great interest.

For example, the Moon's soil is rich in oxygen, which makes up most of the mass of water and rocket propellant. This oxygen could be "cooked" out of the Lunar soil. Since most of the mass of the equipment needed to accomplish this would consist of relatively low-technology hardware, Maryniak suggested the possibility that, at least in the longer term, the extraction plant itself could be manufactured largely on the Moon. Another possibility currently being examined is the manufacture of glass from Lunar soil for use as a construction material.

The techniques involved, according to Maryniak, are crude but effective. (In answer to a question posed by a member of the audience after the formal presentation, Maryniak stated that he believed the brittle properties of glass could be overcome by using glass-glass composites. He also suggested yet another possibility, that of using Lunar soil as a basis for concrete.) One possible application of such Moon-made glass would be in glass-glass composite beams. Among other things, these could be employed as structural elements in a solar power satellite (SPS).

While interest in the SPS has waned in this country, at least temporarily, it is a major focus of attention in the U.S.S.R., Western Europe and Japan. In particular, the Soviets have stated that they will build an SPS by the year 2000 (although they plan on using Earth-launched materials). Similarly, the Japanese are conducting SPS-related sounding rocket tests. SSI studies have suggested that more than 90%, and perhaps as much as 99%, of the mass of an SPS can be constructed out of Lunar materials.

According to Maryniak, a fair amount of work has already been performed on the layout of Lunar mines and how to separate materials on the Moon. Different techniques from those employed on Earth must be used because of the absence of water on the Moon. On the other hand, Lunar materials processing can involve the use of self-replicating factories. Such a procedure may be able to produce a so-called "mass payback ratio" of 500 to 1. That is, the mass of the manufactories which can be established by this method will equal 500 times the mass of the original "seed" plant emplaced on the Moon.
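One way to build intuition for a 500-to-1 mass payback is exponential self-replication. The toy model below, with its assumption that each generation of factories builds an equal mass of new factories, is my own illustration rather than SSI's actual analysis:

```python
TARGET_RATIO = 500   # total factory mass / seed mass, the figure from the talk

# Toy assumption: in each generation the installed factories build an
# equal mass of new factories, doubling total manufacturing capacity.
capacity = 1.0       # in units of the seed plant's mass
generations = 0
while capacity < TARGET_RATIO:
    capacity *= 2
    generations += 1

print(f"{generations} doubling generations reach {capacity:.0f}x the seed mass")
```

Under that doubling assumption, only nine replication generations are needed to exceed 500 times the seed mass, which is why self-replication makes such small initial payloads attractive.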

Maryniak also discussed the mining of asteroids using mass-driver engines, a technique which SSI has long advocated. Essentially this would entail a spacecraft capturing either a sizable fragment of a large asteroid or, preferably, an entire small asteroid. The spacecraft would be equipped with machinery to extract minerals and other useful materials from the asteroidal mass. The slag or other waste products generated in this process would be reduced to finely pulverized form and accelerated by a mass driver in order to propel the captured asteroid into an orbit around Earth.

If the Earth has so-called Trojan asteroids, as does Jupiter, the energy required to bring materials from them to low Earth orbit (LEO) would be only 1% as great as that required to launch the same amount of mass from Earth. (Once again, moreover, the fact that more economical means of propulsion can be used for orbital transfers than for accelerating material to orbital velocity would likely make the practical advantage even greater.) However, Maryniak noted that observations already performed have ruled out any Earth-Trojan bodies larger than one mile in diameter.

In addition to the previously mentioned SPS, another possible use for materials mined from the planets would be in the construction of space colonies. In this connection Maryniak noted that a so-called biosphere was presently being constructed outside of Tucson, Arizona. When it is completed, eight people will inhabit it for two years, entirely sealed off from the outside world. One of the objectives of this experiment will be to prove the concept of long-duration closed-cycle life support systems. As the foregoing illustrates, Maryniak's primary focus was upon mining the planets as a source of materials for use in space.

Dr. Peterson's principal interest, on the other hand, was the potential application of techniques and equipment developed for use on the Moon and the asteroids to the mining industry here on Earth. Dr. Peterson began his presentation by noting that the U.S. mining industry is in very poor condition. In particular, it has been criticized for using what has been described as "neanderthal technology." Dr. Peterson clearly implied that such criticism is justified, noting that sooner or later the philosophy of not doing what you can't make money on today will come back to haunt people.

A possible solution to this problem, Dr. Peterson suggested, is a marriage between mining and aerospace. (As an aside, Dr. Peterson's admonition would appear to be as applicable to the space program as it is to the mining industry, and especially to the reluctance of both the government and the private sector to fund long lead-time space projects. The current problems NASA is having getting funding for the space station approved by Congress, and the failure to begin now to implement the recommendations of the National Commission on Space, particularly come to mind.)

Part of the mining industry's difficulty, according to Dr. Peterson, is that it represents a rather small market. This tends to discourage long-range research. The result is, on the one hand, brilliant solutions to individual, immediate problems, but on the other hand overall systems of incredible complexity. This complexity, which according to Dr. Peterson has now reached intolerable levels, results from the fact that mining machinery evolves one step at a time and is thus subject to the restriction that each new subsystem must be compatible with all of the other parts of the system that have not changed.

Using slides to illustrate his point, Dr. Peterson noted that so-called "continuous" coal mining machines can in fact operate only 50% of the time. The machine must stop when the shuttle car, which removes the coal, is full. The shuttle cars, moreover, have to stay out of each other's way. Furthermore, not only are Earthbound mining machines too heavy to take into space, they are rapidly becoming too heavy to take into mines on Earth. When humanity begins to colonize the Moon, Dr. Peterson asserted, it will eventually prove necessary to go below the surface for the construction of habitats, even if the extraction of Lunar materials can be restricted to surface mining operations.

As a result, the same problems currently plaguing Earthbound mining will be encountered. This is where Earth and Moon mining can converge. Since Moon mining will start from square one, Dr. Peterson implied, systems can be designed as a whole rather than piecemeal. By the same token, for the reasons mentioned, there is a need in the case of Earthbound mining machinery to back up and look at systems as a whole. What is required, therefore, is a research program aimed at developing technology that will be useful on the Moon but, pending development of Lunar mining operations, can also be used down here on Earth.

In particular, the mining industry on Earth is inhibited by overly complex equipment unsuited to today's opportunities in remote control and automation. It needs machines simple enough to take advantage of tele-operation and automation. The same needs exist with respect to the Moon. Therefore the mining institute hopes to raise enough funds for sustained research into mining techniques useful both on Earth and on other celestial bodies. In this last connection, Dr. Peterson noted that the mining industry is subject to the same problem as the aerospace industry: Congress is reluctant to fund long-range research.

In addition, the mining industry has a problem of its own: because individual companies are highly competitive, research results are generally not shared. Dr. Peterson acknowledged, however, that there are differences between mining on Earth and mining on other planetary bodies. The most important is the one already mentioned – heavy equipment cannot be used in space. This will mean additional problems for space miners. Unlike the vacuum of space, rock does not provide a predictable environment.

Furthermore, the constraint in mining is not energy requirements but force requirements: rock requires heavy forces to move. In other words, one reason Earthbound mining equipment is so heavy is that anything lighter breaks. This brute-force method, however, cannot be used in space. Entirely aside from weight limitations, heavy forces cannot be generated on the Moon, and especially on asteroids, because lower gravity means less traction. NASA has done some research on certain details of this problem, but there is a need for fundamental thinking about how to avoid using big forces.

One solution, although it would be limited to surface mining, is the slusher-scoop. This device scoops up material in a bucket dragged across the surface by cables and a winch. One obvious advantage of this method is that it bypasses low-gravity traction problems. Slushers are already in use here on Earth. According to Peterson, the device was invented by a man named Pat Farell. Farell was, Peterson stated, a very innovative mining engineer, partly because he did not attend college and therefore never learned what couldn't be done.

Some possible alternatives to the use of big forces were discussed during the question period that followed the formal presentations. One was the so-called laser cutter. This, Peterson indicated, is a potential solution if power problems can be overcome. It does a good job and leaves behind a vitrified tube in the rock. Another possibility is fusion pellets, which create shock waves by impact. Nuclear charges, on the other hand, are not practical. Aside from considerations raised by treaties banning nuclear weapons in space, they would throw material too far in a low-gravity environment.

The Moon – The Only Natural Satellite of Earth

The moon is the only natural satellite of Earth. It orbits the Earth at a distance of 384,400 km with an average speed of 3,700 km per hour. It has a diameter of 3,476 km, about a quarter that of the Earth, and a mass of 7.35e22 kg. The moon is the second brightest object in the sky after the sun. The gravitational forces between the Earth and the moon cause some interesting effects; tides are the most obvious. The moon has no atmosphere, but evidence from the United States Department of Defense Clementine spacecraft suggests that there may be water ice in some permanently shaded deep craters near the moon's north and south poles.
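The distance and speed quoted above are enough to check the moon's orbital period; a minimal sketch, assuming a circular orbit:

```python
import math

distance_km = 384_400   # orbital radius, from the figures above
speed_kmh = 3700        # average orbital speed, from the figures above

circumference = 2 * math.pi * distance_km    # km travelled per orbit
period_hours = circumference / speed_kmh
period_days = period_hours / 24
print(f"{period_days:.1f}")   # about 27 days: the sidereal month
```

The result, about 27.3 days, is the moon's true orbital period relative to the stars.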

Most of the moon's surface is covered with regolith, a mixture of fine dust and rocky debris produced by meteor impacts. There are two types of terrain on the moon. One is the heavily cratered and very old highlands. The other is the relatively smooth and younger maria, regions that were flooded with molten lava. Throughout the 19th and 20th centuries, visual exploration through powerful telescopes yielded a fairly comprehensive picture of the visible side of the moon. The hitherto unseen far side of the moon was first revealed to the world in October 1959 through photographs made by the Soviet Lunik III spacecraft.

These photographs showed that the far side of the moon is similar to the near side, except that the large lunar maria are absent. Craters are now known to cover the entire moon, ranging in size from huge, ringed maria down to microscopic pits. The entire moon has about 3 trillion craters larger than about 1 m in diameter. The moon shows different phases as it moves along its orbit around the earth. Half the moon is always in sunlight, just as half the earth has day while the other half has night. The phases of the moon depend on how much of the sunlit half can be seen at any one time.

In the new moon, the face is completely in shadow. About a week later, the moon is in its first quarter, resembling a half-circle; another week later, the full moon shows its fully lighted surface; a week afterward, in its last quarter, the moon appears as a half-circle again. The entire cycle repeats each lunar month, which is approximately 29.5 days. The moon is full when it is farther from the sun than the earth; it is new when it is closer. When it is more than half-illuminated, it is said to be in gibbous phase.
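The 29.5-day phase cycle is longer than the moon's 27.3-day orbit because the earth is also moving around the sun, so the moon must travel a little extra to line up with the sun again. Combining the two rates reproduces the lunar month:

```python
# Synodic (phase) month from the sidereal (orbital) month:
# the phase rate is the orbital rate minus the earth's rate around the sun.
sidereal_month = 27.32   # days per lunar orbit relative to the stars
year = 365.25            # days per earth orbit of the sun

synodic_month = 1 / (1 / sidereal_month - 1 / year)
print(f"{synodic_month:.1f}")   # about 29.5 days: the lunar month
```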

The moon is waning when it progresses from full to new, and waxing as it proceeds again to full. Temperatures on its surface are extreme, ranging from a maximum of 127 C (261 F) at lunar noon to a minimum of -173 C (-279 F) just before lunar dawn. The harvest moon is the full moon at harvest time in the North Temperate Zone or, more exactly, the full moon occurring just before the autumnal equinox on about September 23. During this season the moon rises at a point opposite to the sun, close to the exact eastern point of the horizon.
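The Fahrenheit equivalents above follow from the standard conversion F = C × 9/5 + 32:

```python
def c_to_f(celsius):
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(round(c_to_f(127)))    # 261, lunar noon
print(round(c_to_f(-173)))   # -279, just before lunar dawn
```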

Moreover, the moon rises only a few minutes later each night, affording on several successive evenings an attractive moonrise close to sunset and strong moonlight almost all night if the sky is not clouded. The continuation of moonlight after sunset is useful to farmers in northern latitudes, who are then harvesting their crops. The full moon following the harvest moon, which exhibits the same phenomena to a lesser degree, is called the hunter's moon. A similar phenomenon to the harvest moon is observed in southern latitudes at the spring equinox, on about March 21.

The Space Shuttle Challenger Disaster

The Space Shuttle Challenger disaster was a preventable disaster that NASA tried to cover up by calling it a mysterious accident. However, two men had the courage to bring the true story to the eyes of the public, and it is Richard Cook and Roger Boisjoly whom we have to thank. Many lessons can be learned from this disaster to help prevent future disasters and to improve organizational ethics. One of the key topics behind the Challenger disaster is organizational culture.

One aspect of an organizational culture is its observable culture, which is what one sees and hears when walking around an organization. There are four parts to the observable culture: stories, heroes, rites and rituals, and symbols. The first is stories, the tales told among an organization's members. In the Challenger incident there were essentially four organizations thrown together to form one: Morton Thiokol, Marshall Space Flight Center, Johnson Space Center and NASA Headquarters.

All of these organizations had the same types of stories to tell. At Morton Thiokol, they talked about their product and the big contract they had received from NASA. At NASA, members retold stories of previous space missions and of being the first people to land on the moon. Second are their heroes. At Morton Thiokol, the heroes might have been the founders of the organization or its top executives, like Charles Locke or Jerry Mason. At NASA, the heroes might have been Neil Armstrong, staff or any members of the organization.

All of the people chosen as heroes set the standards for their organization and conducted themselves for others to follow. Third are the rites and rituals that members of an organization conduct. Since both of these organizations worked together toward the same goal, one ritual was the celebration after each successful launch and landing of a space shuttle. A rite or ritual shows a sense of group unity and friendship among an organization's members. Finally, there are the symbols an organization uses, which may carry special meaning in its communication.

Symbols in these organizations are very important because, in their line of work, symbols could mean the difference between life and death. For example, in the space shuttle there are different symbols on the controls. If an emergency light goes on, the crew must know these symbols in order to fix the problem or abort the mission. All four of these aspects are centered on the organization's core culture. An organization's core culture is its beliefs about the right ways to behave. When Thiokol and NASA first started to plan the Challenger mission, flaws in this core culture were already present, and they ultimately caused the Challenger disaster.

To an observer, everything at both of these organizations dealing with the Challenger mission appeared perfect and right on schedule. The top executives in these organizations told their employees to be quiet and act as if everything was fine. They did this so that the media and the people of the United States would believe in and have great admiration for NASA. The Challenger mission was different from previous missions because it was the first time a private citizen would be going into outer space. At this point in these organizations' histories, it was essential to their futures to boost Americans' opinion of the space program.

The executives of these organizations knew how important this mission was to their success, so they pushed for the mission to happen and for employees to convince the public of the program's growth and success. Within the organizational culture, worker empowerment was highly stressed, although top management did not listen. This was also very important in trying to prevent the Challenger disaster. Both Thiokol and NASA asked for employees' opinions on whether the launch should be a go or whether there were problems that might arise.

When the engineers gave their opinion that it was too dangerous to launch, the top executives refused to listen to them and voted to launch, asking only the top executives to vote. In Challenger's case, the engineers were the people who knew whether or not it would be safe to launch. The employees of these organizations had the expertise on the construction of the shuttle, not its top executives. The executives should have listened to the experts instead of making their decision based on the damage to their reputations if they were to cancel the launch.

Worker empowerment in these organizations was well carried out by the employees; however, the top executives did not hold up their part of the bargain, and that is one of the many problems that led to the Challenger disaster. Another problem with these organizations' culture is workplace ethics. At the beginning, NASA stressed the importance of ethics, and that is what transformed NASA into a successful organization. NASA was concerned for its astronauts and for the safety of the members of the organization and the world.

When the American public lost interest in the space program, NASA's and Thiokol's top executives drifted away from safety, which eventually led to the downfall of the Challenger. Even during the Challenger days, the employees followed the organizations' ethics codes; the top executives did not. They were the people who stressed ethics but taught silence to their employees. An organization cannot function when its top executives are not making ethical decisions, and that is what happened to Thiokol and NASA. Another key problem with Thiokol and NASA was their decision-making.

Thiokol and NASA made the worst decision in the space program's history, one in which human lives were lost. The reason it was a bad organizational decision is that the information known to the organizations was sufficient to have cancelled the launch; in addition, the organizations had known of the technical problem for years before the Challenger launch. They knew of the problem from the beginning but went about fixing it in the wrong way. They decided that it would be best for the organizations to try to fix the problem while continuing with the launches.

In this case, the organizations went about fixing the problem in a systematic way. They formed a task force and approached the problem in a rational and analytical fashion. The problem was not in the task force but in the top levels of the organizations. The problem-solving process has five steps. The first is to find and define the problem, which they did. The second is to generate and evaluate alternative solutions, which they were doing. However, while they were finding solutions and alternatives, they continued to fly the shuttle with its O-ring problem.

This was a bad managerial decision made by the organizations' top executives. These executives knew the risks they were taking every time another shuttle took off. Each time, they lowered their standards for the weather and launch conditions, and this eventually led to the disaster. The third step is to choose a preferred solution and conduct the ethics double-check. This step was never reached, because they never found a solution to the problem before the disaster. The fourth and fifth steps, to implement the solution and evaluate the results, were not carried out until after the disaster.

The decision on whether to launch was an escalating commitment: the tendency to continue to pursue a course of action even though it is not working. This was very reflective of the executives' decision to launch. All of the previous missions had been successes, but from a technical standpoint each mission came closer and closer to disaster. Since every trip was a success in the sense that there was no disaster, Thiokol and NASA lowered their conditions for launch, which increased the chance of disaster. Over time, this led to the Challenger disaster.

The decision to launch was a consultative decision. The executives of Thiokol and NASA sought expert opinion from the engineers; however, the top executives made the final decision. Among the top executives, though, it was a group decision made in the form of a vote. The decision to launch could have been better executed if they had followed the ten ways to increase creativity. First, look for more than one right answer or best way. Second, avoid being too logical; let your thinking roam. Third, challenge rules, ask why, and don't settle for the status quo. Fourth, ask what-if questions.

Fifth, let ambiguity help you and others see things differently. Sixth, don't be afraid of error; let trial and error be a path to success, if lives are not at stake. Seventh, take time to play and experiment. Eighth, open up to other viewpoints and perspectives and support nonconformity. Finally, believe in creativity. If Thiokol and NASA had followed these, then maybe they would have decided to call off the launch. A third key aspect is the whistleblowers. In the Challenger disaster there were two main whistleblowers: Richard Cook, who worked for NASA, and Roger Boisjoly, who was the SRM Seals Engineer at Thiokol.

Whistleblowers expose the misdeeds of others in organizations. Both Cook and Boisjoly wrote many memos to their bosses and colleagues warning them of the danger. After the Challenger disaster, they continued to write memos expressing their views to fellow members of the organization. Their main significance came when they spoke before the investigating committee. There, they revealed that both Thiokol and NASA had known of the O-ring problem and of the consequences of launching at temperatures lower than the limit.

These men made it clear to the nation that there were major organizational problems. They had the courage to go against the norms of the organization because of the bad ethics being practiced throughout it. In Thiokol and NASA, many employees were too scared to come forward, or they did not think they were doing anything wrong because the organizations' top executives approved of the behavior. A checklist for making ethical decisions is, first, to recognize the ethical dilemma. Second, get the facts. Third, identify your options and test them.

Fourth, decide which option to follow and double-check your decision. Finally, take action. These steps can help a whistleblower make the right ethical decision. A fourth key aspect is the organizations' corporate social responsibility: the obligation of an organization to act in ways that serve both its own interests and the interests of its stakeholders. In American society it is important for an organization to stress social responsibility in order to be successful; this was true of NASA at the beginning of the organization's history.

In the case of the Challenger disaster, NASA and Thiokol assumed a defensive strategy, seeming to do only the minimum legally required on the Challenger mission. These organizations were able to bend and break many requirements because, in their state-of-the-art field, no other organization knew what they did. This could be one reason Thiokol got away with its poor ethical standards: no outside organization could tell whether or not it was behaving unethically.

Thiokol and NASA made an unethical decision when they chose to risk human lives and proceed with countless launches, escaping near-tragedy each time. These organizations were pushing their luck to the limit. As organizations, both Thiokol and NASA had a responsibility to protect their crew members from disaster. Instead, they ignored the warnings of a number of employees and ultimately risked human lives. Thiokol and NASA were playing a game of Russian roulette. A fifth key aspect that led to a breakdown in the organization was the relationship between management and the technical staff.

This is the relationship between the top executives and the designers and engineers of the space shuttle. In the Challenger disaster this relationship was thrown into total chaos, and it may have been the biggest cause of the disaster. In both Thiokol and NASA, upper management ignored the warnings of the technical employees. Years before the Challenger launch, NASA and Thiokol had completely redesigned the space shuttle, turning it into a reusable vehicle intended for countless flights. In this design they had a problem with the sealing of the O-rings, the rings that stop the passage of gas in the rockets.

Every time the shuttle came back from space there was erosion on both of the seals; this became an immediate concern for Thiokol and NASA. However, the organizations decided that it would be better for both companies to continue with the launches so as not to alarm the American people. Over the years leading up to the Challenger launch there were countless memos written by the technical employees of both companies warning of the danger, but they were ignored. In the few days before the launch, both companies met, technical employees and top executives together, to discuss cancelling it.

Although the executives listened to the other employees, they ignored their warnings and decided that the decision should be an executive decision, not a technical one. This is a clear case of upper-level management having too much power. The top executives were willing to take the risk of a disaster in order to save the reputations they would have lost by cancelling. The main problem with these organizations was that their top management was making poor decisions grounded in poor ethical standards, and these filtered down throughout the organization. The relationship between the upper and lower levels of management was a bad one.

Upper-level management was too focused on fame and fortune rather than safety. In the case of the Challenger disaster, NASA's matrix organizational structure was not in proper alignment. The main problem with the structure was the communication between the different project managers. It seemed as though each project manager did not want to disrupt the others or deviate from the expected launch date. The general managers, the top executives, made it clear to the project managers that this launch was very important to the success of the organization.

The power that the general managers held was incomprehensible. They spread fear throughout the different groups: anyone who did anything against the norm would be punished. Another problem with the structure was a sense of "do as I say, not as I do." This kind of mentality sent mixed signals down through the organization and put the organization's members in a no-win situation. They did what their superiors told them, yet the members knew that what they were doing, while not illegal, was very unethical.

The structural system of the organization is not to blame; it is the top executives who are at fault for this preventable disaster. Even though the structure was spread out across the United States, communication was very good. It was the individuals at the top of the organization who were the problem. If they had listened to the experts on the problem rather than to themselves, the disaster would have been prevented. The heads of the different departments showed callous disregard for their fellow employees and for human life. These people took a gamble that blew up in their faces.

Of the twenty or so top executives, only half remained after the disaster. Most of those who left were the people responsible for the launching of the Challenger. From studying the Challenger disaster, I have mainly learned that I would not want to be a part of an organization that practices unethical behavior. I also do not want to be a part of an organization where the top executives will not listen to the opinions of the workers. This is especially important to me because it is the workers who know what is working and what is not.

The workers must come into play in the decision-making process because they are the people who put in the labor to produce the product or outcome. Finally, I learned the importance of a whistleblower and how to protect yourself if you decide to become one. A whistleblower must gather all of the facts and defend only the facts that he or she knows, and must be sure of what he or she is getting involved in, because it could get ugly. The Challenger disaster was probably one of the most preventable organizational disasters our nation has ever faced.

It is a shame that seven human lives were lost, but the knowledge and lessons that people and organizations have taken from this experience were far greater than anyone could have imagined. The only good to come of it is that it may have prevented another tragedy in some other organization. I found the movie about the Challenger disaster very interesting and educational to watch, although I feel a better assignment could be designed to cover more of the material from the semester. I did like doing this paper because the subject was very interesting to me.

Life Beyond Earth

Do you think it's possible to find aliens in your lifetime? The chances that an extraterrestrial civilization would actually come to the Earth are slim. Even so, the best way to find extraterrestrial life is not space exploration but electronic signals. Signals can carry words, numbers, and pictures cheaply and at the speed of light, while the amount of time and energy required for physical travel would be enormous.

The amount of energy required to accelerate a spacecraft weighing several thousand tons to even a moderate fraction of the speed of light would be billions of times more than the energy needed to send out a radio beacon (Big Ear Radio Observatory 4). Therefore, it's more likely that they would communicate instead, if indeed there are intelligent beings out there who want to communicate with us (Life Beyond Earth). Today we have the means to broadcast messages to the planets and stars, but is anyone out there listening?
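The "billions of times" comparison can be sanity-checked with rough numbers. The spacecraft mass, cruise speed, and beacon running time below are illustrative assumptions, not figures from the source; only the 50,000-watt transmitter power comes from the essay:

```python
# A 5,000-ton craft at 10% of light speed vs. a 50,000-watt radio
# beacon running continuously for a full year (illustrative assumptions).
mass = 5.0e6                  # kg (5,000 metric tons, assumed)
v = 0.1 * 3.0e8               # m/s, 10% of the speed of light
kinetic_energy = 0.5 * mass * v ** 2    # joules, ignoring relativity

beacon_power = 5.0e4          # watts
year_seconds = 3.156e7        # seconds in a year
beacon_energy = beacon_power * year_seconds   # joules

ratio = kinetic_energy / beacon_energy
print(f"{ratio:.1e}")         # on the order of a billion
```

Even with these generous assumptions for the beacon, the spacecraft needs over a billion times more energy, which is the essay's point.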

If we are leaking radio messages into space, the inhabitants of other planets, if they exist, might be doing the same thing. Large radio telescopes on Earth could detect such radio leaks from civilizations in nearby star systems, as well as stronger signals dispatched from planets thousands of light-years away (Boyle 2). The idea of searching for extraterrestrial life has been dreamed of through the ages, but the modern methodology began in 1959, when two physicists from Cornell showed that microwave radio waves could be used for interstellar communication (2).

The first deliberate signal was sent from the 1,000-foot Arecibo Observatory in Puerto Rico on Nov. 16, 1974, during a ceremony celebrating an upgrade of the radio telescope. The signal was sent out toward M13, a globular cluster comprising some 300,000 stars about 21,000 light-years from Earth. Astronomers figured the focused 50,000-watt signal would be strong enough to be picked up by an antenna somewhere in that cluster, but ironically, the normal rotation of the galaxy means that M13 will have moved out of the way by the time the signal gets that far (Boyle 3).

Civilizations might communicate in one of two ways. The first is by sending signals unintentionally. We do this all the time ourselves. For over fifty years now, our television and radio signals have been radiating out into space like a giant shock wave, or like waves radiating out from a pebble dropped into a pond. Another intelligent civilization could intercept them and wonder what they say. Imagine an alien race picking up one of our television signals, decoding it, and then sending what it believes to be an intelligent reply (Big Ear Radio Observatory 5).

At the time, some observers wondered whether it was wise to send out such a signal. Some thought it was egotistical to transmit a message to aliens without the backing of a worldwide authority like the United Nations, and some worried that the signal might even attract unwelcome extraterrestrial interest. Other astronomers point out that Earth has been leaking radio signals into space for more than fifty years and that at least some of those transmissions are strong enough to alert nearby star systems to our presence (6).

Radio astronomers recently showed off a network of prototype satellite dishes that will serve as the proving ground for a new generation of radio telescopes, a project they call the 1HT. They said such arrays, combined by computer software, would boost their ability to communicate with interplanetary probes, study distant planets and search for alien signals (7). "This prototype launches the next generation of SETI research in a bold way," Tarter, the institute's director of SETI research, said in a written statement.

"There is also tremendous potential for other radio astronomy. The 1HT is a fundamentally new way to build radio telescopes, and it's not an overstatement to say that the world astronomy community is paying very close attention to this project" (Boyle 4). Tarter emphasized the wider applications of the 1HT and the Square Kilometer Array. She said a telescope array with an area equivalent to a square kilometer could identify Jupiter-size planets beyond our solar system as far away as thirty light-years from Earth.

A telescope of this size could also map the winds and jets created during star formation and analyze the chemistry of the dusty disks that serve as the birthplaces of stars and planets; serve as the model for a next-generation Deep Space Network that would communicate with robotic explorers; produce radar images of near-Earth asteroids that are ten times better than currently possible; and, finally, expand the SETI search to up to a million star systems. The Square Kilometer Array would be unlike any existing radio telescope.

An alien message would also most likely be what we call a narrowband signal, meaning a signal at a very precise frequency. Radio stations are examples of narrowband signals. Between radio stations, you hear a hissing sound; this is broadband noise. The stars (and other celestial objects) also put out broadband noise. An intelligent, communicating civilization would probably use a narrowband signal rather than a broadband one for a beacon, since it wouldn't want its message to be mistaken for regular, ordinary star noise (Billinham 4).

Perhaps there are civilizations that are much more advanced than we are. Just imagine the wealth of knowledge that would be at our fingertips if we were to discover such a signal and decipher it (Life Beyond Earth). Perhaps it would teach us how to build a spaceship that travels close to the speed of light. Or maybe it would tell us how to solve our planetary ecological crisis. Might it even solve our global political problems? The benefits of such a discovery could be beyond our wildest dreams!

Star Traveling To The Millennium

Now, as we rapidly approach the millennium, many people are getting the blues. This seems absurd, because the millennium offers all of us a perfect chance to start again. NASA is embracing this chance to grow and expand its departments. The phrase "Space, the final frontier" expresses the world's obsession with space travel, which started centuries before it became popular 30 years ago in Gene Roddenberry's TV series "Star Trek." Science fiction has entertained our culture for years.

Movies such as Star Wars and Planet of the Apes have helped fuel our desire to get off the planet Earth, find new life forms, and conquer the stars. Science-fiction dreams of worlds beyond our solar system have taken on a more realistic aspect since astronomers discovered that the universe contains planets in surprisingly large numbers. Studying those distant planets might show how special Earth really is and tell us more about our place in the universe (NASA homepage). Finding a planet that can support human life would revolutionize our society into the Jetsons'.

These ideas are soon to become our realities. NASA is currently experimenting with many methods of exploring the outer edges of the galaxy. To understand NASA's excitement about star traveling, we will first fly through current projects concerning space travel, second explore three possible technologies being developed for the year 2000, and finally take a trip into our future to experience how star traveling will change our lives as we approach the end of the second millennium.

NASA's goal of "faster, better, cheaper" has motivated it to develop new mission concepts and to validate never-before-used technologies in space. The new technologies, if proven to work, will revolutionize space exploration in the next century. According to NASA's New Millennium Program home page, last updated on September 16, 1999, NASA's current project, Deep Space 1, demonstrates some of its most exotic technologies. One of the most impressive is the testing of an ion engine that is supposed to be 10 times more efficient than liquid or solid rocket engines.

Deep Space 1 was launched on October 24, 1998. It is the first mission under NASA's New Millennium Program, which features flight testing of new technology, rather than science, as its main focus (Rayman 4). These new technologies will make the spacecraft of the future smaller, more economical, more reliable, and closer to the goal of efficient space travel. According to Dr. Marc Rayman, the deputy mission manager and chief mission engineer for Deep Space 1, there are 12 advanced technologies onboard the spacecraft, and seven have completed testing (5).

Despite some glitches, the great majority of the advanced technologies have worked extremely well. Rayman also said, "Mission designers and scientists can now confidently use them on future missions" (4). All of this testing is paving the way for star traveling. The great stumbling block on this road to the stars, however, is the sheer difficulty of getting anywhere in space. Merely achieving orbit is an expensive and risky proposition. Current space propulsion technologies make it a stretch to send probes even to distant destinations within the solar system.

Spacecraft have to follow multiyear, indirect trajectories that loop around several planets in order to gain velocity from gravity assists. Even then, the craft lacks the energy to come back. Fortunately, engineers have no shortage of inventive plans for new propulsion systems that might someday expand human presence beyond this planet. Antimatter, compact nuclear rockets, and light sails are three ideas that engineers are experimenting with. These ideas are still in their embryonic stages, and it is already more than apparent that the task is about as difficult as it could possibly be while still remaining possible.

Robert Frisbee, a researcher at NASA's Jet Propulsion Lab, said, "Right now, based on our current level of ignorance, all three energy sources are equally impossible or possible" (DiChristina 2). Some of these ideas are just radical refinements of current rocket or jet technologies. Others harness nuclear energy or ride on powerful laser beams. Even the equivalents of "space elevators" for hoisting cargoes into orbit are on the drawing boards. Out of all the ideas that have been brought up, NASA is seriously exploring three.

One of the first possibilities, but the hardest to obtain, is antimatter. When antimatter comes into contact with regular matter, the two annihilate and their mass is converted into energy. Stephanie Leifer of the Jet Propulsion Lab stated in the June 1999 issue of Popular Science that, “The antimatter-matter reaction has the highest energy density we know of” (55). The reaction releases charged particles that could be directed out the back of the spacecraft for thrust using magnetic “nozzles.” A small problem is that engineers do not yet know how to make a nozzle big enough for an antimatter engine.
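As a rough sanity check on that energy-density claim, the annihilation energy follows directly from E = mc². The sketch below uses purely illustrative masses and a textbook value for the speed of light; none of the numbers come from the article’s sources.

```python
# Back-of-the-envelope check of annihilation energy via E = m * c**2.
# All figures here are illustrative only.

C = 2.998e8  # speed of light, m/s

def annihilation_energy_joules(antimatter_kg):
    """Energy released when antimatter_kg of antimatter annihilates
    with an equal mass of ordinary matter (both masses convert)."""
    return 2 * antimatter_kg * C**2

# One gram of antimatter annihilating with one gram of matter:
energy = annihilation_energy_joules(0.001)
print(f"{energy:.2e} J")  # about 1.8e14 J

# Gasoline releases roughly 4.6e7 J per kilogram when burned,
# so this equals the energy of burning millions of kilograms of fuel.
print(f"{energy / 4.6e7:.1e} kg of gasoline equivalent")
```

Even a single gram dwarfs any chemical fuel, which is why the reaction is so attractive despite the production and storage problems described next.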

Then add another problem: making thousands of tons of antimatter when only mere nanograms are produced at special laboratories like Fermilab and CERN. The largest problem of all is that antimatter cannot be allowed to make contact with ordinary matter. So far it has been extremely difficult to store more than a tiny amount of antimatter in magnetic traps, which keep the charged particles from hitting the containment walls and annihilating. To solve the problem, physicist Gerald Smith and his team at Penn State decided to tackle it on several fronts.

They were able to build a shoebox-size trap that holds 100 million antiprotons (Beardsley 5). But until scientists can contain far larger quantities, the antimatter-matter reaction will remain on the shelf. A second energy source is nuclear fission, used in what are called compact nuclear rockets. These rockets can impart a maximum velocity increment of up to about 22 kilometers a second, even though that is nowhere near the energy the antimatter reaction can create. Hydrogen, the propellant that a fission reactor heats and expels, is much easier to obtain, and engineers are closer to building a rocket motor powered by nuclear fission.
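The 22 kilometers per second figure can be put in context with the Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m₁). The exhaust velocities and mass ratio below are generic ballpark assumptions for illustration, not values from any specific engine design mentioned in the article.

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1),
# where m0 / m1 is the ratio of fueled mass to empty mass.
# Exhaust velocities and mass ratio are ballpark assumptions.

def delta_v_ms(exhaust_velocity_ms, mass_ratio):
    return exhaust_velocity_ms * math.log(mass_ratio)

MASS_RATIO = 12  # assumed fueled-to-dry mass ratio

for name, ve in [("chemical (~4.5 km/s exhaust)", 4500),
                 ("nuclear thermal (~9 km/s exhaust)", 9000)]:
    print(f"{name}: {delta_v_ms(ve, MASS_RATIO) / 1000:.1f} km/s")
```

With these assumptions the nuclear case lands near the article’s 22 km/s, and the comparison shows why doubling exhaust velocity matters: delta-v scales linearly with it, while adding propellant only helps logarithmically.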

According to the Scientific American web page last updated on September 12, 1999, James Powell and his colleagues have designed a compact nuclear rocket engine that they call Mitee (4). They estimate the rocket could be built in six years for about 600 million dollars, which is modest in the context of past space launches. Another key attraction of nuclear propulsion is that its propellant, hydrogen, is widely available in gaseous form on the giant planets of the outer solar system and in the water and ice of distant moons and planets.

Because the nuclear fuel would be relatively long lasting, a nuclear-powered craft could in theory tour the outer solar system for 10 or 15 years, replenishing its hydrogen propellant as necessary (7). Its reactor would start up well away from Earth, so a nuclear-powered spacecraft could actually be made safer than some deep-space probes that are powered by chemical thrusters. In the near term, only nuclear rockets could give us the kind of power, reliability, and flexibility we would need to dramatically improve our understanding of the still largely mysterious worlds at the far edges of our solar system.

The last chief option is to leave the engine at home and power the spacecraft with solar sails, more commonly called light sails. Light sails may initially be more promising than antimatter or fission. According to the previously mentioned issue of Popular Science, Robert Forward, a retired Hughes physicist who now consults for NASA, concluded that, “in terms of the closest and cleanest development program light sails may be the first step” (3). The sail literally allows the spacecraft to be pushed through space by photons from a laser or the sun.

When a photon collides with the sail, it will either be absorbed by the sail material or reflect off it. Both processes impart acceleration, but reflection imparts twice as much as absorption; thus, the most efficient sail is a reflective one. Like other propulsion methods, light sails are limited in their performance by the thermal properties and strength of materials, as well as by our limited ability to manufacture anything that consists of a polished, extremely thin metal film. However, a good deal of work relevant to light sails has already been done.
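The factor of two between reflection and absorption follows from photon momentum, p = E/c: a reflected photon reverses its momentum, transferring twice as much to the sail. This sketch computes the thrust a sail would feel from a given beam power; the one-megawatt laser is a hypothetical figure, not one from the article.

```python
# Photon momentum is p = E / c, so a beam of power P exerts
# force P / c on a perfectly absorbing sail and 2 * P / c on a
# perfectly reflecting one. The laser power is hypothetical.

C = 2.998e8  # speed of light, m/s

def sail_thrust_newtons(beam_power_watts, reflective=True):
    factor = 2.0 if reflective else 1.0
    return factor * beam_power_watts / C

# A 1-megawatt beam yields only millinewtons of thrust, which is
# why sails must be enormous in area and extremely light.
print(sail_thrust_newtons(1e6, reflective=True))   # ~6.7e-3 N
print(sail_thrust_newtons(1e6, reflective=False))  # ~3.3e-3 N
```

The tiny numbers make the trade-off clear: the thrust is feeble, but it costs no onboard propellant and can be applied continuously for years.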

The Department of Defense has developed high-powered lasers and precision-pointing capability as part of its research into ballistic-missile defenses and possible anti-satellite weaponry. Closer to home, the US National Oceanic and Atmospheric Administration announced it is planning to launch, within four years, a spacecraft powered by a light sail. NASA is now evaluating plans to develop laser light sails as a possible low-cost alternative to conventional rockets. In light sails we see a possible glimpse of the future: inexpensive access to the remote solar system and beyond.

In time they could make travel to distant stars a reality. Now that we have seen how close to star traveling we really are, let’s board the perfect spacecraft and let our imagination become reality. Everyone loads into the new and improved star-traveling vehicle. Buckle up; it is going to be a bumpy ride. We travel about two light-years in what seems like an extremely quick trip, and our craft lands on a planet in a star system neighboring our own. As a team we start to explore and record the data we find on this new planet.

After our explorations, we head back to Earth. When we return, we find that everyone has grown several years older and technology has exploded to mind-boggling heights. Another space race has begun, but this time it is to colonize planets. Our knowledge and understanding of who we are would be forever altered. The next step would be to start exploring possibilities for intersolar travel: going back and forth between Jupiter, Pluto, and the Moon to retrieve energy sources or to visit a friend who is now a Lunar citizen.

Jupiter is converted into a large gas station where spacecraft could stop and refuel from the nearly inexhaustible supply of hydrogen gas that makes up this large planet. Our planet could start to mine the asteroid belt for old energy sources and find new ones. The biggest change for our world would be social standards. How will we treat people when they introduce themselves as citizens of Pluto? Picture all of the new art forms and sporting events that could take shape in zero-gravity conditions. Would our society expand and have states on distant moons of Jupiter?

Students in school on Venus could look out their windows when they are tired of reading Antigone and see the outline of the Earth instead of a boring playground. When the technology is ready, our world as we know it will be turned completely upside down as we begin thinking about colonizing planets and other solar systems dozens of light-years away. Space enthusiasts look to the day when ordinary people, as well as professional astronauts and members of Congress, can leave Earth behind and head for a space resort, or maybe a base on the Moon or Mars.

The Space Transportation Association, an industry lobbying group, recently created a division devoted to promoting space tourism on its web page, which was last updated on August 12, 1999. The group sees space travel as a viable way to spur economic development beyond Earth. Just imagine: someday we may be able to leave Earth and head to a planet that can support human life and offers new energy sources for faster and more efficient ways of doing the simple tasks of life.

Our imaginary trip will soon become a reality for future generations, even though it is still a science-fiction goal to us. Serious investigators continue to look for ways to turn each of these concepts into a reality. If one of these energy sources works, it will radically change our ideas about the universe, and space might no longer be the final frontier. Instead of getting the Millennium blues, everyone should take a look at NASA’s enthusiasm and jump on the new space-race bandwagon.

What is cosmogony?

Cosmogony can be defined as the study of the physical universe in terms of its origin in time and space. In other words, cosmogony is the study of the universe and its origins. The origin and nature of the universe have been among the most debated topics throughout history. Both the scientific and theological communities have yet to find common ground on how the universe came into being and whether it was an act of God or merely a spontaneous and random phenomenon. New discoveries in the scientific world provide new viewpoints on the creation of the universe and its relevance to a supremely intelligent Creator.

Due to mankind’s constantly changing perspective of the world through scientific means, the argument on the origin of the universe is also forced to progress and develop. Through analysis of the works of Thomas Aquinas, David Hume, and John Haught, the development of the theory of the originating cause of the universe over the course of history can be easily identified. A very early interpretation of the origin of the universe and the existence of a Creator can be found in Thomas Aquinas’ Summa Theologica, in which Aquinas indirectly offers his own views on the origin of the universe.

The term indirectly is used because his arguments are found in his five proofs for the existence of God and are not directly aimed at establishing a viewpoint on the origin of the universe. Aquinas’ first implication about the origin of the universe can be found in his first proof. Aquinas states that in the world some things are in motion. Anything that is in motion must therefore have been placed in motion by something else. This chain of movement, however, cannot go on to infinity, for then there would be no first mover nor any intermediate movers.

Therefore there exists a first unmoved mover that is the cause of all motion (Aquinas, Q. 2, art. 3, I answer). Aquinas, in mentioning the first unmoved mover, is referring to God. Although Aquinas’ first proof can be read in a literal sense, one must analyze it figuratively in order to deduce his viewpoint on cosmogony. The act of the first unmoved mover putting the first object into motion is symbolic of Aquinas’ belief that God created the universe. God, in putting the first object into motion, created the universe.

Consequently, other objects were put into motion within that universe. This is the chain of motion discussed in Aquinas’ proof. In other words, to Aquinas, the existence of our universe in motion is the result of an act of God, the creator of the universe. Several observations can be made in examining Aquinas’ viewpoint on cosmogony. First of all, the argument takes a very linear path. The proof is too simple for so large a task as proving the existence of God, and it does not take into account complex ideas that render the proof erroneous.

For example, it is common knowledge today that all things are made of atoms and that all atoms are in constant motion; therefore, nothing in existence is truly at rest. Another problem with Aquinas’ viewpoint is that it does not consider the possibility that motion, and not rest, is the natural order of things. For if everything is in motion, would it not make more sense to declare motion the natural order? (Hume, VIII. 4) Although this is a seemingly dysfunctional argument on Aquinas’ part, one must take into account the time period in which the proof was constructed.

Aquinas lived and wrote in the 13th century, before the existence of atomic science and other scientific theories. In this, one can easily see how the lack of science and other future knowledge contributed to a very primitive insight into cosmogony. Furthermore, with the development of worldly knowledge, the argument on the originating cause of the universe is also forced to develop in order to accommodate such changes. David Hume, for example, in Dialogues and Natural History of Religion, discusses cosmogony in a modern 18th-century light.

In the text, Hume creates three characters each representing a different viewpoint of religious belief. Demea represents the orthodox believer, Cleanthes represents the modern 18th century deist, and Philo represents Hume’s position, the skeptic. By using the three characters, Hume is able to argue all sides of a certain issue, and through the character Philo, is able to voice his own views. Hume employs this method for the discussion of cosmogony as well. Hume voices the opinion of the deist empiricist on the origin of the universe through Cleanthes.

“The order and arrangement of nature, the curious adjustment of final causes, the plain use and intention of every part and organ; all these bespeak in the clearest language an intelligent cause or author” (Hume, IV. 7). For the 18th-century deist, the order of nature, the final causes produced in the universe, and the specific purpose of everything in existence are enough evidence to assume an intelligent being created the universe. Consider, for example, the way the food chain keeps all of nature’s beings in balance, or the way every organ in our bodies has a specific and purposeful use.

These accommodations could not possibly be a coincidence or accident. On the contrary, everything works out because the Creator meant it to work out. Cleanthes views the universe as a well-oiled machine built by God with all the intentions present in nature. Hume/Philo, however, is reluctant to put any fine point on the origins of the universe. The discoveries by microscopes, as they open a new universe in miniature, are still objections, according to you (Cleanthes); arguments according to me.

The farther we push our researches of this kind, we are still led to infer the universal cause of All to be vastly different from mankind, or from any object of human experience and observation. (Hume, V. 4) In this passage Hume displays his own viewpoint that mankind cannot comprehend the power by which this universe was created. He neither denies nor advocates the existence of an original being. Instead, he takes the agnostic position that all we are capable of learning only leads us to more questions, and that through human experience it is impossible to comprehend the true divine power.

The agnostic approach taken by Hume is characteristic of the 18th-century Enlightenment. In contrast to Aquinas, Hume advocates an empiricist method in which all knowledge must be traced back to an original sense perception. The employment of the empiricist principle is the prime reason we cannot know anything about God or the creation of the universe. The acknowledgement of different religious viewpoints, the establishment of the agnostic position, and the use of the empiricist principle are new ideas used in the argument over the origin of the universe.

The 18th-century Enlightenment values are highly evident in Hume’s text. It is obvious how the 13th-century argument presented by Aquinas has changed in order to accommodate the new viewpoints available in the 18th century. Through the analysis of Hume’s work, put in comparison with earlier views, the development of the argument over the origin of the universe is easily identifiable. John F. Haught, in Science and Religion: From Conflict to Conversation, further develops the cosmogonical argument. In the text, Haught discusses at great length the relationship between the scientific and theological communities.

Similarly to David Hume’s dialogue approach, Haught employs four different viewpoints in which science and religion can be related. These can be identified as Conflict, Contrast, Contact, and Confirmation: Conflict, the conviction that science and religion are fundamentally irreconcilable; Contrast, the claim that there can be no genuine conflict since religion and science are each responding to radically different questions; Contact, an approach that looks for dialogue, interaction, and possible consonance between science and religion; and Confirmation, the ways in which religion supports and nourishes the entire scientific enterprise.

(Haught, p. 9) Employing these four viewpoints, Haught discusses our current 20th-century views on cosmogony. Perhaps the largest part of Haught’s argument comes from the Big Bang theory. The big bang is hypothesized to be the cosmic explosion that marked the origin of the universe and the beginning of time. Haught acknowledges the big bang as a possible cause of the universe and moves even further, stating that the big bang would justify the biblical idea of divine creation as depicted in Genesis (Haught, p. 101). However, similarly to Hume, Haught also acknowledges the possibility that the universe may not have come into existence at all.

He states, “Perhaps the universe always was and always will be” (Haught, p. 101). This point of view would seriously challenge large portions of Christian doctrine. Haught employs the four relations in order to clarify and mediate between the two extreme views of cosmogony. The Conflict argument states that it is not at all self-evident that just because the universe had a beginning it also had to have a creator… the cosmos may have had a beginning, but it could have burst into existence spontaneously, without any cause (Haught, p. 6).

Haught brings up the possibility that nothing existed prior to the big bang. The idea of a spontaneous explosion creating the universe is not characteristic of either Aquinas’ or Hume’s era. Furthermore, Haught’s explanation puts the purpose of our existence into question. If the universe is the product of a Creator, then we exist for the purpose of carrying out the Creator’s expectations. This is similar to how a clockmaker puts every single gear and spring into a specific position in order for the clock to run.

However, if we are merely the result of a random cosmic explosion, then we are all products of a gigantic cosmic accident. Haught concludes the Conflict position by stating that although the big bang theory seems to smooth over religious/scientific conflicts, the constantly changing nature of science discredits the validity of the relation (Haught, p. 109). Again it is obvious that 20th-century science and observation have contributed to the development of the cosmogonical argument. Haught, in demonstrating the Contrast relationship, brings up the idea that big bang physics provides no new ammunition for theology.

(Haught, p. 109) He goes on further to say that creation is not about chronological beginnings so much as it is about the world’s being grounded continuously in the graciousness of God (Haught, p. 111). Haught discusses the idea that the big bang actually provides no basis for a theological proof and has nothing to do with creation itself. Instead, we exist in a universe that is solely dependent on God, and above the importance of creation itself we should show gratitude for our existence.

Without the knowledge of the big bang or other scientific evidence, this idea of the nature of the universe could not have been conceived. Thus we can say that Haught’s Contrast relationship is a product of 20th-century thinking and that it pushes the argument of cosmogony further along in its development. Haught’s Contact relationship, however, differs slightly from the Contrast and Conflict relationships. The Contact relationship states that although we do not wish to base our faith directly on the scientific ideas, our reserve does not mean that the big bang cosmology is theologically irrelevant.

(Haught, p. 114) Haught states here that the big bang, although not the sole aspect of creation, is still a large piece of the cosmogonical puzzle. He also brings up the idea that, according to scientists, the big bang is not over and done with; it is still happening (Haught, p. 117). It is the idea that the universe is in constant creation by God, and that although the big bang may have been the beginning, it cannot be defined as creation itself. This is yet another demonstration of a scientific bullet in a theological gun.

Once again, this development of the cosmogonical argument accurately reflects the time period in which it was conceived. Thomas Aquinas, David Hume, and John Haught all possess their own ideas and beliefs on the origination of the universe. Their arguments reflect the knowledge and logic of each person’s era. The cosmogonical argument is constantly in development as the world changes in terms of the knowledge at hand. With Aquinas we see a linear and logical argument, with an absence of scientific foundation. Hume develops three different arguments with the empiricist principle at hand.

Haught, similarly to Hume, uses different viewpoints in order to convey his opinions on the originating cause of the universe. He incorporates the big bang theory with the theological argument of Genesis. As history progresses, our knowledge of the world progresses, and thus our views on cosmogony progress. This development of the cosmogonical argument can be easily traced through the works of Aquinas, Hume, and Haught. Undoubtedly, new discoveries in our near future will lead us to new insights on the origin of the universe.

The International Space Station

The International Space Station is the doorway to the future of mankind and the world as we know it. The scientific and medical discoveries made on the station could generate billions of dollars annually. A plan like this, arranged to benefit the whole world economy, should sound like a good idea to everyone, but some believe that the ISS is too risky, too ineffective, or too costly to create. Whether or not the space station is worth the money, time, and effort, one thing is clear: everyone is interested in this virtual floating laboratory and in what assets or liabilities it will bring.

The future of scientific experimentation and exploration may be located not on Earth but on the man-made island called the International Space Station. Of all the factors that go into building a space station, construction of the massive object is the most tedious. During the building of the ISS, tensions have run high several times when deadlines were missed or funds were not available. This space station is the most expansive mission the world has ever encountered. The International Space Station will be a fifteen-country mission.

When finished, it will boast over an acre of solar panels for heating and energy, have a volume roughly equal to two jumbo jets, and command four times the electrical power of the Russian space station, Mir. It will take approximately forty-five flights over the next five years to assemble the one hundred pieces of the station while it circles in Earth orbit (Goldin 11). This floating station, the size of a large football stadium and traveling at over 17,500 miles per hour around the Earth, will have a minimum life expectancy of only ten years, although scientists hope for a much longer one.
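The quoted speed can be checked against the circular-orbit formula v = √(GM/r). The 400-kilometer altitude below is an assumed, typical low-Earth-orbit figure, not one taken from the article; the physical constants are standard values.

```python
import math

# Circular orbital speed: v = sqrt(G * M / r).
# The 400 km altitude is an assumed, ISS-like figure.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def orbital_speed_mph(altitude_m):
    r = R_EARTH + altitude_m
    v_ms = math.sqrt(G * M_EARTH / r)
    return v_ms * 2.23694  # convert m/s to mph

# Comes out a little over 17,000 mph, consistent with the
# commonly quoted 17,500 mph figure for low Earth orbit.
print(round(orbital_speed_mph(400e3)))
```

At that speed the station circles the Earth in roughly an hour and a half, which is why it can pass overhead several times a day.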

The station is so large that it will sometimes be visible to the naked eye at night (Chang 12). Many people agree with the idea of some sort of space laboratory but wonder why it has to cost so many tax dollars. Some estimates suggest that the station’s cost has been underestimated by billions of dollars. Late last year Boeing beat out several other competitors for the prestigious position of main contractor, and NASA agreed to sign a 5.6 billion dollar contract with Boeing to build many of the essential parts of the space station. Russia is also placing its trust in this airplane superpower.

They signed a 180 million dollar contract to build the Functional Cargo Block, the unit that will provide power to stabilize the station (Bizony 87). The International Space Station may provide many scientific discoveries, but everyone will pay for it. This project will become the most expensive project in space since the 1969 Apollo 11 mission to the Moon. The total estimated cost will be over twenty billion dollars (8). On the International Space Station there will be a large variety of experiments, ranging from improvements for industry to medical advances.

The largest portion of time will be devoted to scientific experimentation and discovery. The ISS will create advances that will help scientists better understand the mysteries of the physical, chemical, and biological world. Without gravity, they may conceive the technological discoveries that will boost all economies (Goldin 11). One thing the astronauts will use in their pursuit of knowledge is remote telescience, an advanced technology that allows scientists on the ground to monitor the progress of experiments on the station.

This will keep people on Earth up to date on the data collection occurring in space. Telescience will use interactive data and video links to make the connection as realistic as possible (Science Facilities 7). The populace sometimes asks what the station will do scientifically. The International Space Station will try to answer questions that have bothered deep thinkers for years. The effect of weightlessness on living things, the mental and physical effects on humans in space, and the growth of better materials in space that will create better products on Earth will all be explored in hopes of becoming better understood.

Hopefully, scientists will be able to answer these questions and many more on the International Space Station (Chang 12). NASA has confirmed that microgravity, the almost weightless condition of space, is one of the largest factors in the experiments that will occur aboard the International Space Station. The effects of gravity and microgravity on animals, plants, cells, and microorganisms will be studied on the station. Artificial gravity can be adjusted from 0.01 G, almost entirely weightless, to 2 G, twice the Earth’s gravity, to fit the experiment (Science Facilities 1).

Gravitational biology is a major research topic dealing with microgravity. It is the study of gravity’s influence on living things such as plants and animals. This will help astronauts conclude definitively what will happen to humans on long trips to and stays on Mars (6). The Optical Window is a special window through which cameras, sensors, and other devices will monitor natural events. The window will be able to track such disasters as oil spills, hurricanes, and forest fires (3). The ISS is our best effort to date to provide a laboratory environment where combustion can be fully understood.

Combustion research will help us better understand air pollution, global environmental heating, propulsion, and hazardous-waste incineration (Industrial Processes 3). The new experiments on combustion could save the United States millions of dollars annually, because combustion controls the heating of homes, the powering of cars, and the production of a large range of synthetic materials (4). Combustion free of gravity’s influence cannot take place on Earth, so it will be studied in space. On the ground, gravity feeds flames with oxygen and carries away heat.

The space station’s combustion facility will do research on the gas, liquid, and solid fuels that already yield eighty-five percent of the world’s energy production (Science Facilities 4). Combustion is the release of chemical energy. Combustion research on the station will lead to less pollution in industry and to the production of advanced materials, which will have very large payoffs for the economy (2). Since the computer age began, microchips have become smaller and more efficient every year. On the International Space Station, the multinational crew will spend half of its time working on science.

Many experiments will deal with improving the microchip and the silicon that makes it up (DiChristina 77). Finding cures for many medical problems is one hope for the space station. One of the most important missions of the station will be to search for cures to earthly diseases. The International Space Station will provide information that might lead to cures for AIDS, cancer, diabetes, emphysema, and osteoporosis (Lawler 50). Scientists will find this mission an ideal spot to study human biology without the restraints of gravity.

A cure for cancer might be found in space. In orbit, cancer cells grow in solid clumps because gravity does not flatten them. Therefore the space station will be the best setting yet to search for treatments, because the cells can be studied three-dimensionally (Bizony 120). Some think that protein crystals are the key to many cures; they are associated with every disease on Earth, and if they can be understood, deadly diseases may become curable (DiChristina 77). Aging has always been a large concern for many people on Earth. Soon that may not be as large a problem.

Research from previous shuttle missions and from Mir has shown that some of the processes that occur in a person growing older are the same processes that affect astronauts on a mission in space. Problems such as weakening of the heart, muscles, and bones, disturbed sleep patterns, an abnormal immune system, and problems with balance affect both groups in the same ways. Therefore, some aspects of aging may be treatable if scientists can find a connection between the two (Our health 1). NASA is trying to learn more about the effects of weightlessness on its astronauts during long missions.

They plan to use the International Space Station as one means to do so. On the space station, scientists will discover ways to predict the effects of weightlessness on travelers to Mars (7). It has already been learned that living in space for any extended period of time is not good for a person’s health. Scientists plan to explore why, and they will study the effects of prolonged weightlessness on the human body and mind (Bizony 113). Developers working for NASA are creating some of the most technologically advanced machines ever made, specially designed for experimentation on the International Space Station.

It would be very hazardous and impractical for NASA to return an ill or injured member of the station to the United States. Therefore, NASA is improving computer systems that will be able to diagnose and help treat any injury that may occur on the station. These machines, along with medically trained astronauts, will leave very little need for return missions to Earth due to health considerations (Our health 3). To deliver the best medicine possible, the space station will be equipped with virtual reality and medical monitoring systems like those found in hospitals.

Cyber surgery, a type of virtual reality, will be used in extreme cases of medical emergency (4). The human body will be thoroughly studied in space to determine whether there are any long-term effects of space travel on the body. The experiments will be broken into three basic categories: bones, tissues, and cells. The space station will have a special laboratory called the Bioreactor, which will reproduce human tissues and cells. The Bioreactor will be able to create cancerous cell tissue, called tumors, from a few individual cancer cells.

Some think that while in space, the space station will find healthier alternatives for curing cancer than very harmful chemotherapy. Scientists plan to grow healthy tissue outside the body in a space laboratory and then transfer the tissue into a body with damaged tissue. This is possible because in space cells are not crushed by the pull of gravity; they grow three-dimensionally, like those that occur naturally in the body (Our health 5). Besides cures for cancer, the fight against bone loss will be considered.

Bone loss is one of the major reasons astronauts cannot stay in space for extended periods of time. Once bone loss can be controlled, NASA will be capable of bringing all men and women back in perfect health (Chang 13). I view the space shuttle program as a stepping stone to the ultimate program that will guarantee prolonged efforts in microgravity… Ultimately our hope is to be able to crystallize proteins in microgravity, conduct all x-ray data collection experiments in space, and transmit the data to Earth for processing.

This can only be done in a space station (Our health 9). One of the reasons the International Space Station is so important is that it offers the chance for experiments to be done at all times during the year instead of making scientists wait for weeks, months, or even years to get their experiments tested. The space station will mean that industry will become more involved in space research, which will benefit both areas into the future (6). On the space station, real industrial improvement will be a reality.

Semiconductors of record quality have already been grown. They have all been created as thin films by the Wakeshield Facility, which creates a better vacuum than any ever created on Earth. This leads to improvements in the computer industry. The improvements could mean billions of dollars for industry (Industrial processes 7). Oil refinement provides the United States with over 90 billion dollars of revenue annually. Zeolites, which are used in the refinement of petroleum products, are being enhanced dramatically in space.

This could create a 400 million dollar increase in the United States economy (7). All industry will be affected because space does not have the limitations of Earth. Due to microgravity, crystal production is not hindered. Thus, scientists can concentrate on the phenomena of solidification, crystal growth, fluid flow, and combustion as they never have before, which translates into large profits for all countries involved in the construction and care of the International Space Station (8).

Microgravity is unique. It will be the factor that affects and inspires the production of many products that have not even been dreamed of yet by inventors and scientists. By delving into the practical uses of microgravity, it is easy to see that many new alloys, ceramics, glasses, polymers, and semiconductors can be improved, redesigned, or created (Industrial Processes 1). When viewing the whole spectrum of the space station, many scientists have agreed that microgravity is the most important factor.

It could create treatments for previously incurable diseases, better materials and alloys for industry, and more efficient forms of fuel and petroleum products. Microgravity is an important factor because, compared to the gravity on Earth, space has one-millionth the force pressing down on objects. This accounts for the willingness of people to spend billions of dollars on experiments that can only be performed in one place, the International Space Station (DiChristina 77).

Besides increasing the worth of the economy, the International Space Station will also help earth science research. The Optical Window will help study the daily buildup and positions of clouds. This may help predict droughts and therefore protect vegetation. Also available will be key information about our atmosphere. To test in the greatest detail some of the fundamental laws that govern our physical world, it is often necessary to go beyond the surface of the Earth…

Thus, NASA and its academic, industrial, and international partners will use the microgravity environment of the ISS to conduct fundamental scientific studies that have as their goal to contribute to the world's supply of knowledge, which will one day enable the technology of future generations (Fundamental Knowledge 6). Many people consider the International Space Station an excellent chance for improvement of medical, technological, and industrial processes, but there are a few skeptics who are unsure of any benefit at all.

These people have questioned the purposes and objectives of the scientists who believe it is a service. Many state that there is no substantial evidence to prove that the International Space Station will fulfill any of the aspirations people have pressed upon it. Others point to the fact that it may only be able to remain in space for ten years, which is not long enough to do anything that is very conclusive. Still others contend that the International Space Station is a small price to pay for the usefulness it will provide in the future. The mixed reviews have left nothing but confusion in their path.

No matter what anyone says or thinks, only time will tell. Since the Cold War, Russia has fallen from the status of superpower, and its space program has become almost non-existent. Since Russia first joined the International Space Station team, the Russians have been continually behind schedule but always promising the job is almost finished. They have frequently run out of funds, which had to be given to them by the United States and other partners, and several times they threatened to drop out of the project altogether. Russia is only the United States' first problem.

Although it is called the International Space Station and is considered a multinational project, it is obvious that United States dollars are supporting most of the sagging project (89). The cost is one of the most debated topics on the subject. Once estimated as low as eight to fifteen billion dollars, the cost has since skyrocketed to over 96 billion dollars by the end of the project in 2006. The International Space Station is already over a decade late, and the price is soaring billions of dollars over the original price tag.

Some are beginning to wonder if the International Space Station is a floating white elephant. Boeing admitted it has underestimated the potential cost by as much as 800 million dollars just for its contract alone (88). On the other hand, some people believe it is worth the price. For nine dollars less than the average American pays for snack or junk food yearly, NASA can build the International Space Station which could become a scientific pinnacle from which the benefits would be felt for years (Abatemarco 9).

Some critics wonder whether any scientific progress that the space station makes will create a high enough profit to make up for the gigantic cost of the station. Some scientists are concerned that the space station will have no redeeming qualities whatsoever. Most of the functions of the space station have disappeared. NASA is mortgaging its future for the next twenty years (Kluger 90). Some even disagree on whether microgravity is a valid determinant. Not everyone thinks that research in microgravity will be beneficial; some say that it is one of the least important factors you can have (DiChristina 76).

In the end, it is debatable whether the International Space Station will give us enough scientific information to make the planning, cost, and maintenance worthwhile. Protein crystals, one of the main topics to be researched, are fragile, and a single bump, which happens often in space, can ruin the project (Kluger 91). There is no information as of yet that will tell anyone the outcome of the experiments on the International Space Station. It is a complex machine that may create fantastic results or become a large waste of time and effort, but until the world tries, it will never know, which might be the most disappointing thing of all.

For now, all anyone has is hope for a brighter future, and the chance that the International Space Station may bring them a step closer to that reality. The orbiting laboratory serves as a symbol of our future. A future that embodies the dreams of our children and that promises untold discoveries for the next millennium. One that fulfills our innate human nature to explore. And one that benefits all people of all nations. (Goldin 11) The International Space Station is the beginning. It is the beginning of a world that is working towards a better understanding of everything around it.

Black Holes Essay

Every day we look into the night sky, wondering and dreaming what lies beyond our galaxy. Within our galaxy alone, there are millions upon millions of stars. This may be why it interests us to learn about all that we cannot see. Humans have known of the existence of stars since they have had eyes, and have seen them as white glowing specks in the sky. The mystery lies not in the white glowing specks we see, but in the things we cannot see in the night sky, such as black holes. Before I begin to speak about black holes, I will have to explain what the white glowing specks in the sky are.

Without a star, a black hole could not be formed. In the beginning of a star's life, hydrogen is a major part of its development. Stars form from the condensation of clouds of gas that contain hydrogen. The atoms of the cloud are pulled together by gravity. The energy produced when the cloud first collapses is so great that a nuclear reaction occurs. The gases within the star start to burn continuously. Hydrogen is usually the first gas consumed in a star, and then other elements such as carbon, oxygen, and helium are consumed.

This chain reaction of explosions fuels the star for millions or billions of years, depending on the amount of gas there is. Stars are born and reborn from the explosion of a previous star. The particles and helium are brought together the same way the last star was born. Throughout its life, a star manages to avoid collapsing. The inward gravitational pull of the star's core has to balance the outward pressure of its gases. When this equality is broken, the star can go into several different stages.

Some stars that are at least thirty times larger than our sun can form black holes and other kinds of stars. Stars explode at the end of their lifetimes; sometimes when they explode, the stars leave a remnant of gases and dust behind. What the gases come together to form depends on the size of the remnant. If the remnant is less than 1.4 solar masses, it will become a white dwarf, a hot dead star that is not bright enough to shine. If the remnant is roughly 1.4 solar masses, it will collapse. The protons and electrons will be squashed together, and their elementary particles will recombine to form neutrons.

What results from this reaction is called a neutron star. The neutrons in the neutron star are very close together, so close that the pressure prevents the neutron star from collapsing onto itself. If the remnant of this giant exploding star is larger than about three solar masses, it becomes a black hole. A black hole is one of the last options that a star may take. In the 18th century, scientists started to research the aftereffects of a large star, such as a supernova, exploding. What happens to the gas and dust left behind after such a big star dies?
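The thresholds in the last two paragraphs amount to a simple classification by remnant mass. A minimal sketch (the function name and the exact cutoffs of 1.4 and 3 solar masses are illustrative; real limits depend on composition and rotation):

```python
def classify_remnant(solar_masses):
    """Rough fate of a stellar remnant by mass, using the
    approximate thresholds quoted in the text."""
    if solar_masses < 1.4:
        return "white dwarf"    # electron pressure halts the collapse
    elif solar_masses < 3.0:
        return "neutron star"   # neutron pressure halts the collapse
    else:
        return "black hole"     # nothing can halt the collapse

print(classify_remnant(1.0))   # white dwarf
print(classify_remnant(2.0))   # neutron star
print(classify_remnant(10.0))  # black hole
```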

The idea of a mass concentration so dense that even light would be trapped goes all the way back to Laplace in the 18th century. The first scientists to take an in-depth look at black holes and the collapsing of stars were the professor Robert Oppenheimer and his student Hartland Snyder, in 1939. They derived the basics of a black hole from Einstein's theory of relativity: if the speed of light is the greatest speed any object can attain, then nothing could escape a black hole once in its grasp.

These researchers showed that when a sufficiently massive star runs out of fuel, it is unable to support itself against its own gravitational pull, and it should collapse into a black hole. In the general theory of relativity, gravity is a manifestation of the curvature of space-time. Einstein's general theory of relativity showed that light, though it does not react to gravity in the same way as ordinary matter, is nevertheless affected by strong gravitational fields. In fact, light itself cannot escape from inside this region. (Internet Public Television family science show)

Massive objects distort space and time, so that the usual rules of geometry don't apply anymore. Near a black hole, this distortion of space is extremely severe and causes black holes to have some very strange properties. A black hole is a region of space that has so much mass concentrated in it that there is no way for a nearby object to escape its gravitational pull. After a black hole is created, its gravitational force continues to pull in space debris and other types of dust, adding to the mass of the core and making the hole stronger and more powerful.

Most black holes are spinning. The spinning of a black hole allows more debris to become part of its ring, which is called the event horizon. The debris spins within the ring until it becomes part of the center of the black hole. The event horizon is also known as the boundary; it is the point where the black hole's gravitational pull begins. Once you cross the event horizon, there is no turning back. The only way something could escape a black hole's event horizon once it had entered would be to exceed the escape velocity.
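For a non-rotating black hole, the boundary described above sits at the Schwarzschild radius, r_s = 2GM/c². As a hedged illustration (the constants and the 10-solar-mass example are my own, not from the essay):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon of a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

# A 10-solar-mass black hole has a horizon radius of only ~30 km.
r = schwarzschild_radius(10 * M_SUN)
print(f"{r / 1000:.1f} km")   # roughly 29.5 km
```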

The escape velocity would mean moving faster than the speed of light. Since moving faster than the speed of light is impossible, so is escaping a black hole's gravitational pull. So in order for the black hole to swallow something up, that thing has to pass the event horizon. If someone were to fall into the event horizon, they would begin spinning around the center of the black hole at nearly the speed of light. As the person gets closer to the center, the singularity effect takes place: once you are inside the event horizon, the gravitational pull toward the center of the black hole is greater at your feet than at your head.

This singularity effect will stretch a person out to infinite thinness until they are torn apart. The time it takes a person to die depends on the size of the black hole. A smaller black hole means that its singularity is not far from the horizon, thus killing you faster. A larger black hole will stretch you more slowly, giving you time to look around the inside of the black hole. If one were able to look around inside the event horizon, images would be distorted. And since light can go into a black hole, you could see outside images fine. But light won't be able to bounce off you and go back out, so no one would be able to see you.
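The head-to-foot stretching described here can be estimated with the Newtonian tidal formula Δa ≈ 2GMh/r³. Evaluated at the horizon r_s = 2GM/c², the tidal pull scales as 1/M², which is why the essay's point holds: a smaller hole kills faster. A sketch under those assumptions (all numbers are illustrative):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg

def tidal_accel_at_horizon(mass_kg, height_m=2.0):
    """Newtonian head-to-foot tidal acceleration (m/s^2) for a
    person of the given height at the event horizon."""
    r_s = 2 * G * mass_kg / C**2          # horizon radius
    return 2 * G * mass_kg * height_m / r_s**3

stellar = tidal_accel_at_horizon(10 * M_SUN)    # small, stellar-mass hole
super_m = tidal_accel_at_horizon(4e6 * M_SUN)   # supermassive hole
print(f"{stellar:.1e} m/s^2")   # enormous: lethal well outside the horizon
print(f"{super_m:.1e} m/s^2")   # tiny: you could cross the horizon intact
```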

Even though it is impossible for someone to experience this, scientists speculate that this is what would happen. Basically, you would be in a place where time does not exist and all of Einstein's laws would fail. Even though we cannot see black holes, scientists know they really exist. Scientists have not actually observed black holes directly, but there are some speculations as to what they think black holes are. If there is a large quantity of mass in a small area, there is a good chance it is a black hole. A black hole emits radiation, and the energy to emit this radiation comes from the black hole's mass, similarly to a star.

Scientists are aware of this radiation field, so they use technological advancements for measuring such things as radiation. The core of the black hole appears purely black on all readings, even through the use of radiation detection devices. Another way scientists speculate that black holes exist is by observing other stars. Stars in the sky revolve around other stars and sometimes planets, just as our planets revolve around our sun. Our sun's gravitational force keeps our planets in their revolutions. Now imagine our sun was a black hole. The black hole has the same characteristics as a star; you just can't see it.

So when scientists see a star revolving but cannot see what is causing its revolution, this may be a sign that the star is revolving around a black hole. Just recently, a major discovery was made with the help of a device known as the Hubble Telescope. This telescope has recently found what many astronomers believe to be a black hole, after being focused on a star orbiting empty space. Several pictures show various radiation fluctuations and other diverse readings from the area in which the black hole is suspected to be.

The Moon – the only natural satellite of Earth

The Moon is the only natural satellite of Earth: orbit: 384,400 km from Earth; diameter: 3476 km; mass: 7.35e22 kg. Called Luna by the Romans, Selene and Artemis by the Greeks, and many other names in other mythologies. The Moon, of course, has been known since prehistoric times. It is the second brightest object in the sky after the Sun. As the Moon orbits around the Earth once per month, the angle between the Earth, the Moon and the Sun changes; we see this as the cycle of the Moon’s phases.

The time between successive new moons is 29.5 days (709 hours), slightly different from the Moon’s orbital period (measured against the stars) since the Earth moves a significant distance in its orbit around the Sun in that time. Due to its size and composition, the Moon is sometimes classified as a terrestrial “planet” along with Mercury, Venus, Earth and Mars. The Moon was first visited by the Soviet spacecraft Luna 2 in 1959. It is the only extraterrestrial body to have been visited by humans. The first landing was on July 20, 1969 (do you remember where you were?); the last was in December 1972.
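The two month lengths mentioned above are linked by simple arithmetic: because the Earth advances along its own orbit, the Moon must travel a little more than one full circuit to line up with the Sun again. A minimal sketch (the 27.32-day sidereal month is a standard value assumed here, not stated in the text):

```python
# Synodic month (new moon to new moon) from the sidereal month
# (Moon's period against the stars) and the length of the year:
#   1/synodic = 1/sidereal - 1/year
SIDEREAL_MONTH = 27.32  # days (standard value, assumed)
YEAR = 365.25           # days

synodic = 1 / (1 / SIDEREAL_MONTH - 1 / YEAR)
print(f"{synodic:.2f} days")        # about 29.53 days
print(f"{synodic * 24:.0f} hours")  # about 709 hours, as quoted
```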

The Moon is also the only body from which samples have been returned to Earth. In the summer of 1994, the Moon was very extensively mapped by the little spacecraft Clementine and again in 1999 by Lunar Prospector. The gravitational forces between the Earth and the Moon cause some interesting effects. The most obvious is the tides. The Moon’s gravitational attraction is stronger on the side of the Earth nearest to the Moon and weaker on the opposite side. Since the Earth, and particularly the oceans, is not perfectly rigid it is stretched out along the line toward the Moon. From our perspective on the Earth’s surface we see two small bulges, one in the direction of the Moon and one directly opposite.

The effect is much stronger in the ocean water than in the solid crust, so the water bulges are higher. And because the Earth rotates much faster than the Moon moves in its orbit, the bulges move around the Earth about once a day, giving two high tides per day. But the Earth is not completely fluid, either. The Earth’s rotation carries the bulges slightly ahead of the point directly beneath the Moon. This means that the force between the Earth and the Moon is not exactly along the line between their centers, producing a torque on the Earth and an accelerating force on the Moon.

This causes a net transfer of rotational energy from the Earth to the Moon, slowing down the Earth’s rotation by about 1.5 milliseconds/century and raising the Moon into a higher orbit by about 3.8 centimeters per year. (The opposite effect happens to satellites with unusual orbits such as Phobos and Triton.) The asymmetric nature of this gravitational interaction is also responsible for the fact that the Moon rotates synchronously, i.e. it is locked in phase with its orbit so that the same side is always facing toward the Earth. Just as the Earth’s rotation is now being slowed by the Moon’s influence, so in the distant past the Moon’s rotation was slowed by the action of the Earth, but in that case the effect was much stronger. When the Moon’s rotation rate was slowed to match its orbital period (such that the bulge always faced toward the Earth) there was no longer an off-center torque on the Moon and a stable situation was achieved.

The same thing has happened to most of the other satellites in the solar system. Eventually, the Earth’s rotation will be slowed to match the Moon’s period, too, as is the case with Pluto and Charon. Actually, the Moon appears to wobble a bit (due to its slightly non-circular orbit) so that a few degrees of the far side can be seen from time to time, but the majority of the far side was completely unknown until the Soviet spacecraft Luna 3 photographed it in 1959. (Note: there is no “dark side” of the Moon; all parts of the Moon get sunlight half the time.

Some uses of the term “dark side” in the past may have referred to the far side as “dark” in the sense of “unknown” (e.g. “darkest Africa”), but even that meaning is no longer valid today!) The Moon has no atmosphere, but evidence from Clementine suggested that there may be water ice in some deep craters near the Moon’s south pole which are permanently shaded. This has now been confirmed by Lunar Prospector.

There is apparently ice at the north pole as well. The cost of future lunar exploration just got a lot cheaper! The Moon’s crust averages 68 km thick and varies from essentially 0 under Mare Crisium to 107 km north of the crater Korolev on the lunar far side. Below the crust is a mantle and probably a small core (roughly 340 km radius and 2% of the Moon’s mass). Unlike the Earth’s mantle, however, the Moon’s is only partially molten. Curiously, the Moon’s center of mass is offset from its geometric center by about 2 km in the direction toward the Earth. Also, the crust is thinner on the near side.

There are two primary types of terrain on the Moon: the heavily cratered and very old highlands and the relatively smooth and younger maria. The maria (which comprise about 16% of the Moon’s surface) are huge impact craters that were later flooded by molten lava. Most of the surface is covered with regolith, a mixture of fine dust and rocky debris produced by meteor impacts. For some unknown reason, the maria are concentrated on the near side. Most of the craters on the near side are named for famous figures in the history of science such as Tycho, Copernicus, and Ptolemaeus.

Features on the far side have more modern references such as Apollo, Gagarin and Korolev (with a distinctly Russian bias since the first images were obtained by Luna 3). In addition to the familiar features on the near side, the Moon also has the huge crater South Pole-Aitken on the far side, which is 2250 km in diameter and 12 km deep, making it the largest impact basin in the solar system, and Orientale on the western limb (as seen from Earth), which is a splendid example of a multi-ring crater. A total of 382 kg of rock samples were returned to the Earth by the Apollo and Luna programs.

These provide most of our detailed knowledge of the Moon. They are particularly valuable in that they can be dated. Even today, 20 years after the last Moon landing, scientists still study these precious samples. Most rocks on the surface of the Moon seem to be between 4.6 and 3 billion years old. This is a fortuitous match with the oldest terrestrial rocks, which are rarely more than 3 billion years old. Thus the Moon provides evidence about the early history of the Solar System not available on the Earth. Prior to the study of the Apollo samples, there was no consensus about the origin of the Moon.

There were three principal theories: co-accretion which asserted that the Moon and the Earth formed at the same time from the Solar Nebula; fission which asserted that the Moon split off of the Earth; and capture which held that the Moon formed elsewhere and was subsequently captured by the Earth. None of these work very well. But the new and detailed information from the Moon rocks led to the impact theory: that the Earth collided with a very large object (as big as Mars or more) and that the Moon formed from the ejected material. There are still details to be worked out, but the impact theory is now widely accepted.

The Moon has no global magnetic field. But some of its surface rocks exhibit remanent magnetism indicating that there may have been a global magnetic field early in the Moon’s history. With no atmosphere and no magnetic field, the Moon’s surface is exposed directly to the solar wind. Over its 4 billion year lifetime many hydrogen ions from the solar wind have become embedded in the Moon’s regolith. Thus samples of regolith returned by the Apollo missions proved valuable in studies of the solar wind. This lunar hydrogen may also be of use someday as rocket fuel.

Star Formation Essay

Our lives are intimately linked to the stars, but in ways much more down to earth than the romantic views of them. As we all know, our sun is a star and the thermonuclear reactions that are continuously taking place inside it are what provide and sustain life on our planet. What do we get from the sun? We get carbon, oxygen, calcium and iron, courtesy of stars that disappeared billions of years ago (Naeye, 1998). Star formation is a study in contradictions because the formation of a star begins with atoms and molecules floating freely through space that are brought together through gravity to form masses that become stars.

Stars go through three major stages of development in their transformation from infancy to adult stars: a collection of dust and gases, protostar, full-blown star. Pictures of these various stages are mind-boggling in their beauty and bring one to an immense sense of awe at the machinations of the universe. Scientists believe that stars begin as a collection of interstellar dust and gases (Frank, 1996). This mass of dust and gases forms a cloud that begins shrinking and rotating until it eventually develops into what is called a protostar.

Once the protostar reaches sufficient mass, it then begins the process of converting hydrogen to helium through a series of nuclear reactions, or nuclear fusion, until it becomes a full-blown star (Astronomy, 1995). Those protostars that are too small to sustain nuclear fusion die out to become what are known as brown dwarfs. Thanks to an image from the Mt. Palomar observatory, astronomers have obtained the first image of a brown dwarf, named Gliese 229B (or GL229B).

It is a small companion to the red star Gliese 229, which is approximately 19 light-years from Earth in the constellation Lepus. GL229B is too hot and massive to be classified as a planet, yet at the same time it is too small and cool to be able to shine like a typical star; in fact, it is at least 100,000 times dimmer than our own sun and is the faintest object ever discovered orbiting another star. As a star forms, it is this fusion-powered heat and radiation emanating from the core of the star which keeps the star whole (Watery Nurseries, 1997).

If it weren't for this, the star would actually collapse under the stress of its own weight. However, there is a balancing act that takes place within the star between radiation and gravity (which provides fuel for the star) that prevents this and makes it possible for a star to have a life span of billions of years. The big question, though, is how does this whole process get started, and what actually makes it possible for these masses to meld together to form a star, instead of just exploding back into cosmic particles?

What actually happens is that the clouds of gas and dust are drawn into compaction through self-created gravitational collapse. As images from the Hubble Telescope show, these clouds go through continuous implosion to become solid masses. Scientifically speaking, it is logical to assume that this implosion should generate so much heat that the gas and dust expand, rather than come together, and yet this is not the case. The reason why, scientists believe, is due to water molecules that are formed during this process.

It is the addition of these charged molecules, called hydronium, that they believe provide the ingredient necessary to prevent further expansion of the gasses and dust, thereby allowing the continuance of implosion until the star finally forms a solid mass. Hydronium is made up of three hydrogen atoms and one oxygen ion. In theory, it has the ability to transform into water (H2O) plus one independent hydrogen atom, as long as it is able to capture a free- floating electron from somewhere.

It takes hundreds of millions of years for the particles of dust and gas to come together into these gigantic clouds that can span hundreds of light-years in size. The clouds are dominated by their two prime elements of hydrogen and helium, while particles of dust make up about one percent of a cloud's mass. In addition, there are other molecules present that contribute to the molecular structure of the cloud, such as ammonia and various carbon-based compounds. Each cloud contains enough material to create approximately ten thousand new stars.

It takes many millennia for a collapsing gas cloud to fragment into thousands of dense, rotating clumps of gas that will eventually become newborn stars. The cores of these gaseous clumps are continuously compacting more and more as their rotation becomes faster and faster and, over time, the cores become elongated. Some of these elongated cores are hypothesized to eventually become binary and multiple star systems by virtue of the fact that the cloud is stretched out so much.

Over time, stars naturally change. Once the star enters its maturity, a stage where nuclear reactions begin to stabilize, it will spend the majority of its existence there. As they age and enter the late evolution stage, they often swell and become red giants which can evolve into novas, planetary nebulas, or supernovas. By the end of its life, a star will change into a white dwarf, black dwarf, or neutron star depending upon the composition of its original stellar mass.

Thanks to NASA’s Hubble Space Telescope we have gained new insight into how stars might have formed many billions of years ago in the early universe. This picture from the Hubble shows a pair of star clusters, which might be linked through stellar evolution processes. The pair of star clusters is located approximately 166,000 light-years away in the Large Magellanic Cloud (LMC), in the southern constellation Doradus. According to astronomers, the clusters, for being so distinctly separate, are unusually close together.

In the past, observations such as this were restricted to clusters within our own Milky Way galaxy. Because the stars in the Large Magellanic Cloud do not have many heavy elements in their composition, they are considered to be much more primordial than other newly forming stars and, therefore, more like what scientists speculate stars were like in the early universe. There is an ongoing debate among astronomers as to the importance of disks in the formation process.

Many astronomers believe that most of the matter that makes up a star actually starts off inside a disk which spirals inward until it coheres into a star. There have been observations of massive disks as they orbit infant stars, and it is these observations which have led scientists to believe that disk accretion is very important to the process of star formation. The key to understanding star formation is the correlation between young stars and clouds of gas and dust.

Usually the youngest group of stars has large clouds of gas illuminated by the hottest and brightest of the new stars. The old theory of gravity predicts that the combined gravitational attraction of the atoms in a cloud of gas will squeeze the cloud, pulling every atom toward the center. Then we might expect that every cloud would eventually collapse and become a star; however, the heat in the cloud resists collapse. Most clouds do not appear to be gravitationally unstable, but such a cloud colliding with a shock wave can be compressed and disrupted into fragments.

Theoretical calculations show that some of these fragments can become dense enough to collapse and form stars. Astronomers have found a number of giant molecular clouds where stars are forming in a repeating cycle. Both high-mass and low-mass stars form in such a cloud, but when the massive stars form, their intense radiation or eventual supernova explosions push back and compress the surrounding gas. This compression in turn can trigger the formation of more stars, some of which will be massive.

Thus a few massive stars can drive a continuing cycle of star formation in a giant molecular cloud. While low-mass stars do form in such clouds along with massive stars, low-mass stars also form in smaller clouds of gas and dust. Because lower-mass stars have lower luminosities and do not develop quickly into supernova explosions, low-mass stars alone cannot drive a continuing cycle of star formation. A collapsing cloud of gas does not form a single object; because of instabilities, it fragments, producing an association of ten to a thousand stars.

The association drifts apart within a few million years. The Sun probably formed in such a cluster about five billion years ago. Stars are supported by the outward flow of energy generated by nuclear fusion in their interiors. The energy generated keeps each layer of the star hot enough that the gas pressure can support the weight of the layers above. Each layer in the star must be in hydrostatic equilibrium; that is, the inward weight is balanced by outward pressure. Stars are elegant in their simplicity.
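The layer-by-layer balance described above can be written as a single equation. In standard notation, with P the pressure, rho the density, and m(r) the mass interior to radius r:

```latex
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}
```

The left side is the outward pressure gradient across a thin shell; the right side is the weight per unit volume of that shell, so in a stable star the two cancel at every depth.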

Black Holes Paper

Within our galaxy alone, there are millions upon millions of stars. Within our universe, there are millions upon millions of galaxies. Humans have known of the existence of stars for as long as they have had eyes. Although interpretations may have differed on what they were, they were always thought of as white glowing specks in the sky. But the mystery does not lie within what we can see; it lies in what we cannot see. There are billions of stars lighting the darkness of our universe, and the question is what happens when one of these enormous lamps burns out. Among the many speculations, one of the most fascinating is the black hole theory.

Not every star can become a black hole. For instance, the possibility of our Sun becoming a black hole is highly unlikely, simply because it is too small. Only a very large star has the potential to become a black hole. Definitions of black holes are somewhat speculative. Generally, a black hole is an area of super-concentrated mass, so concentrated that no object can escape its gravitational pull. In other words, once you get caught by its gravitational pull, you aren't getting out again. The velocity you need to break away from a gravitational pull is called the "escape velocity". Roughly, Earth's escape velocity is about 25,000 mph (11.2 km/s).
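The figure quoted here follows directly from Newtonian gravity, v = sqrt(2GM/r). A minimal sketch (the constants are standard values; the function name is my own):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Speed needed to escape from the surface of a body, ignoring drag."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Earth: mass ~5.972e24 kg, mean radius ~6.371e6 m
v_earth = escape_velocity(5.972e24, 6.371e6)
print(f"{v_earth / 1000:.1f} km/s")  # ~11.2 km/s, matching the figure above
```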

Earth's mass is nothing compared to the mass of a star that has the potential to become a black hole. A black hole has so much mass in such a small area that its escape velocity is greater than the speed of light. So if we were all living on Earth, and Earth were a black hole, we would need to travel at the speed of light in order to get to the moon. Even though a black hole's gravitational pull is enormous, it does have a boundary. This boundary is called the "event horizon": the surface inside which nothing can escape the black hole's gravitational pull.
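The size of the event horizon follows from setting the escape velocity equal to the speed of light, which gives the Schwarzschild radius r_s = 2GM/c^2. A quick sketch (the constants and the 10-solar-mass example are my assumptions, not figures from the essay):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon for a non-rotating black hole."""
    return 2 * G * mass_kg / C**2

# A 10-solar-mass black hole, typical of a collapsed massive star:
r_s = schwarzschild_radius(10 * M_SUN)
print(f"{r_s / 1000:.1f} km")  # ~29.5 km
```

All of that mass packed inside a sphere a few tens of kilometers across is what makes the escape velocity exceed the speed of light.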

Once you cross the event horizon, there is no turning back. As stated before, the escape velocity of a black hole exceeds the speed of light, and since going faster than the speed of light is impossible, so is escaping a black hole's gravitational pull. Inside the event horizon is where the major speculation begins. Just what happens once you cross the event horizon? Well, once you cross it, you'll be spiraling toward the center at nearly the speed of light. As you get closer to the center, or what scientists call the "singularity", the theory of the spaghetti effect comes into play.

That is, the gravitational pull of the black hole is stronger at your feet than at your head, pulling harder on your feet and stretching you out to a point of infinite thinness. The same kind of force causes the tides in our oceans, hence the name "tidal forces". The time it takes you to experience this effect depends on the size of the black hole. A smaller black hole means that its singularity is not far away, thus killing you quicker. If you could somehow get inside the horizon safely and look around, the images around you would be distorted.
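The claim that a smaller black hole kills you quicker can be checked with the Newtonian tidal formula, delta_a ~ 2GMh/r^3 (the head-to-foot difference in pull over a height h). At the horizon, r = 2GM/c^2, so the tide actually scales as 1/M^2. A sketch (the masses and the 2 m body height are my assumptions):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def tide_at_horizon(mass_kg, height_m=2.0):
    """Head-to-foot acceleration difference (m/s^2) at the event horizon."""
    r_s = 2 * G * mass_kg / C**2
    return 2 * G * mass_kg * height_m / r_s**3

small = tide_at_horizon(10 * M_SUN)    # stellar-mass black hole
large = tide_at_horizon(1e6 * M_SUN)   # supermassive black hole
print(small, large)  # the small hole's tide is vastly stronger
```

Near a stellar-mass hole you would be spaghettified well outside the horizon, while the horizon of a supermassive hole could be crossed without (at first) feeling anything.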

And since light can go into a black hole, you could see outside images fine. But light won't be able to bounce off you and back out, so no one would be able to see you. Although living long enough to reach the singularity is just about impossible, if you could reach it, no one knows what would happen. Basically, you would be in a place where time does not exist and all of Einstein's laws would fail. Evidence that black holes are real does exist. Even though you cannot see a black hole, as light cannot escape it, you can measure how much mass there is in an area.

And if you have a large quantity of mass in a small area, there is a good chance it is a black hole. Black holes do not live forever; like stars, they die. Theories about their deaths are highly speculative. Black hole evaporation seems to be the most popular theory of how black holes die. Black holes emit radiation, and the energy to emit this radiation comes from the black hole's mass, thus shrinking the black hole. Gradually, a black hole wears itself away into nothing. Stephen Hawking presented this idea in the 1970s, and it was a great contribution to physics.
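Hawking's result gives a concrete lifetime for this evaporation, t ~ 5120*pi*G^2*M^3 / (hbar*c^4), assuming nothing falls in to replenish the hole. A sketch (constants are standard values; the formula ignores particle-species corrections):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
HBAR = 1.055e-34    # reduced Planck constant, J s
M_SUN = 1.989e30    # kg
YEAR = 3.156e7      # s

def evaporation_time_years(mass_kg):
    """Approximate Hawking-evaporation lifetime of a black hole."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"{evaporation_time_years(M_SUN):.1e} years")  # ~1e67 years
```

Because the lifetime scales as M^3, a solar-mass hole would outlive the current age of the universe by dozens of orders of magnitude; evaporation matters only for hypothetical very small black holes.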

Space Exploration: From The Past To The Future

Ever since the beginning of time, mankind has been fascinated with wonders of space. Before the mid-1900s, all mankind could do was gaze at the stars from Earth and wonder what it would be like to go into space. Man would look through telescopes and make theories on how the universe worked. During the mid-1900s, mankind finally was able to send a man into space and explore the wonders of space first hand. So why do humans explore space? Well, it is our fascination with the unknown. At first, all mankind did was look up and wonder how things became what they are now.

We started to think that all celestial bodies revolved around the Earth, and that the Earth was the center of the entire universe. Galileo Galilei later disproved this theory. Even with growing knowledge in the field, it was not until 1957 that the first Earth orbiter, the Soviets' Sputnik 1, was sent into space; it was placed in orbit at an altitude of 1,370 miles and weighed 184 pounds. Later that year, the Soviets sent Sputnik 2 into space with a dog named Laika. Laika was the first animal to venture into space.

Then in 1958, the United States successfully sent its very own satellite into space. In 1960, the Soviets launched two dogs into space and successfully returned them to Earth. From this point began the space race, a challenge between the USSR and the United States to see who could land a man on the moon first. In 1961, the first man in space was the Soviet cosmonaut Yuri Gagarin, whose flight lasted 108 minutes before he returned to Earth in Vostok 1. Astronaut Alan Shepard flew the first crewed American sub-orbital spaceflight later that year.

The first true American orbital flight was by John Glenn, who stayed in space for five hours in Mercury 6 in 1962. Then in 1963, the USSR sent the first woman into space: Valentina Tereshkova. The Soviets also carried out the first space-walk, in 1965. In 1968, the National Aeronautics and Space Administration (NASA) tested the first Saturn 5 rocket, which would be used for the Apollo missions. The first manned Apollo mission and the first flight around the moon took place in 1968.

Finally, on July 21, 1969, the United States placed the first man on the moon, winning the space race. The challenge for mankind at present is placing a human on Mars. We have already sent probes to Mars and roamed some of its terrain with the rover known as Sojourner. Sojourner was taken to Mars on NASA's Mars Pathfinder and was the first wheeled vehicle to operate on another planetary surface. The Mars Pathfinder sent back photographs, atmospheric measurements, and other important data that will contribute to taking a man to Mars.

While Pathfinder sent data, Sojourner examined rocks and soil samples with a camera and an Alpha Proton X-ray Spectrometer, providing useful data on chemical compositions and the radiation bounced back from rocks and dust. The mission finally ended when Pathfinder stopped responding to commands from NASA. NASA has sent two other probes to Mars, but both malfunctioned and were destroyed on impact with the Martian surface. The US and a few other countries have joined together and are constructing the International Space Station, or ISS. The ISS is scheduled to be completed in 2004 and will be continuously occupied by up to seven crewmembers.

The space station is envisioned to be a world-class research facility in which scientists can study Earth and space, explore the medical effects of long durations of weightlessness and the behavior of materials in a weightless environment, and test the practicality of space manufacturing techniques. Now, the future of space exploration depends on many factors, such as how much technology advances, how political forces change rivalries as well as partnerships with other nations, and how important space exploration is to the general public.

NASA is working on a single-stage-to-orbit (SSTO) vehicle, but until it is ready, NASA plans to use the space shuttle fleet through the year 2012. It is clear that mankind has devoted itself to the exploration of the unknown and that we are committed to finding new planets on which man can live and prosper. Missiles were first used to take man into space. In the Gemini and Mercury missions, missiles without warheads, fitted with a compartment for the astronauts, were used to get into space. The Saturn 5 rockets, on the other hand, were used in the Apollo missions to reach orbit and land on the moon.

This method became too expensive, so NASA was forced to develop a reusable spacecraft. As a result, NASA designed the space shuttle, which is used today. The shuttle takes off like a rocket, with an external fuel tank and two solid rocket boosters. The two rocket boosters, which are attached to the external fuel tank, provide additional thrust during liftoff and are discarded about two minutes after takeoff. They are then retrieved from the ocean to be repaired, refueled, and reused. The external fuel tank supplies fuel to the three onboard engines and is detached from the shuttle orbiter after it reaches orbit.

Another spacecraft, the X-33, is a single-stage orbiter and a smaller version of the VentureStar. Both of these craft will be launched like the shuttle but without any additional boosters. They will be able to land, get refueled and loaded with cargo, be positioned vertically, and take off again within hours, making travel to space faster, safer, and cheaper. The two craft will use an aerospike engine that allows them to reach orbit without going through different stages during liftoff. The X-33 and the VentureStar will allow companies to put satellites in orbit at a cheaper cost.

Space travel does take its toll on humans. Piloted space flights have to supply oxygen, food, and water for their occupants, and longer flights need a way to dispose of or recycle waste. For even longer flights, spacecraft will eventually need to become mostly self-sufficient. Because the astronauts will be weightless, they will have to exercise, and the shuttle will need to provide more than just the core physical needs for them to stay healthy. The weight of the craft is so important that it plays a crucial role in the amount of food a spacecraft can carry.

Most food provided to the astronauts is dehydrated to save space as well as weight; it is rehydrated by a device somewhat like a water gun. However, some foods are carried in their conventional form, such as fruits, candy, and bread. Water is usually provided by fuel cells that also provide electricity to the whole ship: the reaction between hydrogen and oxygen generates the electricity and creates water as well. A small amount of water is also carried onboard in case of emergencies. Water is also recycled on long-duration missions aboard space stations.

In this process, drinkable water is extracted from a combination of wastewater, urine, and moisture from the atmosphere in the cabin. This recycling system was used on the Mir space station and is planned to be used on the International Space Station. Skylab, the first US space station, was the first to offer its crew a chance to bathe in space, with a collapsible shower. To prevent water from escaping and floating around the cabin, the astronauts sealed the shower once inside. They used a hand-held nozzle to dispense water and then a vacuum to remove it.

On the space shuttle and on Mir, where the showers malfunctioned, astronauts and cosmonauts have had to take sponge baths in order to stay clean. On the International Space Station, showers will be provided in the habitation module. Most piloted spacecraft carry their oxygen as a liquid, in onboard tanks that keep it at cryogenic (super-cold) temperatures to save space on the craft. The Russian space station Mir used special onboard generators to separate water into oxygen and hydrogen. On the Mercury, Gemini, and Apollo missions, the cabin atmosphere consisted of one hundred percent oxygen.

This gave the cabin a pressure of 0.3 kg/sq cm (about 5 lbs/sq in). However, on the space shuttle and the Mir space station, the atmosphere is a mixture of oxygen and nitrogen at an atmospheric pressure of 1.01 kg/sq cm (14.5 lbs/sq in), slightly lower than that at sea level. Before going on space walks, astronauts and cosmonauts had to breathe pure oxygen to rid their bloodstreams of nitrogen. This eliminated the chance that the space walker would get decompression sickness because of the difference in pressure between the space suit and the cabin.

Since the suit had a pressure of 0.30 kg/sq cm, the sudden decompression would otherwise cause nitrogen bubbles to form in the bloodstream and other organs, a painful and potentially lethal condition. This atmospheric mixture is planned to be used on the International Space Station. A problem with microgravity and weightlessness is that long-duration missions in this type of environment weaken the muscles. In space, muscles do not have to exert much force to move, and in turn they get weaker.

This creates a problem for a mission to Mars: the long stay in a weightless environment will weaken the muscles, and when the craft lands on Mars, the crew will have a difficult time adjusting to the new environment and completing their jobs. To make this problem less severe, scientists are working on ways to create artificial gravity. This would make the environment in which the astronauts travel more like that of Earth, so that when the crew lands on Mars, they will have an easier time adjusting to Martian gravity.
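One commonly proposed way to create artificial gravity is to spin the craft, so that the centripetal acceleration a = omega^2 * r stands in for weight. A sketch of the required spin rate (the 50 m arm radius is a hypothetical value of mine, not from the text):

```python
import math

def spin_rpm_for_gravity(radius_m, g=9.81):
    """Rotations per minute giving centripetal acceleration g at radius_m."""
    omega = math.sqrt(g / radius_m)   # rad/s, from g = omega^2 * r
    return omega * 60 / (2 * math.pi)

print(f"{spin_rpm_for_gravity(50.0):.1f} rpm")  # ~4.2 rpm for a 50 m arm
```

A larger radius needs a slower spin, which matters because fast rotation rates tend to cause motion sickness.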

Space exploration has come a long way since the beginning. Mankind has gone to the moon and back, we have sent probes to the furthest reaches of our solar system, we have sent a robot to roam the Martian terrain, we have built spaceships that are reusable, and we can see other galaxies that are billions of light years away. Now we brainstorm about how to explore space even further. Mankind is destined to go to the far reaches of the universe and make contact with other life forms. All things considered, humans are not far from colonizing space.

The Twin Supergiants: Alpha and Beta Centauri

Hadar, also known as Beta Centauri, is the 10th brightest star (11th as viewed from Earth). Hadar is a blue-white supergiant in the constellation Centaurus (Cen). In about 4,000 years, the motion of Alpha Centauri, whose proper name is Rigel Kentaurus, will carry it close enough to Hadar that the two will appear to be a magnificent double star. Because of Hadar's great distance from Earth (approximately 90 parsecs), they will only be an optical double. As they sit today, the two stars look like a pair of eyes, the right one being Hadar and the left being Rigel Kentaurus.

These two stars are considered pointer stars: a "pointer star" is a star that points towards the Southern Cross. Some of the Australian aboriginals call this pair "the two men that once were lions"; other aboriginals consider them to be the twins that created the world. Hadar is a proper name of unknown meaning, and has been paired with the name "Wezen," the two applied to the two bright stars in Centaurus as well as to stars in Columba; "Wezen" is now commonly used for Delta Canis Majoris. Hadar, less often known as Agena (from the "knee" of the Centaur), is quite a magnificent star.

At a distance of 525 light years, blue class B (B1) Hadar is 130 times farther away than Rigel Kentaurus, and is bright because it is truly and very generously luminous, shining (accounting for the ultraviolet radiated from its 25,500-kelvin surface) 112,000 times more brightly than the Sun. Hadar, however, is not one star but two. Sophisticated observations that rely on the interference properties of light show that the single point of light actually consists of a pair of nearly identical stars, each some 55,000 times more luminous than the Sun, separated (from our perspective) by only 2 astronomical units.

The temperature and luminosity show each to contain 15 solar masses. Spectra suggest an orbital period of not quite a year, which together with the masses renders them an actual 3 astronomical units apart. Twin Hadar also has a fourth-magnitude sibling 1.3 seconds of arc away that, because of the brightness difference, is difficult to see and study. A class B dwarf, Hadar-B is a grand star in its own right, a star of 5 solar masses 1,500 times more luminous than the Sun; it only pales by comparison with Hadar (or the Hadars) proper.
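The 3-astronomical-unit figure can be recovered from Kepler's third law in solar units, a^3 = (M1 + M2) * P^2, taking the period as roughly one year and the combined mass as 30 solar masses. A sketch:

```python
def separation_au(total_mass_suns, period_years):
    """Semi-major axis (AU) from Kepler's third law in solar units."""
    return (total_mass_suns * period_years**2) ** (1 / 3)

# Two ~15-solar-mass stars orbiting each other in about a year:
print(f"{separation_au(30, 1.0):.1f} AU")  # ~3.1 AU
```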

Hadar-B orbits the close pair at a minimum distance of 210 astronomical units, taking at least 600 years to make the trip. Conjure a hypothetical planet orbiting Hadar-B. For us to survive, it would have to be as far from the star as Pluto is from the Sun. From there, the distant twins (each 12 solar diameters across) would appear as tiny disks two minutes of arc across separated by half a degree (the angular diameter of the Moon), each shining as much energy on the mythical planet as the Sun does upon us.

The twins of Hadar appear to be at the edge of shutting down their internal hydrogen fusion (if they have not done so already), and are beginning to evolve and die. Now some 12 million years old, they will quickly expand to become red supergiants and will surely affect each other quite profoundly. Within the next million or so years at least one may explode as a grand supernova. If it were to go off where it is today (which it will not), it would shine in our sky with nearly the brightness of the full Moon.

One of the twins is also a variable of the "Beta Cephei" type (which includes Mirzam), the star subtly chattering away with multiple periods of less than a day. So when are the Hadars going to die? That can be answered very simply: now. They are shutting down their internal hydrogen fusion and beginning to die even as we speak.

The Orion Nebula

The Orion Nebula contains one of the brightest star clusters in the night sky. With a magnitude of 4, this nebula is easily visible from the Northern Hemisphere during the winter months. It is surprising, therefore, that this region was not documented until 1610, by a French lawyer named Nicolas-Claude Fabri de Peiresc. On March 4, 1769, Charles Messier inducted the Orion Nebula, M42, into his list of stellar objects. Then, in 1771, Messier's list of objects received its first publication in the Memoires de l'Academie. [1] The Orion Nebula is one of the closest stellar regions to the Earth.

Using parallax measurements, it has been estimated that this nebula is only 1,500 light years away. In addition, the Orion Nebula is a relatively young star cluster, with an approximate age of less than one million years. It has even been speculated that some of the younger stars within the cluster are only 300,000 years old. The Orion Nebula is an emission nebula because of the O-type and B-type stars contained within it. These high-temperature stars emit ultraviolet (UV) light that ionizes the surrounding hydrogen atoms into protons (H+) and electrons (e-).
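The distance estimate rests on the parallax relation d(parsecs) = 1/p(arcseconds). A sketch of the conversion (the implied parallax is my back-calculation, not a measured value from the essay):

```python
LY_PER_PARSEC = 3.262  # light years in one parsec

def parallax_arcsec(distance_ly):
    """Parallax angle (arcseconds) implied by a distance in light years."""
    return 1.0 / (distance_ly / LY_PER_PARSEC)

# 1,500 light years is about 460 parsecs, so the parallax is tiny (~0.002"):
print(f"{parallax_arcsec(1500):.4f} arcsec")
```

An angle that small is why nebular distances were hard to pin down before precision astrometry.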

When the protons and electrons recombine, the electrons can enter a higher energy level (n=3). Then, when an electron drops from the n=3 level to the n=2 level, an H-alpha photon is emitted. [2] This photon has a wavelength of 6563 angstroms, and therefore corresponds to the red portion of the visible spectrum. It is these H-alpha photons which give the nebula the distinctive red color which we see. The extreme brightness of the O-type and B-type stars, coupled with the Earth's atmosphere, has always made high-resolution imaging of the star-forming region difficult.
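The 6563-angstrom figure follows from the Rydberg formula for hydrogen, 1/lambda = R * (1/n_low^2 - 1/n_high^2), with the n=3 to n=2 transition described above. A sketch:

```python
R_H = 1.0968e7  # Rydberg constant for hydrogen, m^-1

def line_wavelength_angstrom(n_low, n_high):
    """Wavelength of a hydrogen emission line from the Rydberg formula."""
    inv_lambda = R_H * (1 / n_low**2 - 1 / n_high**2)
    return 1e10 / inv_lambda  # convert meters to angstroms

# H-alpha: electron drops from n=3 to n=2
print(f"{line_wavelength_angstrom(2, 3):.0f} A")  # ~6565 A in vacuum; 6563 A in air
```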

But recent advances in adaptive optics and the repair of the Hubble Space Telescope have allowed for incredible detail in the center of the dust cloud. [3] The technological advances have also helped reveal several faint stars within the center of the nebula. The Orion Nebula is a spectacular sight. Consequently, it has been a preferred target of the Hubble Space Telescope (HST) over recent years. The HST has provided a great deal of insight into the complicated process of star formation.

In June of 1994, C. Robert O'Dell, a Rice University astronomer, discovered the presence of protoplanetary disks around some stars of the Orion Nebula. After surveying 110 M42 stars, O'Dell found that 56 of them had disks around them. It has been speculated that the disks identified in the Hubble survey are a missing link in the understanding of how planets, like those in our planetary system, form. [4] According to current theories, the dust contained within the protoplanetary disks eventually condenses to form planets.

Furthermore, the abundance of the protoplanetary disks reinforces the assumption that planetary systems are common throughout the universe. The suggestion that the Orion Nebula may eventually lead to planetary formation has become the basis for much discussion. More specifically, Doug Johnstone, an NSERC Post-Doctorate Fellow at the University of Toronto, developed an opposing perspective. At a meeting of the American Astronomical Society, on January 14, 1997, Johnstone suggested that the disks around young cluster stars may not survive long enough for planets to form within them.

Furthermore, he concluded that certain favorable conditions must exist in order to promote planetary formation, and that the hostile environment of the Orion Nebula may actually inhibit the creation of planets. With the present limited knowledge of nebulae, no conclusive evidence exists to support either argument. On April 9, 1998, Cornell University astrophysicist Martin Harwit published his discovery of the presence of massive amounts of water in the Orion Nebula. This was the first time that water has been found in a star-forming region.

The find demonstrates that water plays a vital role in star formation. In addition, this discovery implies that water is prevalent in space. Harwit speculates that the water acts as a coolant, carrying heat away from the condensing clouds. It is believed that this process is necessary to slow down the particles enough to allow their compression into new stars. [6] The discovery of water in the Orion Nebula will undoubtedly provide the basis for further study. More specifically, it will prompt scientists to search for water in other regions of space at different stages of star formation.

Then, if water is present in each, it may suggest that the oceans of Earth are older than even the planet that now contains them. [7] Several unresolved problems remain concerning the Orion Nebula. The fate of the protoplanetary disks, for example, is presently impossible to predict. Without a more detailed understanding of how planets actually form, it cannot be assumed that the events within the Orion Nebula are analogous to the events that led to the formation of the planets in the solar system.

Furthermore, the detection of water in the nebula has revealed the need to revise the theory of star formation to include water as a major component. Despite the fact that great progress is being made in terms of observational techniques and investigation, a great deal of information about the universe remains a mystery. Further analysis of the Orion Nebula, however, may help unravel some of the mysteries, including the origin of the solar system.

The Planet Mars

The planet Mars is an interesting and mysterious one. It is often referred to as the Red Planet; the rocks, soil, and sky all have a red hue on account of rust. Mars is the fourth planet from the Sun, at about 141 million miles (228 million kilometers), and the last terrestrial planet from the Sun. Mars follows closely behind Earth but is comparatively smaller, with about half the diameter of Earth (6,794 km) and about one-tenth of Earth's mass (6.419 x 10^23 kg). Thus the force of gravity on Mars is about one-third of that on Earth.
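The one-third figure can be checked with the surface-gravity formula g = GM/r^2, using the mass and diameter quoted above. A sketch:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    """Gravitational acceleration (m/s^2) at a body's surface."""
    return G * mass_kg / radius_m**2

g_mars = surface_gravity(6.419e23, 6.794e6 / 2)   # radius = half the diameter
g_earth = 9.81
print(f"{g_mars:.2f} m/s^2, {g_mars / g_earth:.2f} of Earth's")  # ~3.71, ~0.38
```

The exact ratio comes out closer to 0.38 than to one-third, because Mars is both lighter and smaller than Earth and the two effects pull in opposite directions.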

Mars is probably the planet we know the most about, since it is so close to Earth, though what we know now is not even close to everything about the planet. As time goes on, our knowledge of this mysterious planet will expand.

Atmosphere

The atmosphere of Mars is quite different from that of Earth. It is composed primarily of carbon dioxide with small amounts of other gases. The six most common components of the atmosphere are carbon dioxide at 95.32%; nitrogen at 2.7%; argon at 1.6%; oxygen at 0.13%; water at 0.03%; and neon at 0.00025%.

Martian air contains only about 1/1,000 as much water as our air, but even this small amount can condense out, forming clouds that rise high in the atmosphere or swirl around the slopes of towering volcanoes. Local patches of early morning fog can form in valleys. At the Viking Lander 2 site, a thin layer of water frost covered the ground each winter. There is evidence that in the past a denser Martian atmosphere may have allowed water to flow on the planet. Physical features closely resembling shorelines, gorges, riverbeds and islands suggest that great rivers once marked the planet.

Temperature

Mars is smaller and, because of its greater distance from the Sun, cooler than the Earth. It has seasons similar to Earth's because the tilt of its rotational axis to the plane of its orbit about the Sun is about the same as Earth's. Interestingly, unlike on Earth, the significantly elliptical shape of the Martian orbit means that the seasons on Mars are also affected by the varying distance from the Sun. In the case of Earth, because of its almost circular orbit, our seasons result simply from the tilt of the Earth's rotational axis.

The average recorded temperature on Mars is -81 F (-63 C), with a maximum temperature of 68 F (20 C) and a minimum of -220 F (-140 C). Barometric pressure varies at each landing site on a semiannual basis. Carbon dioxide, the major component of the atmosphere, freezes out to form an immense polar cap, alternately at each pole. The carbon dioxide forms a great cover of snow and then evaporates again with the coming of spring in each hemisphere.

The Interior

The current understanding of the interior of Mars suggests that it has a thin crust, similar to Earth's, a mantle, and a core.

Using four criteria, the Martian core size and mass can be determined. However, only three of the four are known: the total mass, the size of Mars, and the moment of inertia. Mass and size were determined accurately from early missions. The moment of inertia was determined from Viking lander and Pathfinder Doppler data. The fourth parameter, needed to complete the interior model, will be obtained from future spacecraft missions. With the three known parameters, the model is significantly constrained.

If the Martian core were composed of iron similar to Earth's, or to meteorites thought to originate from Mars, then the minimum core radius would be about 1,300 kilometers. If the core were made of less-dense material, such as a mixture of sulfur and iron, the maximum radius would probably be less than 2,000 kilometers.

The Surface

Although it is much smaller, Mars has about the same surface land area as Earth. Other than Earth, Mars possesses the most highly varied and interesting known terrain in our solar system.

The surface of Mars is a very hostile place; however, it is more like Earth's surface than that of any other planet in our solar system. Much of the Martian surface is rough and full of craters, but expansive flat plains and smooth hills can also be found. Unlike on any other planet, there is a striking difference between the northern and southern hemispheres of Mars; one is extremely rough and old while the other is young and relatively smooth. The southern hemisphere is scattered with ancient craters of all sizes and is also elevated by several kilometers, which creates a visible boundary.

On the opposite end, the northern hemisphere consists of a wider variety of geological features, but is noticeably smoother and much younger. There are large volcanoes, a great rift valley, and a variety of channels.

Volcanoes

Volcanism is a geological process that occurs on Earth today, and has occurred on many planetary bodies throughout the history of the solar system. No volcanism is occurring on the surface of Mars today. In the past, however, volcanism was one of the main forces creating and reshaping the surface of the planet.

All of the rocks that have been observed by the Viking landers and the Mars Pathfinder Rover are generally agreed to be volcanic in origin. Tharsis is the largest volcanic region on Mars. It is approximately four thousand kilometers across, ten kilometers high, and contains twelve large volcanoes. The largest volcanoes in the Tharsis region are four shield volcanoes named Ascraeus Mons, Pavonis Mons, Arsia Mons, and Olympus Mons. The Tharsis Montes (Ascraeus, Pavonis, and Arsia) are located on the crest of the crustal bulge and their summits are about the same elevation as the summit of Olympus Mons, the largest of the Tharsis volcanoes.

While not the largest of the Tharsis volcanoes, Arsia Mons has the largest caldera on Mars, with a diameter of one hundred twenty kilometers! The largest of the volcanoes in the Tharsis region, and indeed of all known volcanoes in the solar system, is Olympus Mons. Olympus Mons is a shield volcano 624 km in diameter and 25 km high. A caldera 80 km wide is located at its summit. For comparison, the largest volcano on Earth is Mauna Loa, a shield volcano 10 km high and 120 km across.

The volume of Olympus Mons is about one hundred times larger than that of Mauna Loa. In fact, the entire chain of Hawaiian Islands would fit inside Olympus Mons! The main difference between the volcanoes on Mars and Earth is their size; volcanoes in the Tharsis region of Mars are ten to one hundred times larger than those anywhere on Earth. The lava flows on the Martian surface are observed to be much longer, probably a result of higher eruption rates and lower surface gravity. The less the gravitational pull, the higher volcanoes can grow without collapsing under their own weight.

Valleys

Valles Marineris, or Mariner Valley, is a vast canyon system that runs along the Martian equator just east of the Tharsis region. Valles Marineris is 4,000 km long and reaches depths of up to 7 km! For comparison, the Grand Canyon in Arizona is about 800 km long and 1.6 km deep. In fact, Valles Marineris is as long as the United States is wide, and it spans about 20 percent of the entire distance around Mars! The canyon extends from the Noctis Labyrinthus region in the west to the chaotic terrain in the east.

Most researchers agree that Valles Marineris is a large tectonic crack in the Martian crust, forming as the planet cooled, affected by the rising crust in the Tharsis region to the west, and then widened by erosional forces. However, near the eastern flanks of the rift there appear to be some channels that may have been formed by water. The Tharsis bulge has a profound effect on the appearance, weather, and climate of Mars. Its enormous mass may have dramatically changed the climate by changing the rotation of Mars.

Moons

Mars has two small moons: Phobos and Deimos.

They were named after the sons of the Greek war god Ares, the counterpart of the Roman war god Mars. American astronomer Asaph Hall discovered both moons in 1877. The moons appear to have surface materials similar to many asteroids in the outer asteroid belt, which leads most scientists to believe that Phobos and Deimos are captured asteroids. Phobos's mean distance from Mars is 9,377 km and Deimos's is 23,436 km. The mass of Phobos is 10.8 × 10^15 kg and the mass of Deimos is 1.8 × 10^15 kg, which is quite small. This also supports the idea that they are asteroids pulled into orbit around Mars.

Extraterrestrial Life?

Mars has been the subject of much discussion lately, mostly because of the bacteria-like material found in a piece of a meteorite from Mars in 1996. Before space exploration, Mars was considered the best candidate for harboring extraterrestrial life. Astronomers thought they saw straight lines crisscrossing its surface. This led to the popular belief that irrigation canals on the planet had been constructed by intelligent beings. Another reason for scientists to expect life on Mars had to do with the apparent seasonal color changes on the planet’s surface.

This phenomenon led to speculation that conditions might support a bloom of Martian vegetation during the warmer months and cause plant life to become dormant during colder periods. In July of 1965, Mariner 4 transmitted 22 close-up pictures of Mars. All that was revealed by these pictures was a surface containing many craters and naturally occurring channels, but no evidence of artificial canals or flowing water. Finally, in July and September 1976, Viking Landers 1 and 2 touched down on the surface of Mars.

The three biology experiments aboard the landers discovered unexpected chemical activity in the Martian soil, but provided no clear evidence for the presence of living microorganisms in the soil near the landing sites. According to mission biologists, Mars is self-sterilizing. They believe the combination of solar ultraviolet radiation that saturates the surface, the extreme dryness of the soil, and the oxidizing nature of the soil chemistry prevent the formation of living organisms in the Martian soil. The question of life on Mars at some time in the distant past remains open.

Mars is a planet full of mysteries just waiting to be discovered. It could perhaps hold the answer to questions we have been asking ourselves for years, such as the origin of life on Earth. Is it possible that in the past there was water running on Mars, and when the end came, the beings there moved to Earth? The answer is yes, for when we are dealing with space, anything is conceivable. We must keep our minds open to anything, for as we continue to search the space around us, we will continue to make new discoveries. The best way to say this is with a quote from Star Trek: “Space: the final frontier.”

Disasters in Space Flight

On January 27, 1967, the three astronauts of Apollo 1 were doing a test countdown on the launch pad. Gus Grissom was in charge. His crew members were Edward H. White, the first American to walk in space, and Roger B. Chaffee, a naval officer going up for the first time. 182 feet below, RCA technician Gary Propst was seated in front of a bank of television monitors, listening to the crew radio channel and watching the various televisions for important activity. Inside Apollo 1 there was a metal door with a sharp edge.

Each time the door was opened and shut, it scraped against an environmental control unit wire. The repeated abrasion had exposed two tiny sections of wire. A spark alone would not cause a fire, but just below the cuts in the cable was a length of aluminum tubing, which took a ninety-degree turn. There were hundreds of these turns in the whole capsule. The aluminum tubing carried a glycol cooling fluid, which is not flammable, but when exposed to air it turns to flammable fumes. The capsule was filled with pure oxygen in an effort to allow the astronauts to work more efficiently.

Pure oxygen also turns normally nonflammable items into highly flammable ones. Raschel netting that was highly flammable in the pure oxygen environment was near the exposed section of the wires. At 6:31:04 p.m. the Raschel netting burst into open flame. A second after the netting burst into flames, the first message came over the crew’s radio channel: “Fire,” Grissom said. Two seconds later, Chaffee said clearly, “We’ve got a fire in the cockpit.” His tone was businesslike (Murray 191).

There was no camera in the cabin, but a remote-control camera, if zoomed in on the porthole, could provide a partial, shadowy view of the interior of the spacecraft. There was a lot of motion, Propst explained, as White seemed to fumble with something and then quickly pull his arms back, then reach out again. Another pair of arms came into view from the left, Grissom’s, as the flames spread from the far left-hand corner of the spacecraft toward the porthole (Murray 192). The crew struggled for about 30 seconds after their suits failed, and then died of asphyxiation, not the heat.

To get out of the capsule the astronauts had to remove three separate hatches, which required at least 90 seconds. The Saturn IB rocket contained no fuel, so the chance of fire was never seriously thought of, and there were no fire crews or doctors standing by. Many people were listening to the crew’s radio channel and would have responded, but they were caught off guard, and the first mention of fire was not clearly heard by anyone. On January 28, 1986, the space shuttle Challenger was ready to launch. The lead-up to the launch had not been without its share of problems. Talk of cold weather, icicles, and brittle and faulty O-rings marked the main problems.

It was revealed that deep doubts of some engineers had not been passed on by their superiors to the shuttle director, Mr. Moore. Something was unusual about that morning in Florida: it was uncommonly cold. The night before, the temperature had dropped to twenty-two degrees Fahrenheit. Icicles hung from the launch pad; it was said that the icicles could have broken off and damaged the space shuttle’s heat tiles. It had been the coldest day on which a shuttle launch had ever been attempted. Cold weather had made the rubber O-ring seals so brittle that they no longer sealed the joint properly.

People feared a reduction in the efficiency of the O-ring seals on the solid rocket boosters. Level 1 authorities at NASA had received enough information about faulty O-rings by August 1985 that they should have ordered a discontinuation of flights. The shuttle rocketed away from the icicle-laden launch pad, carrying a New Hampshire schoolteacher, NASA’s first citizen in space. It was the worst accident in NASA’s nearly 25-year history. At 11:38 a.m. Cape time, main engine ignition was followed by clouds of smoke and flame from the solid-fuel rocket boosters.

Unknown to anyone in the cabin or on the ground, there was a jet of flame coming from the right-hand booster rocket, playing around the giant orange fuel tank. Seventy-three seconds after lift-off the Challenger suddenly disappeared amid a cataclysmic explosion which ripped the fuel tank from nose to tail (Timothy 441). The explosion occurred as Challenger was 10.35 miles high and 8.05 miles downrange from the cape, speeding toward space at 1,977 mph. Lost along with the $1.2 billion spacecraft was a $100 million satellite that was to have become an important part of NASA’s communications network (Associated Press 217).

Pictures taken revealed that even after the enormous explosion occurred, the cockpit remained somewhat intact. The aerodynamic pressure exerted on the human passengers would have killed anyone who survived the explosion. The remains of the shuttle were spread over miles of ocean. In comparison, both disasters were preventable. Both disasters had a main explosion or malfunction, but even if there had been survivors they would have died because there was no escape. The Challenger disaster was largely the result of many people wanting better jobs and more money, or simply wanting to get on the good side of someone.

Apollo 1 had many problems which should have been caught. It had many deficiencies: loose, shoddy wiring; excessive use of combustible materials in spite of a 100 percent oxygen atmosphere; inadequate provisions for rescue; and a three-layer hatch that took more than ninety seconds to open. The Challenger had faulty O-rings, icicles, and bad management, which threatened to bring the entire American astronaut program to an end. Both disasters, which together cost over a billion dollars, could have been prevented if the time, effort, and funding had been spent.

Stars And Galaxies

Stars and galaxies began to form about one billion years following the Big Bang, and since then the universe has simply continued to grow larger and cooler, creating conditions conducive to life. Three excellent reasons exist for believing in the big-bang theory. First, and most obvious, the universe is expanding. Second, the theory predicts that 25 percent of the total mass of the universe should be the helium that formed during the first few minutes, an amount that agrees with observations.

Finally, and most convincing, is the presence of the cosmic background radiation. The big-bang theory predicted this remnant radiation, which now glows at a temperature just 3 degrees above absolute zero, well before radio astronomers chanced upon it. Friedmann made two simple assumptions about the universe: that when viewed at large enough scales, it appears the same both in every direction and from every location. From these assumptions (called the cosmological principle) and Einstein’s equations, he developed the first model of a universe in motion.

The Friedmann universe begins with a Big Bang and continues expanding for untold billions of years; that’s the stage we’re in now. But after a long enough period of time, the mutual gravitational attraction of all the matter slows the expansion to a stop. The universe then starts to fall in on itself, replaying the expansion in reverse. Eventually all the matter collapses back into a singularity, in what physicist John Wheeler likes to call the Big Crunch.

Gravitational attraction is a fundamental property of matter that exists throughout the known universe. Physicists identify gravity as one of the four types of forces in the universe. The others are the strong and weak nuclear forces and the electromagnetic force. More than 300 years ago, the great English scientist Sir Isaac Newton published the important generalization that mathematically describes this universal force of gravity. Newton was the first to realize that gravity extends well beyond the boundaries of Earth.

Newton’s realization was based on the first of three laws he had formulated to describe the motion of objects. Part of Newton’s first law, the Law of Inertia, states that objects in motion travel in a straight line at a constant velocity unless they are acted upon by a net force. According to this law, the planets in space should travel in straight lines. However, as early as the time of Aristotle, the planets were known to travel on curved paths. Newton reasoned that the circular motions of the planets are the result of a net force acting upon each of them.

That force, he concluded, is the same force that causes an apple to fall to the ground–gravity. Newton’s experimental research into the force of gravity resulted in his elegant mathematical statement that is known today as the Law of Universal Gravitation. According to Newton, every mass in the universe attracts every other mass. The attractive force between any two objects is directly proportional to the product of the two masses being measured and inversely proportional to the square of the distance separating them.

If we let F represent this force, r the distance between the centers of the masses, and m1 and m2 the magnitudes of the two masses, the relationship stated can be written symbolically as F ∝ (m1 × m2) / r^2. (The symbol ∝ is defined mathematically to mean “is proportional to.”) From this relationship, we can see that the greater the masses of the attracting objects, the greater the force of attraction between them. We can also see that the farther apart the objects are from each other, the less the attraction. It is important to note the inverse square relationship with respect to distance.

In other words, if the distance between the objects is doubled, the attraction between them is diminished by a factor of four, and if the distance is tripled, the attraction is only one-ninth as much. Newton’s Law of Universal Gravitation was later quantified by eighteenth-century English physicist Henry Cavendish, who actually measured the gravitational force between two one-kilogram masses separated by a distance of one meter. This attraction was an extremely weak force, but its determination permitted the proportional relationship of Newton’s law to be converted into an equation.
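The relationship above is easy to check numerically. This is a minimal sketch: the value of G is the modern measured constant, which I am supplying rather than taking from the text:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2 (modern measured value)

def gravitational_force(m1_kg, m2_kg, r_m):
    """Newton's Law of Universal Gravitation: F = G * m1 * m2 / r^2."""
    return G * m1_kg * m2_kg / r_m**2

# The Cavendish-style configuration described above: two 1 kg masses, 1 m apart.
f_base = gravitational_force(1.0, 1.0, 1.0)
print(f"Force: {f_base:.3e} N")  # an extremely weak force, on the order of 1e-10 N

# The inverse-square relationship: doubling the distance cuts the force to
# one-fourth, and tripling it cuts the force to one-ninth.
print(f_base / gravitational_force(1.0, 1.0, 2.0))  # 4.0
print(f_base / gravitational_force(1.0, 1.0, 3.0))  # ~9.0
```

The ratios are independent of G and the masses, which is exactly what "inversely proportional to the square of the distance" means.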

The American space shuttle, Challenger

The five men and two women – including the first civilian in space – were just over a minute into their flight from Cape Canaveral in Florida when the Challenger blew up. The astronauts’ families, at the airbase, and millions of Americans witnessed the world’s worst space disaster live on TV. The danger from falling debris prevented rescue boats from reaching the scene for more than an hour. In 25 years of space exploration seven people had died; today that total has been doubled. President Ronald Reagan has described the tragedy as “a national loss”.

The Challenger’s flight, the 25th by a shuttle, had already been delayed because of bad weather. High winds and then icicles caused the launch to be postponed from 22 January. But NASA officials insist safety remains their top priority and that there was no pressure to launch the shuttle today. The shuttle crew was led by Commander Dick Scobee, 46. Schoolteacher Christa McAuliffe, 37, married with two children, was to be the first civilian in space, picked from among 10,000 entries in a competition.

Speaking before the launch, she said: “One of the things I hope to bring back into the classroom is to make that connection with the students that they too are part of history, the space programme belongs to them and to try to bring them up with the space age.” President Reagan has put off his State of the Union address. He was meeting senior aides in the Oval Office when he learned of the disaster. He has called for an immediate inquiry into the disaster, but he said the space programme would go on, in honour of the dead astronauts.

Vice-President George Bush has been sent to Cape Canaveral to visit the victims’ families. This evening, the president went on national television to pay tribute to the courage and bravery of the seven astronauts. He said: “We will never forget them, nor the last time we saw them this morning as they prepared for their journey and waved goodbye and slipped the surly bonds of earth to touch the face of God.”

Pluto: A Planet

Many issues have arisen from the debate whether or not Pluto is a planet. Some astronomers say that Pluto should be classified as a “minor planet” due to its size, physical characteristics, and other factors. On the other hand, some astronomers defend Pluto’s planet status, citing several key features. Indeed, most of the problem is that there is no formal definition of a planet. Furthermore, it is very difficult to invent one that would allow the solar system to contain all nine planets. I suggest that for an object to be classified as a planet, it must embody three characteristics.

It must be in orbit around a star (thus removing the larger satellites from contention), it must be too small to generate heat by nuclear fusion (so dwarf stars are excluded) and it must be massive enough to have collapsed to a more or less spherical shape (which excludes comets, and most of the asteroids). These criteria would admit a few of the larger asteroids and probably some of the Kuiper belt objects as well, but adding a requirement for a planet to have a minimum diameter of 1,000 km would remove the larger asteroids from contention while retaining Pluto.
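The criteria above are mechanical enough to state as code. The sketch below is a toy encoding of them, not a formal definition; the helper name and the example diameters for Ceres and Titan are my own additions, not figures from the text:

```python
def is_planet(orbits_star, fuses, spherical, diameter_km):
    """Classify a body by the three proposed criteria plus the 1,000 km cutoff."""
    return (orbits_star               # excludes the larger satellites
            and not fuses             # excludes dwarf stars
            and spherical             # excludes comets and most asteroids
            and diameter_km >= 1000)  # excludes the larger asteroids

# Pluto: orbits the Sun, no fusion, spherical, roughly 2,300 km across.
print(is_planet(True, False, True, 2300))   # True
# Ceres, the largest asteroid (~950 km across), just misses the size cutoff.
print(is_planet(True, False, True, 950))    # False
# Titan: spherical and big, but it orbits Saturn rather than a star.
print(is_planet(False, False, True, 5150))  # False
```

Note how the 1,000 km cutoff does the work of separating Pluto from the large asteroids while leaving the first three criteria purely physical.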

Below are some brief reasons why Pluto may not be considered a planet, along with my rebuttals. Pluto is small compared to the other planets. Pluto is about half the size of the next smallest planet, Mercury. However, there is no scientific reason whatsoever to pick the size of Mercury as being the size of the smallest object to be called a planet. Mercury itself is less than half the size of Mars, and Mars is only about half the size of Earth or Venus. Earth and Venus are only about one-seventh the size of Jupiter. Why not pick one-tenth the size of Jupiter as the size of the smallest planet, if the cutoff is going to be chosen arbitrarily?

In that case, Mars, Mercury and Pluto would all have to be classified as asteroids. If the size-cutoff between asteroids and planets is going to be randomly chosen, the cutoff value should be agreed upon in open debate among interested scientists. Pluto is smaller than 7 moons in the solar system. Pluto is smaller than Earth’s Moon, Jupiter’s moons Io, Europa, Ganymede, and Callisto, Saturn’s moon Titan, and Neptune’s moon Triton. On the other hand, Pluto is larger than the other 40 known moons in the solar system.

There is no scientific reason to arbitrarily distinguish between planets and asteroids based on the sizes of the moons that happen to be present in a planetary system. The only limit on the size of the moons of a planet is that they must be smaller than the planet. Thus, it is coincidence that Jupiter’s and Saturn’s large moons are as small as they are: if Jupiter happened to have a moon one-fourth of its own size (as Earth does), that moon would be larger than Earth, Venus, Mars, Mercury and Pluto, and all of these “planets” would have to be classified as asteroids.

If Jupiter happened to have a moon half its own size (as Pluto does), that moon would be larger than all of the other planets except Saturn, and we would have a two-planet solar system with seven very large asteroids. The problems with this classification criterion are that they are arbitrary and non-general. Pluto is unlike the other planets in that it has an icy surface instead of a rocky surface, like the inner 4 (terrestrial) planets, or a deep atmosphere, like the next 4 (gas giant) planets.

Pluto has a crust believed to be composed mostly of water ice, with a relatively thin layer of nitrogen ice mixed with small fractions of methane and carbon monoxide. However, there is no particular scientific reason why this should exclude Pluto from being classified as a planet. It is just as reasonable to claim that all planets must have rocky surfaces, like the terrestrial planets: then Jupiter, Saturn, Uranus and Neptune would have to be classified as something other than “planets” (perhaps they would be minor planets?).

Alternatively, it could be declared that all planets must have thick, gaseous envelopes like the giant planets, in which case the four inner planets, including Earth, would have to be classified as non-planets. Why shouldn’t there be three “kinds” of planets: terrestrial, giant, and icy? In a planetary system that formed from a more massive cloud of gas and dust, it is highly likely that a number of larger bodies may have formed far from the primary star, and in such systems having icy planets would make perfect sense.

Pluto was discovered by Lowell Observatory astronomers searching for what was then known as “Planet X”, yet Pluto is far too small to be Planet X. Its planethood was, and still is, primarily due to a PR campaign launched by the Observatory at the time of discovery (1930), rather than to Pluto’s properties. Pluto was discovered as the result of an intensive, groundbreaking telescopic survey initiated by Percival Lowell and carried out in large part by Clyde Tombaugh.

The survey was initiated based on the entirely erroneous observation of disturbances to the orbit of Neptune (Neptune was thought to be deviating slightly from the position where it was predicted to be at a given time in its orbit. Disturbances to the orbit of Uranus, which were caused by the gravitational pull of an as yet unknown planet beyond Uranus, led to the discovery of that unknown, which we now know as Neptune, in 1846. ) The supposed disturbances in Neptune’s orbit were attributed to the gravitational tug of an unknown planet beyond Neptune, dubbed “Planet X”.

Lowell predicted the position of Planet X based on the erroneous disturbances to Neptune’s orbit, and the portion of the sky covered by the telescopic survey was influenced by these calculations. Because the disturbances were erroneous, and Planet X therefore turned out to be nonexistent, the discovery of Pluto as a result of a search for Planet X must now be considered completely fortuitous. However, at the time of Tombaugh’s discovery of Pluto, astronomers had every reason to believe that Pluto was indeed Planet X: it wasn’t known until much later that the disturbances in Neptune’s orbit did not exist.

Pluto was discovered relatively near the predicted position; and the size of Pluto, calculated based on its observed brightness and a reasonable assumption for the reflectivity of its surface, was quite large. Thus, at the time of discovery, it was natural to think that Pluto was indeed a planet. Pluto is more like Kuiper Belt Objects, or comets, than it is like the other planets. Kuiper Belt Objects (KBOs) are small bodies that orbit the Sun beyond Neptune’s orbit. Approximately 60 have been discovered so far.

Our state of knowledge concerning KBOs is completely analogous to our state of knowledge about Pluto shortly after it was discovered. We know where a tiny sampling of them are, we know how bright those particular ones are, we have some information about what color a few of the brightest ones are, we know something about why a fraction of the known ones have the orbits they do, and that is it. We do not know how big KBOs are, what they are composed of, how frequently they collide, how large the largest one is, or how many of them there are overall, or as a function of size.

That is a lot of things we do not know. On the other hand, we know a lot about Pluto: how big it is, how reflective it is, what its surface is composed of, how thick its atmosphere is, how big its moon is, where it came from, and why it has the orbit it does. Also, it is much bigger, with a now well-established diameter of 2,300 km and a mass 1/500 that of Earth. Pluto is small in planetary terms, but still several times bigger than its nearest rival in the Kuiper Belt. Furthermore, Pluto has a satellite, and at the time no other trans-Neptunian object was known to have one.

This is no longer a clinching argument, as some main-belt asteroids such as 243 Ida have tiny satellites, and a binary Kuiper Belt object was discovered in 2000. However, Pluto’s satellite Charon is large. In fact, Charon is so large compared with Pluto that many astronomers refer to the system as the “Pluto-Charon binary,” regarding it as a sort of double planet rather than an ordinary planet plus a moon. Finally, Pluto has an atmosphere, albeit a thin one that will freeze out onto the surface sometime in the early decades of the 21st century as Pluto recedes from the Sun along its orbit.

Mining in Space

On December 10, 1986 the Greater New York Section of the American Institute of Aeronautics and Astronautics (AIAA) and the engineering section of the New York Academy of Sciences jointly presented a program on mining the planets. The speakers were Greg Maryniak of the Space Studies Institute (SSI) and Dr. Carl Peterson of the Mining and Excavation Research Institute at M.I.T. Maryniak spoke first and began by commenting that the quintessential predicament of space flight is that everything launched from Earth must be accelerated to orbital velocity.

Related to this is the fact that the traditional way to create things in space has been to manufacture them on Earth and then launch them into orbit aboard large rockets. The difficulty with this approach is the huge cost per pound of boosting anything out of this planet’s gravity well. Furthermore, Maryniak noted, since (at least in the near to medium term) the space program must depend upon the government for most of its funding, this economic drawback necessarily translates into a political problem.

Maryniak continued by noting that the early settlers in North America did not attempt to transport across the Atlantic everything they needed to sustain them in the New World. Rather, they brought their tools with them and constructed their habitats from local materials. Hence, he suggested that the solution to the dilemma to which he referred requires not so much a shift in technology as a shift in thinking. Space, he argued, should be considered not as a vacuum, totally devoid of everything.

Rather, it should be regarded as an ocean, that is, a hostile environment but one having resources. Among the resources of space, he suggested, are readily available solar power and potential surface mines on the Moon and later other celestial bodies as well. The Moon, Maryniak stated, contains many useful materials. Moreover, it is twenty-two times easier to accelerate a payload to lunar escape velocity than it is to accelerate the identical mass out of Earth’s gravity well.
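That twenty-two-fold figure can be checked with a back-of-the-envelope calculation: the kinetic energy per kilogram needed to reach escape velocity is v²/2, so the ratio of the squared escape velocities gives the energy ratio. The escape velocities below are standard textbook values that I am supplying, not numbers from the talk:

```python
# Kinetic energy per kilogram required to reach escape velocity: E = v**2 / 2.
v_earth = 11.2e3  # Earth escape velocity, m/s (standard value)
v_moon = 2.38e3   # lunar escape velocity, m/s (standard value)

ratio = (v_earth**2 / 2) / (v_moon**2 / 2)
print(f"Earth requires ~{ratio:.0f}x the escape energy of the Moon")  # ~22x
```

The per-kilogram masses cancel, so the ratio depends only on the two escape velocities, and it comes out almost exactly at the factor of twenty-two quoted above.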

As a practical matter the advantage in terms of the energy required is even greater because of the absence of a lunar atmosphere. Among other things this permits the use of devices such as electromagnetic accelerators (mass drivers) to launch payloads from the Moon’s surface. Even raw Lunar soil is useful as shielding for space stations and other space habitats. At present, he noted, exposure to radiation will prevent anyone from spending a total of more than six months out of his or her entire lifetime on the space station.

At the other end of the scale, Lunar soil can be processed into its constituent materials. In-between steps are also of great interest. For example, the Moon’s soil is rich in oxygen, which makes up most of the mass of water and of rocket propellant. This oxygen could be “cooked” out of the Lunar soil. Since most of the mass of the equipment which would be necessary to accomplish this would consist of relatively low-technology hardware, Maryniak suggested the possibility that, at least in the longer term, the extraction plant itself could be manufactured largely on the Moon.

Another possibility currently being examined is the manufacture of glass from Lunar soil and its use as a construction material. The techniques involved, according to Maryniak, are crude but effective. (In answer to a question posed by a member of the audience after the formal presentation, Maryniak stated that he believed the brittle properties of glass could be overcome by using glass-glass composites. He also suggested yet another possibility, that of using Lunar soil as a basis for concrete.) One possible application of such Moon-made glass would be in glass-glass composite beams. Among other things, these could be employed as structural elements in a solar power satellite (SPS). While interest in the SPS has waned in this country, at least temporarily, it is a major focus of attention in the U.S.S.R., Western Europe and Japan. In particular, the Soviets have stated that they will build an SPS by the year 2000 (although they plan on using Earth-launched materials).

Similarly the Japanese are conducting SPS related sounding rocket tests. SSI studies have suggested that more than 90%, and perhaps as much as 99% of the mass of an SPS can be constructed out of Lunar materials. According to Maryniak, a fair amount of work has already been performed on the layout of Lunar mines and how to separate materials on the Moon. Different techniques from those employed on Earth must be used because of the absence of water on the Moon.

On the other hand, Lunar materials processing can involve the use of self-replicating factories. Such a procedure may be able to produce a so-called “mass payback ratio” of 500 to 1. That is, the mass of the manufactories which can be established by this method will equal 500 times the mass of the original “seed” plant emplaced on the Moon. Maryniak also discussed the mining of asteroids using mass-driver engines, a technique which SSI has long advocated.

Essentially this would entail a spacecraft capturing either a sizable fragment of a large asteroid or preferably an entire small asteroid. The spacecraft would be equipped with machinery to extract minerals and other useful materials from the asteroidal mass. The slag or other waste products generated in this process would be reduced to finely pulverized form and accelerated by a mass driver in order to propel the captured asteroid into an orbit around Earth.

If the Earth has so-called Trojan asteroids, as does Jupiter, the energy required to bring materials from them to low Earth orbit (LEO) would be only 1% as great as that required to launch the same amount of mass from Earth. (Once again, moreover, the fact that more economical means of propulsion can be used for orbital transfers than for accelerating material to orbital velocity would likely make the practical advantages even greater. ) However, Maryniak noted that observations already performed have ruled out any Earth-Trojan bodies larger than one mile in diameter.

In addition to the previously mentioned SPS, another possible use for materials mined from planets would be in the construction of space colonies. In this connection Maryniak noted that a so-called biosphere was presently being constructed outside of Tucson, Arizona. When it is completed eight people will inhabit it for two years entirely sealed off from the outside world. One of the objectives of this experiment will be to prove the concept of long-duration closed cycle life support systems.

As the foregoing illustrates, Maryniak’s primary focus was upon mining the planets as a source of materials for use in space. Dr. Peterson’s principal interest, on the other hand, was the potential application of techniques and equipment developed for use on the Moon and the asteroids to the mining industry here on Earth. Dr. Peterson began his presentation by noting that the U.S. mining industry is in very poor condition. In particular, it has been criticized for using what has been described as “neanderthal technology.” Dr. Peterson clearly implied that such criticism is justified, noting that sooner or later the philosophy of not doing what you can’t make money on today will come back to haunt people. A possible solution to this problem, Dr. Peterson suggested, is a marriage between mining and aerospace. (As an aside, Dr. Peterson’s admonition would appear to be as applicable to the space program as it is to the mining industry, and especially to the reluctance of both the government and the private sector to fund long-lead-time space projects.

The current problems NASA is having getting funding for the space station approved by Congress, and the failure to begin now to implement the recommendations of the National Commission on Space, particularly come to mind.) Part of the mining industry’s difficulty, according to Dr. Peterson, is that it represents a rather small market. This tends to discourage long-range research. The result is to produce, on the one hand, brilliant solutions to individual, immediate problems, but on the other hand overall systems of incredible complexity.

This complexity, which according to Dr. Peterson has now reached intolerable levels, results from the fact that mining machinery evolves one step at a time and thus is subject to the restriction that each new subsystem has to be compatible with all of the other parts of the system that have not changed. Using slides to illustrate his point, Dr. Peterson noted that so-called "continuous" coal mining machines can in fact operate only 50% of the time. The machine must stop when the shuttle car, which removes the coal, is full.

The shuttle cars, moreover, have to stay out of each other's way. Furthermore, not only are Earthbound mining machines too heavy to take into space, they are rapidly becoming too heavy to take into mines on Earth. When humanity begins to colonize the Moon, Dr. Peterson asserted, it will eventually prove necessary to go below the surface for the construction of habitats, even if the extraction of Lunar materials can be restricted to surface mining operations. As a result, the same problems currently plaguing Earthbound mining will be encountered.

This is where Earth and Moon mining can converge. Since Moon mining will start from square one, Dr. Peterson implied, systems can be designed as a whole rather than piecemeal. By the same token, for the reasons mentioned there is a need in the case of Earthbound mining machinery to back up and look at systems as a whole. What is required, therefore, is a research program aimed at developing technology that will be useful on the Moon but, pending the development of Lunar mining operations, can also be used down here on Earth.

In particular, the mining industry on Earth is inhibited by overly complex equipment unsuited to today's opportunities in remote control and automation. It needs machines simple enough to take advantage of tele-operation and automation. The same needs exist with respect to the Moon. Therefore the mining institute hopes to raise enough funds for sustained research in mining techniques useful both on Earth and on other celestial bodies. In this last connection, Dr. Peterson noted that the mining industry is subject to the same problem as the aerospace industry: Congress is reluctant to fund long-range research. In addition, the mining industry has a problem of its own in that, because individual companies are highly competitive, research results are generally not shared. Dr. Peterson acknowledged, however, that there are differences between mining on Earth and mining on other planetary bodies.

The most important is the one already mentioned: heavy equipment cannot be used in space. This will mean additional problems for space miners. Unlike the vacuum of space, rock does not provide a predictable environment. Furthermore, the constraint in mining is not energy requirements, but force requirements. Rock requires heavy forces to move. In other words, one reason earthbound mining equipment is heavy is that it breaks. This brute-force method, however, cannot be used in space.

Entirely aside from weight limitations, heavy forces cannot be generated on the Moon and especially on asteroids, because lower gravity means less traction. NASA has done some research on certain details of this problem, but there is a need for fundamental thinking about how to avoid using big forces. One solution, although it would be limited to surface mining, is the slusher-scoop. This device scoops up material in a bucket dragged across the surface by cables and a winch.

One obvious advantage of this method is that it bypasses low-gravity traction problems. Slushers are already in use here on Earth. According to Peterson, the device was invented by a person named Pat Farell. Farell was, Peterson stated, a very innovative mining engineer, partly because he did not attend college and therefore did not learn what couldn't be done. Some possible alternatives to the use of big forces were discussed during the question period that followed the formal presentations.

One was the so-called laser cutter. This, Peterson indicated, is a potential solution if power problems can be overcome. It does a good job and leaves behind a vitrified tube in the rock. Another possibility is fusion pellets, which create shock waves by impact. On the other hand, nuclear charges are not practical. Aside from considerations generated by treaties banning the presence of nuclear weapons in space, they would throw material too far in a low-gravity environment.

Supernova Star

A supernova is a STAR that explodes. It suddenly increases in brightness by a factor of many billions, and within a few weeks it slowly fades. In terms of the human lifespan, such explosions are rare occurrences. In our Milky Way galaxy, for example, a supernova may be observed every few hundred years. Three such explosions are recorded in history: in 1054, in 1572, and in 1604. The CRAB NEBULA consists of material ejected by the supernova of 1054.

Such materials, known as supernova remnants, are common. The supernovas observed in modern times have all occurred in other galaxies, the most distant yet having been detected in 1988 in a galaxy 5 billion light-years away. The most interesting supernova of recent times was detected in the relatively nearby Large MAGELLANIC CLOUD on Feb. 23, 1987, by an astronomer at Chile's Las Campanas Observatory.

It quickly became an object of intense study by all the means available to modern astronomy. A supernova may radiate more energy in a few days than the Sun does in 100 million years, and the energy expended in ejecting material is much greater even than this. In many cases, including the Crab nebula supernova, the stellar remnant left behind after the explosion is a NEUTRON STAR (a star only a few kilometers in diameter having an enormously large density and consisting mainly of neutrons) or a PULSAR, a pulsating neutron star. There are two common types of supernovas, called type I and type II.
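The energy comparison above can be checked with rough arithmetic. The sketch below uses the standard solar luminosity and a representative radiated supernova energy of a few times 10^42 joules; both figures are round-number assumptions, not values taken from this essay.

```python
# Rough check: does a supernova outshine the Sun's output over 100 Myr?
# L_SUN is the standard solar luminosity; SN_RADIATED is an assumed
# ballpark figure for the light a typical supernova emits.
L_SUN = 3.8e26            # watts
SECONDS_PER_YEAR = 3.15e7

sun_energy = L_SUN * 100e6 * SECONDS_PER_YEAR   # joules over 100 million years
sn_radiated = 3e42                              # joules, assumed ballpark

print(f"Sun over 100 Myr: {sun_energy:.2e} J")
print(f"Supernova light:  {sn_radiated:.2e} J")
print("supernova wins:", sn_radiated > sun_energy)
```

The Sun's century-of-eons total comes out near 1.2 × 10^42 J, so even a conservative supernova figure exceeds it.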

Type I occurs among old stars of small mass, whereas type II occurs among very young stars of large mass. It is not known how a small-mass star can release the very large amounts of energy needed to explain type I supernovas. Scientists generally believe that this must involve binary systems--two stars revolving around each other. In such a system one of the stars is a WHITE DWARF, a small, dense star that is near the end of its nuclear burning phase. After attracting matter from the companion star for some time, the white dwarf eventually collapses with a great rush, becoming a neutron star, and ejecting matter outward.

This rebound of matter is what is observed as the type I explosion. Stars with large masses burn their nuclear fuel very rapidly. Within a million years or less, such stars build cores containing much iron. When the iron eventually burns, energy is quickly drained from the core, and the star cannot continue to support itself against gravity. It suffers a mighty collapse analogous to that of a type I supernova, and the rebound causes matter to be ejected in a type II supernova explosion. Stars ending in this way are typically red SUPERGIANTS, but the one that exploded as 1987A was a blue star, named Sanduleak, with a mass only about 15 times that of the Sun.

Its pattern of brightening and fading also varied notably from that of typical type II supernovas, and an as yet unexplained "mystery spot" appeared some time after the explosion, apparently near Sanduleak's former location. In 1989 astronomers thought that they had detected an extremely fast-spinning pulsar at that location, but much further data is still needed before this finding is confirmed. Cosmologists estimate that the Universe came into existence about 15 billion years ago. This involved the initial creation of hydrogen and helium.

Since then nuclear fusion in stars has changed some of the original hydrogen and helium into heavier elements (see STELLAR EVOLUTION). Supernovas have played an important role both in producing the heavy elements and in ejecting material back into space, where it has been used to make new stars and, probably, PLANETARY SYSTEMS. It is possible that one or more supernovas exploded shortly before the formation of our solar system. Elements ejected from these explosions could have mixed with the solar nebula, eventually becoming part of the structures of the Sun and the planets.

Alexander "Sandy" Calder

Alexander "Sandy" Calder was born into a family of renowned artists who encouraged him to create from a very young age. As a boy, he had his own workshop where he made toys for himself and his sister. He received a degree in mechanical engineering in 1919 but soon after decided to pursue a career as an artist. Calder attended classes at the Art Students League in New York from 1923 to 1926, supporting himself by working as an illustrator.

In 1926 Calder arrived in Paris, where he developed his Cirque Calder, a work of performance art employing small-scale circus figures he sculpted from wire, wood, cloth, and other materials. Through these elaborate performances, Calder met members of the Parisian avant-garde. At the same time, Calder sculpted three-dimensional figurative works using continuous lengths of wire, which critics described as drawings in space. He explored ways to sculpt volume without mass and to capture the essence of his subject through an economy of line and articulated movement.

Calder’s wire works then became increasingly gestural, implying motion. By the end of 1930, this direction yielded his first purely abstract sculptures. After translating drawing into three dimensions, Calder envisioned putting paintings into motion. He developed constructions of abstract shapes that can shift and change the composition as the elements respond to air currents. These sculptures of wire and sheet metal (or other materials) are called mobiles. A mobile laid flat exists only as a skeleton, a reminder of its possibilities, but when suspended it seems to come alive.

Calder also developed stabiles, static sculptures that suggest volume in multiple flat planes, as well as standing mobiles, in which a mobile is balanced on top of a stabile. Calder furthered his work by developing a monumental scale. His later objects were huge sculptures of arching lines and graceful abstract shapes that now inhabit public plazas worldwide. Calder was an artist of great originality who defined volume without mass and incorporated movement and time in art.

His inventions redefined certain basic principles of sculpture and have established him as the most innovative sculptor of the twentieth century. Alexander Calder, America’s first abstract artist of international renown, is forever associated with his invention of the mobile. Born into a Philadelphia family of sculptors, he studied first as a mechanical engineer and then as a painter in the style of the Ashcan School. In 1926, Calder left for Paris, then Europe’s cultural capital.

There he attracted the attention of the avant-garde with his amusing performances with a partly-mechanized miniature circus of wire and cloth figures. By 1930 he had developed freely moving sculptures of arcs and spheres. Calder’s mobiles were squarely within the spirit of the times, from their engagement with machine technology to their use of abstraction as a universal language of creative truth. Linked to Dada and Surrealism by playfulness and chance arrangement, his sculpture responded to Constructivism by energizing art’s elements in the viewer’s space.

Science Competition: Space Timeline

Below is my timeline of space; it should explain the many theories of how the universe came to be. It should explain how our galaxy was formed and what stage our star, the Sun, is in at the present moment. The timeline will take you from the moment the universe was created to the moment it will die. It will show each step in as much detail as I can find. The Big Bang Theory: I am going to start the timeline with the big bang theory, as people/scientists believe that it was at this point that our universe was created. The diagram below shows the early stages of the universe after the big bang.

I am going to show you step by step how each stage happened and what it meant. The short section of the timeline below shows the short period of time, 300 million years, after the big bang. Stage 1 The Big Bang - The universe began with an explosion that generated space and time, as well as all the matter and energy the universe has and will ever hold. For a small fraction of a second, the universe was an infinitely dense, hot fireball. The present theory describes a peculiar form of energy that could suddenly push out the fabric of space.

On a rare occasion, a process called "inflation" can cause a vast expansion of space filled with this energy. The inflationary expansion could only be stopped when this energy had transformed into matter and energy as we know it. Stage 2 Universe Shaped - After inflation, one millionth of a second after the Big Bang, the universe continued to expand, but not nearly as quickly as it had done. As it expanded, it became less dense and cooled down. The most basic forces in nature emerged: first gravity, then the strong force, then the weak force, followed by the electromagnetic force.

By the first second, the universe was made up of energy and elementary (basic) particles: quarks, electrons, photons, neutrinos and less familiar types. These particles smashed together to form protons and neutrons. Stage 3 Basic Elements Formed - Three seconds after the universe had taken shape, protons and neutrons came together to form the nuclei of simple elements such as hydrogen, helium and lithium. It took another 300,000 years for electrons to be captured into orbits around those nuclei to form stable atoms.

Stage 4 The Radiation Era - The first major era in the history of the universe was one in which most of the energy was in the form of radiation: different wavelengths of light, X rays, radio waves and ultraviolet rays. This energy was the remnant of the primordial fireball, and as the universe expands, the waves of radiation are stretched and diluted; today they make up the faint glow of microwaves which bathes the entire universe. Stage 5 Matter Domination Era - At this moment, the energy in matter and the energy in radiation were equal.

But as the vast expansion continued, the waves of light were stretched to lower energy, whilst the matter travelled onwards largely unaffected. At about this time, neutral atoms formed as electrons linked up with hydrogen and helium nuclei. The microwave background radiation hails from this moment, and thus gives us a direct picture of how matter was distributed at this early time. Stage 6 Stars and Galaxies Formed - Gravity amplified the slight irregularities in the density of the primordial gas. Even as the universe continued to expand rapidly, pockets of gas became more and more dense.

Stars ignited within these pockets, and groups of stars became the earliest galaxies. This point was still perhaps 12 to 15 billion years before the present. The Hubble Space Telescope recently captured some of the earliest galaxies ever viewed. They appear as tiny blue dots in the Hubble Deep Field, the image on the left. Here are some of the pictures of Galaxies I managed to find. That concludes the first part of my timeline about the period of time after the big bang. The next part of my timeline is the period of time which was before the present time but after the Big Bang Period.

The diagram below shows the period of time called Time before Present. Stage 7 Birth of the Sun - 5 billion years after the Big Bang, our Sun was born. The Sun was formed within a cloud of gas in a spiralling arm of the Milky Way Galaxy. A vast disk of gas and debris that swirled around our new star gave birth to our planets, moons, and asteroids. The image on the left was taken by the Hubble Space Telescope, showing a newborn star in the Orion Nebula surrounded by a disk of dust and gas that may one day collapse into planets, moons and asteroids, just as our Sun's did 5 billion years ago.

Stage 8 Earliest Life - 3. million years after the birth of our Sun, the planet Earth had cooled and an atmosphere was created. Microscopic living cells, neither plants nor animals, began to evolve and flourish in Earth's many volcanic environments. Stage 9 Primitive Animals Appeared - These were mostly flatworms, jellyfish and algae. By 570 million years before the present, large numbers of creatures with hard shells suddenly appeared. Stage 10 The First Mammals Appeared - The first mammals evolved from a class of reptiles which had evolved mammalian traits (characteristics), such as a segmented jaw and the series of bones that make up the inner ear.

Stage 11 Dinosaurs Became Extinct - An asteroid or comet slammed into the northern part of the Yucatan Peninsula in Mexico 65 million years ago. This world-wide cataclysm brought an end to the long age of the dinosaurs, and allowed mammals to diversify and expand their ranges. Stage 12 Homo sapiens Evolved - Our earliest ancestors evolved in Africa from a line of creatures that descended from apes. Stage 13 Supernova 1987A Explodes - A star exploded in a dwarf galaxy known as the Large Magellanic Cloud, which lies just beyond the Milky Way.

The star was a blue supergiant 25 times more massive than our Sun. Such an explosion distributed all the common elements, such as oxygen, carbon, nitrogen, calcium and iron, into space, where they enriched clouds of hydrogen and helium which formed new stars. It also created the heavier elements (such as gold, silver, lead, and uranium) and distributed these as well. Supernova remnants generated the cosmic rays which led to mutation and evolution in living cells. These supernovae, then, were key to the evolution of the Universe and to life itself.

That concludes that part of the timeline. The next part is the one closest to the time in which we are living now. We probably find these discoveries and events the most familiar because they are the closest to the present. This part of the timeline is called the AD era because these events all occurred after the birth of Christ. Most of the dates will probably make more sense than those in the last couple of parts of the timeline. (1054) Crab Supernova Appeared - A new star in the constellation Taurus appeared which was brighter than Venus.

Chinese, Japanese, and Native American observers recorded the appearance of a supernova (image at left). The remnants of this explosion are visible today as the Crab Nebula. Within the nebula, astronomers have found a pulsar, the ultra-dense remains of the star that blew up. 1609 - Five years after the appearance of the great supernova of 1604, Galileo built his first telescope. He saw the moons of Jupiter, Saturn's rings, the phases of Venus, and the stars in the Milky Way. He published the news the following year in The Starry Messenger. 1665 - At the age of 23, young Isaac Newton realized that the gravitational force accounts for falling bodies on earth as well as the motion of the moon and the planets in orbit. This was a revolutionary step in the history of thought: one set of laws, discovered and tested on our planet, would be seen to govern the entire universe. 1905 - Roughly three centuries after Isaac Newton's discovery about gravity, the scientist Albert Einstein replaced Newton's model of gravity with his own theory of relativity.

Predictions of black holes and an expanding Universe are immediate consequences of this revolutionary theory, which remains unchallenged today. 1929 - The astronomer Edwin Hubble used the new 100-inch telescope on Mt. Wilson in Southern California to discover that the farther away a galaxy is, the more its light is shifted to the red. And the redder a galaxy's light, the faster it is moving away from us. Discovery of Quasars (1960) - Two astronomers, Allan Sandage and Thomas Matthews, found sources of intense radio energy, which they decided to call Quasi-Stellar Radio Sources.
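Hubble's red-shift observation is usually summarized as a simple proportionality, v = H0 × d. A minimal sketch, assuming a modern round value of H0 (about 70 km/s per megaparsec, not Hubble's original 1929 figure):

```python
# Hubble's law: recession velocity grows linearly with distance.
H0 = 70.0  # km/s per megaparsec (modern round value; an assumption here)

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity in km/s for a galaxy distance_mpc megaparsecs away."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7,000 km/s.
print(recession_velocity(100))
```

The farther the galaxy, the redder (faster-receding) its light, exactly as the paragraph above states.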

Four years later, Maarten Schmidt discovered that these sources lie at the edge of the visible universe. In recent years, astronomers have realized that there are gigantic black holes at the centres of young galaxies, into which matter is heated to high temperatures and glows brightly as it rushes in. The picture on the right shows what a black hole looks like when it is sucking in high-temperature matter. Microwave Background Radiation (1964) - Scientists discovered that microwave radiation bathes our Earth from all over space. This radiation remains as the afterglow of the big bang.

Discovery of Pulsars (1967) - A graduate student, Jocelyn Bell, and her professor, Anthony Hewish, discovered intense pulsating sources of radio energy, known as pulsars. Pulsars were the first known examples of neutron stars, extremely dense objects that form in the wake of some supernovae. The Crab pulsar, the tiny star in the middle of the Hubble Space Telescope image on the left, is the remnant of the bright supernova. Light From Supernova Reaches Earth (1987) - Light from this supernova reached Earth in 1987, 170,000 years after the star exploded.

Astronomers rushed to telescopes in the southern hemisphere to study the progress of the explosion and test the models describing the violent deaths of large stars. Hubble Space Telescope Launched (1990) - The twelve-ton telescope, equipped with a 94-inch mirror, was sent into orbit by astronauts aboard the space shuttle Discovery. Within two months, a flaw in its mirror was discovered, placing in jeopardy the largest investment ever made in astronomy. Big Bang Theory Confirmed (1990) - Astronomers used the new Cosmic Background Explorer satellite to take a detailed reading of the microwave background radiation.

Those readings supported the big bang theory. Two years later, scientists used the same instrument to discover small variations in the background radiation, which showed the first evidence of the universe's structure. Hubble Telescope Repaired (1993) - The final part of this section of the timeline, the closest event to have happened during our lifetime, is the repair of the Hubble space telescope. Astronauts aboard the space shuttle Endeavour succeeded in correcting Hubble's flawed optics. This final part of my universe timeline explains scientifically what mankind can expect from the universe in the years to come.

Stellar Era Ends - Astronomers assume that the universe will gradually die away, provided it keeps on expanding and does not recollapse under the pull of its own gravity. From 10,000 years to 100 trillion years after the Big Bang, most of the energy generated by the universe is and will be in the form of stars burning hydrogen and other elements in their cores. Degenerate Era Begins - This era will last up to ten trillion trillion trillion years after the Big Bang. Most of the mass that we can currently see in the universe, e.g. stars and planets, will be locked up in degenerate stars: those that have blown up and collapsed into black holes and neutron stars, or have withered into white dwarfs. Black Hole Era - After the degenerate era, when all the protons have decayed and all particles have given up their energy, the only stellar-like objects remaining are black holes of widely disparate masses, which actively evaporate during this era. The Dark Era - At this late time, protons have decayed and black holes have evaporated. Only the waste products of these processes remain, and the universe as we know it has dissipated.

Pluto – a planet or not?

Come wander with me, she said,
Into regions yet untrod;
And read what is still unread
In the manuscripts of God.
– Longfellow

Although Pluto was discovered in 1930, limited information on the distant planet delayed a realistic understanding of its characteristics. Today Pluto remains the only planet that has not been visited by a spacecraft, yet an increasing amount of information is unfolding about this peculiar planet. The uniqueness of Pluto's orbit, rotational relationship with its satellite, spin axis, and light variations all give the planet a certain appeal.

Pluto is usually farther from the Sun than any of the other eight planets; however, due to the eccentricity of its orbit, it is closer than Neptune for 20 years out of its 249-year orbit. Pluto crossed Neptune's orbit on January 21, 1979, made its closest approach on September 5, 1989, and will remain within the orbit of Neptune until February 11, 1999. This will not occur again until September 2226. As Pluto approaches perihelion it reaches its maximum distance from the ecliptic due to its 17-degree inclination.

Thus, it is far above or below the plane of Neptune's orbit. Under these conditions, Pluto and Neptune will not collide and do not approach closer than 18 AU to one another. Pluto's rotation period is 6.387 days, the same as that of its satellite Charon. Although it is common for a satellite to travel in a synchronous orbit with its planet, Pluto is the only planet to rotate synchronously with the orbit of its satellite. Thus being tidally locked, Pluto and Charon continuously face each other as they travel through space.

Unlike most planets, but similar to Uranus, Pluto rotates with its poles almost in its orbital plane. Pluto's rotational axis is tipped 122 degrees. When Pluto was first discovered, its relatively bright south polar region was the view seen from the Earth. Pluto appeared to grow dim as our viewpoint gradually shifted from nearly pole-on in 1954 to nearly equator-on in 1973. Pluto's equator is now the view seen from Earth. During the period from 1985 through 1990, Earth was aligned with the orbit of Charon around Pluto such that an eclipse could be observed every Pluto day.

This provided the opportunity to collect significant data, which led to albedo maps defining surface reflectivity and to the first accurate determination of the sizes of Pluto and Charon, including all the numbers that could be calculated therefrom. The first eclipses (mutual events) began by blocking the north polar region. Later eclipses blocked the equatorial region, and the final eclipses blocked Pluto's south polar region. By carefully measuring the brightness over time, it was possible to determine surface features.

It was found that Pluto has a highly reflective south polar cap, a dimmer north polar cap, and both bright and dark features in the equatorial region. Pluto's geometric albedo is 0.49 to 0.66, which is much brighter than Charon's; Charon's albedo ranges from 0.36 to 0.39. The eclipses lasted as much as four hours, and by carefully timing their beginning and ending, measurements of the two diameters were taken. The diameters can also be measured directly, to within about 1 percent, from more recent images provided by the Hubble Space Telescope.

These images resolve the objects into two clearly separate disks. The improved optics allow us to measure Pluto's diameter as 2,274 kilometers (1,413 miles) and Charon's diameter as 1,172 kilometers (728 miles), just over half the size of Pluto. Their average separation is 19,640 km (12,200 miles); that's roughly eight Pluto diameters. The average separation and orbital period are used to calculate Pluto's and Charon's masses. Pluto's mass is about 6.4 x 10^-9 solar masses. This is close to 7 times the mass of Charon (an earlier estimate gave 12 times) and approximately 0.0021 Earth masses, or about a fifth of our Moon's.
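The mass calculation described above is an application of Kepler's third law: the combined mass of the pair follows from the separation and orbital period alone. A sketch using the figures quoted in this essay (G and the solar mass are standard values):

```python
import math

# Kepler's third law: M_total = 4*pi^2 * a^3 / (G * P^2)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
a = 19_640e3         # Pluto-Charon mean separation from the text, meters
P = 6.387 * 86400    # orbital period from the text, seconds

m_total = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"Pluto+Charon combined mass: {m_total:.2e} kg")
print(f"in solar masses: {m_total / 1.989e30:.1e}")
```

The result, roughly 1.5 × 10^22 kg (about 7 × 10^-9 solar masses for the pair), is consistent with the Pluto-only figure quoted above once Charon's smaller share is subtracted.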

Pluto's average density lies between 1.8 and 2.1 grams per cubic centimeter, from which it is concluded that Pluto is 50% to 75% rock mixed with ices. Charon's density is 1.2 to 1.3 g/cm3, indicating it contains little rock. The difference in density tells us that Pluto and Charon formed independently, although Charon's numbers derived from HST data are still being challenged by ground-based observations. Pluto and Charon's origin remains in the realm of theory. Pluto's icy surface is 98% nitrogen (N2). Methane (CH4) and traces of carbon monoxide (CO) are also present.
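The quoted density range can be sanity-checked from the mass and diameter given earlier (0.0021 Earth masses, 2,274 km). A quick division of mass by volume:

```python
import math

# Density = mass / volume, using figures quoted in the text.
EARTH_MASS = 5.97e24          # kg (standard value)
mass = 0.0021 * EARTH_MASS    # Pluto's mass, kg
radius = 2274e3 / 2           # meters, from the 2,274 km diameter

volume = 4 / 3 * math.pi * radius**3
density_g_cm3 = mass / volume / 1000   # convert kg/m^3 to g/cm^3
print(f"Pluto density: {density_g_cm3:.1f} g/cm^3")
```

This lands right around 2.0 g/cm^3, inside the 1.8 to 2.1 range quoted above.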

The solid methane indicates that Pluto is colder than 70 Kelvin. Pluto's temperature varies widely during the course of its orbit, since Pluto can be as close to the Sun as 30 AU and as far away as 50 AU. There is a thin atmosphere that freezes and falls to the surface as the planet moves away from the Sun. NASA plans to launch a spacecraft, the Pluto Express, in 2001 that will allow scientists to study the planet before its atmosphere freezes. The atmospheric pressure deduced for Pluto's surface is 1/100,000 that of Earth's surface pressure.

Pluto was officially labeled the ninth planet by the International Astronomical Union in 1930 and named for the Roman god of the underworld. It was the first and only planet to be discovered by an American, Clyde W. Tombaugh. The path toward its discovery is credited to Percival Lowell, who founded the Lowell Observatory in Flagstaff, Arizona and funded three separate searches for "Planet X." Lowell made numerous unsuccessful calculations to find it, believing it could be detected from the effect it would have on Neptune's orbit.

Dr. Vesto Slipher, the observatory director, hired Clyde Tombaugh for the third search. Clyde took sets of photographs of the plane of the solar system (the ecliptic) one to two weeks apart and looked for anything that shifted against the backdrop of stars. This systematic approach was successful, and Pluto was discovered by this 24-year-old Kansas lab assistant (born February 4, 1906) on February 18, 1930. Pluto is actually too small to be the "Planet X" Percival Lowell had hoped to find; Pluto's discovery was serendipitous. Pluto & Charon: This view of Pluto was taken by the Hubble Space Telescope.

It shows a rare image of tiny Pluto with its moon Charon, which is slightly smaller than the planet. Because Pluto has not yet been visited by any spacecraft, it remains a mysterious planet. Due to its great distance from the sun, Pluto's surface is believed to reach temperatures as low as -240°C (-400°F). From Pluto's surface, the Sun appears as only a very bright star. (Courtesy NASA) Hubble Telescope Image: This is the clearest view yet of the distant planet Pluto and its moon, Charon, as revealed by the Hubble Space Telescope (HST). The image was taken on February 21, 1994, when the planet was 4.4 billion kilometers (2.7 billion miles) from the Earth.

The HST's corrected optics show the two objects as clearly separate and sharp disks. This now allows astronomers to measure directly (to within about 1 percent) Pluto's diameter of 2,320 kilometers (1,440 miles) and Charon's diameter of 1,270 kilometers (790 miles). The HST observations show that Charon is bluer than Pluto, which means that the two worlds have different surface compositions and structure. A bright highlight on Pluto indicates that it might have a smoothly reflecting surface layer.

A detailed analysis of the HST image also suggests that there is a bright area parallel to the equator of Pluto; however, subsequent observations are needed to confirm that this feature is real. The new HST image was taken when Charon was near its maximum elongation from Pluto (0.9 arcseconds). The two worlds are 19,640 kilometers (12,200 miles) apart. (Courtesy NASA/ESA/ESO) The Surface of Pluto: The never-before-seen surface of the distant planet Pluto is resolved in these NASA Hubble Space Telescope pictures.

These images, which were made in blue light, show that Pluto is an unusually complex object, with more large-scale contrast than any planet, except Earth. Pluto probably shows even more contrast and perhaps sharper boundaries between light and dark areas than is shown here, but Hubble’s resolution (just like early telescopic views of Mars) tends to blur edges and blend together small features sitting inside larger ones. The two smaller inset pictures at the top are actual images from Hubble. North is up. Each square pixel (picture element) is more than 100 miles across.

At this resolution, Hubble discerns roughly 12 major “regions” where the surface is either bright or dark. The larger images (bottom) are from a global map constructed through computer image processing performed on the Hubble data. Opposite hemispheres of Pluto are seen in these two views. (Courtesy NASA/ESA/ESO) Pluto, Charon, and USA Comparison This image shows the approximate size of Pluto and Charon by overlaying them on an Advanced Very High Resolution Radiometer (AVHRR) image of the United States of America. Pluto is about 2274 kilometers (1410 miles) in diameter and Charon 1172 kilometers (727 miles) in diameter.

The image of Pluto is based upon Hubble observations taken of Pluto in June and July of 1994. The Charon image is based upon photometric measurements acquired by Marc Buie of Lowell Observatory. (Copyright 1998 by Calvin J. Hamilton) Map of the Surface of Pluto This is the first image-based surface map of the solar system’s most remote planet, Pluto. The map, which covers 85% of the planet’s surface, confirms that Pluto has a dark equatorial belt and bright polar caps, as inferred from ground-based light curves obtained during the mutual eclipses that occurred between Pluto and its satellite Charon in the late 1980s.

The brightness variations in this map may be due to topographic features such as basins and fresh impact craters. However, most of the surface features are likely produced by the complex distribution of frosts that migrate across Pluto’s surface with its orbital and seasonal cycles and chemical byproducts deposited out of Pluto’s nitrogen-methane atmosphere. Names may later be proposed for some of the larger regions. Image reconstruction techniques smooth out the coarse pixels in the four raw images to reveal major regions where the surface is either bright or dark.

The black strip across the bottom corresponds to the region surrounding Pluto’s south pole, which was pointed away from Earth when the observations were made, and could not be imaged. (Courtesy NASA/ESA/ESO) Ground vs. Hubble Comparison This image shows a comparison between a ground based view (left) and a Hubble Space Telescope view (right) of Pluto and Charon. Nordic Optical Telescope This image of Pluto was taken with the 2.6-meter Nordic Optical Telescope, located at La Palma, Canary Islands. It is a good example of the best imagery that can be obtained from Earth-based telescopes.

(Copyright Nordic Optical Telescope Scientific Association — NOTSA) Pluto Express This is a painting by Pat Rawlings of the Pluto Express mission scheduled for launch in 2001 to arrive at Pluto around 2006-2008. The mission will consist of a pair of small, fast, relatively cheap spacecraft weighing less than 100 kilograms (220 pounds) each. The spacecraft will pass within 15,000 kilometers (9,300 miles) of Pluto and Charon. (Courtesy Pat Rawlings/NASA/JPL) Pluto Come wander with me, she said, Into regions yet untrod; And read what is still unread In the manuscripts of God. – Longfellow

Although Pluto was discovered in 1930, limited information on the distant planet delayed a realistic understanding of its characteristics. Today Pluto remains the only planet that has not been visited by a spacecraft, yet an increasing amount of information is unfolding about this peculiar planet. The uniqueness of Pluto’s orbit, its rotational relationship with its satellite, its spin axis, and its light variations all give the planet a certain appeal. Pluto is usually farther from the Sun than any other planet; however, due to the eccentricity of its orbit, it is closer than Neptune for 20 years out of its 249-year orbit.
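The 249-year period and the roughly 20-year stretch inside Neptune’s orbit both follow from Pluto’s orbital elements. Below is a hedged sketch; the semimajor axis (about 39.5 AU) and eccentricity (about 0.25) are assumed textbook values, not figures taken from this text.

```python
def orbit_summary(a_au: float, e: float) -> tuple[float, float, float]:
    """Perihelion, aphelion, and period from semimajor axis and eccentricity."""
    perihelion = a_au * (1 - e)      # closest distance to the Sun (AU)
    aphelion = a_au * (1 + e)        # farthest distance from the Sun (AU)
    period_yr = a_au ** 1.5          # Kepler's third law in AU and years
    return perihelion, aphelion, period_yr

q, big_q, period = orbit_summary(39.5, 0.25)
print(q, big_q, period)  # roughly 29.6 AU, 49.4 AU, 248 years
```

A perihelion near 29.6 AU lies inside Neptune’s roughly 30 AU orbit, which is why Pluto spends about two decades of each revolution closer to the Sun than Neptune.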

Pluto crossed Neptune’s orbit January 21, 1979, made its closest approach September 5, 1989, and will remain within the orbit of Neptune until February 11, 1999. This will not occur again until September 2226. As Pluto approaches perihelion it reaches its maximum distance from the ecliptic due to its 17-degree inclination. Thus, it is far above or below the plane of Neptune’s orbit. Under these conditions, Pluto and Neptune will not collide and do not approach closer than 18 AU to one another. Pluto’s rotation period is 6.87 days, the same as its satellite Charon. Although it is common for a satellite to travel in a synchronous orbit with its planet, Pluto is the only planet to rotate synchronously with the orbit of its satellite. Thus being tidally locked, Pluto and Charon continuously face each other as they travel through space. Unlike most planets, but similar to Uranus, Pluto rotates with its poles almost in its orbital plane. Pluto’s rotational axis is tipped 122 degrees. When Pluto was first discovered, its relatively bright south polar region was the view seen from the Earth.

Pluto appeared to grow dim as our viewpoint gradually shifted from nearly pole-on in 1954 to nearly equator-on in 1973. Pluto’s equator is now the view seen from Earth. During the period from 1985 through 1990, Earth was aligned with the orbit of Charon around Pluto such that an eclipse could be observed every Pluto day. This provided an opportunity to collect significant data, which led to albedo maps defining surface reflectivity and to the first accurate determination of the sizes of Pluto and Charon, including all the numbers that could be calculated therefrom. The first eclipses (mutual events) began blocking the north polar region.

Later eclipses blocked the equatorial region, and the final eclipses blocked Pluto’s south polar region. By carefully measuring the brightness over time, it was possible to determine surface features. It was found that Pluto has a highly reflective south polar cap, a dimmer north polar cap, and both bright and dark features in the equatorial region. Pluto’s geometric albedo is 0.49 to 0.66, much brighter than Charon’s, which ranges from 0.36 to 0.39. The eclipses lasted as much as four hours, and by carefully timing their beginning and ending, measurements of the bodies’ diameters were taken.

The diameters can also be measured directly to within about 1 percent in more recent images provided by the Hubble Space Telescope. These images resolve the objects into two clearly separate disks. The improved optics allow us to measure Pluto’s diameter as 2,274 kilometers (1,413 miles) and Charon’s diameter as 1,172 kilometers (728 miles), just over half the size of Pluto. Their average separation is 19,640 km (12,200 miles), roughly eight Pluto diameters. Average separation and orbital period are used to calculate Pluto and Charon’s masses. Pluto’s mass is about 6 × 10^-9 solar masses. This is close to 7 times the mass of Charon and approximately 0.0021 Earth mass, or a fifth of our moon. Pluto’s average density lies between 1.8 and 2.1 grams per cubic centimeter, from which it is concluded that Pluto is 50% to 75% rock mixed with ices. Charon’s density is 1.2 to 1.3 g/cm3, indicating it contains little rock. The differences in density tell us that Pluto and Charon formed independently, although Charon’s figures derived from HST data are still being challenged by ground-based observations. Pluto and Charon’s origin remains in the realm of theory.
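The mass calculation described above can be sketched with Kepler’s third law. This is a rough illustration assuming the separation and the 6.87-day period quoted in the text and a standard value for G; it yields the combined Pluto-Charon system mass, not Pluto’s alone.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def system_mass_kg(separation_m: float, period_s: float) -> float:
    """Kepler's third law: M_total = 4 * pi^2 * a^3 / (G * T^2)."""
    return 4 * math.pi ** 2 * separation_m ** 3 / (G * period_s ** 2)

# 19,640 km separation, 6.87-day period (figures quoted above):
mass = system_mass_kg(19_640e3, 6.87 * 86400)
print(f"{mass:.2e} kg")  # on the order of 10^22 kg
```

The result, about 1.3 × 10^22 kg, is consistent with the roughly 0.002 Earth-mass figure quoted above (Earth’s mass is about 6 × 10^24 kg).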

Pluto’s icy surface is 98% nitrogen (N2). Methane (CH4) and traces of carbon monoxide (CO) are also present. The solid methane indicates that Pluto is colder than 70 Kelvin. Pluto’s temperature varies widely during the course of its orbit since Pluto can be as close to the sun as 30 AU and as far away as 50 AU. There is a thin atmosphere that freezes and falls to the surface as the planet moves away from the Sun. NASA plans to launch a spacecraft, the Pluto Express, in 2001 that will allow scientists to study the planet before its atmosphere freezes.

The atmospheric pressure deduced for Pluto’s surface is 1/100,000 that of Earth’s surface pressure. Pluto was officially labeled the ninth planet by the International Astronomical Union in 1930 and named for the Roman god of the underworld. It was the first and only planet to be discovered by an American, Clyde W. Tombaugh. The path toward its discovery is credited to Percival Lowell, who founded the Lowell Observatory in Flagstaff, Arizona, and funded three separate searches for “Planet X.” Lowell made numerous unsuccessful calculations to find it, believing it could be detected from the effect it would have on Neptune’s orbit.

Dr. Vesto Slipher, the observatory director, hired Clyde Tombaugh for the third search. Tombaugh took sets of photographs of the plane of the solar system (the ecliptic) one to two weeks apart and looked for anything that shifted against the backdrop of stars. This systematic approach was successful, and Pluto was discovered by this 24-year-old Kansas lab assistant (born February 4, 1906) on February 18, 1930. Pluto is actually too small to be the “Planet X” Percival Lowell had hoped to find; its discovery was serendipitous.

Extra Terrestrial

In an ever-expanding galaxy, humans cannot be the only intelligent life forms. Somewhere, in some universe, there exists a form of life equal or superior to the intellectual capability and performance of humans. Many people have seen unexplained occurrences that could only be classified as alien life forms. Thousands of humans claim to have seen aliens, and hundreds more say they can communicate with them. These sightings have intrigued people all over the world and have led to a debate on whether UFOs are real. There is no doubt they exist. For many decades people have claimed to have spotted UFOs.

There are many theories on whether UFOs exist, ranging from a government cover-up to another dimension breaking the barrier between our worlds, from UFOs traveling through black holes to there being millions of other universes with intelligent life. Yet the only evidence of any extraterrestrial beings comes from sightings by ordinary people. World governments tried to keep the phenomenon of extraterrestrial life forms secret, because our society is gullible, believing every myth that one might tell to another. This confidentiality only led to more curiosity among the people.

So far, in spite of a concerted effort by not only the U.S. government but other governments worldwide to discredit the whole notion of unidentified flying objects; in spite of the ridicule heaped on anyone who even hints they may have seen something remotely resembling a UFO; in spite of official debunkings and even threats, people still step out of the shadows of their own fear to tell their amazing stories. (Seller, ix) People do talk, and every time something is repeated a couple of words are added. Hence, not too many people know the truth. One example of a UFO sighting was

For many years humans have criticized one another if their beliefs were not those of the common public: for example, Christopher Columbus and his theory on the shape of the Earth, or Galileo and his theory on the rotations of the planets. Both were ridiculed, as are most believers in extraterrestrial life today. However, both were proven correct, as believers in extraterrestrial life will be. With every century people gained more knowledge and a deeper understanding of many natural and physical phenomena. Time will pass, and we will understand what UFOs are all about, and who pilots them.

Discovering Comets

Before the seventeenth century, comets were considered portents: warning shots fired at a sinful Earth from the right hand of an avenging God. However, in the post-Newtonian era, when their paths were understood to intersect that of the Earth, they were considered actual agents of destruction. Experts have described comets as both the carriers of life-seeds to the early Earth and horrific missiles that will one day snuff out life as we know it. At one time or another, people have blamed comets for wars and held them responsible for the deaths of men, the birth of good wine, the London fire of 1666, severely cold weather, and more.

If one central theme runs throughout the history of comets, it must be the public concern they have commanded. Comets are ancient objects, formed in the outer reaches of the Solar System from the ices of gases such as methane, water vapor, and ammonia, combined with dust from primitive rock compounds. Comets are sometimes described as “dirty snowballs” because they are icy lumps, or wandering icebergs. Comets are relatively tiny, just a few miles across on average. Their nuclei are very different from the glowing balls of light, with multimillion-mile-long tails, that we see in the sky.

This is one reason comets occasionally visit the inner Solar System. Astronomers divide comets into long-period types, with orbits of more than 200 years, and short-period types, with orbits of less than 200 years (as cited in Branley 1988 p. 43). All comets begin their journey as long-period types. The gravitational fields of planets then capture long-period comets. Comets can have orbits at any angle because they can come from any region. Once comets are captured, they fall into line with the movement of the planets, staying close to the ecliptic and orbiting the Sun in the same direction as the planets.

One exception is Halley’s Comet. It is a short-period comet with an orbital period of about seventy-six years, and it travels in a retrograde orbit (as cited in Branley 1988 p. 44). A retrograde orbit is simply clockwise orbital motion, as seen from the north pole of a planet. Most Solar System orbits are counterclockwise. Like people, comets group too. When several comets with different periods travel in nearly the same orbit, experts say that they are members of a comet group.
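The classification rules above (the 200-year period boundary and retrograde versus prograde motion) can be captured in a few lines. A minimal sketch; the function and the sample inclination for Halley (about 162 degrees) are illustrative assumptions, not figures from the cited sources.

```python
def classify_comet(period_years: float, inclination_deg: float) -> tuple[str, str]:
    """Apply the 200-year period boundary and the retrograde test.

    An orbit inclined more than 90 degrees to the ecliptic is retrograde,
    i.e. clockwise as seen from a planet's north pole.
    """
    kind = "short-period" if period_years < 200 else "long-period"
    sense = "retrograde" if inclination_deg > 90 else "prograde"
    return kind, sense

print(classify_comet(76, 162))  # Halley: ('short-period', 'retrograde')
```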

One well-known group includes the spectacular Sun-grazing comet Ikeya-Seki of 1965 and seven others having periods of nearly a thousand years. Brian G. Marsden, an American astronomer, has concluded that the 1965 comet and the even brighter comet of 1882 split from a parent comet, possibly the one of 1106 (as cited in Yeomans 1991 p. 184). One interesting contribution of the comet is the solar effect. The process starts with a comet approaching the Sun. Once the comet approaches the Sun, solar heat sublimates, or evaporates, the ices.

This causes the comet to brighten enormously. Sometimes this develops into a brilliant tail, extending millions of kilometers into space, which slowly fades as the comet recedes again. What are these spectacular comet tails composed of? Comet tails are made up of simple ionized molecules, including carbon monoxide and carbon dioxide. These molecules are blown away by the solar wind, a thin stream of hot gases continuously ejected from the solar corona. In case you do not know the meaning of solar corona, it is the outermost atmosphere of the Sun.

Amazingly, these thin streams of hot gases move at a speed of approximately 400 kilometers (250 miles) per second (as cited in Yeomans 1991 p. 185). In addition, a comet frequently also displays smaller, curved tails composed of fine dust particles blown from the coma by the pressure of solar radiation. Yet, as a comet recedes from the Sun, the loss of gas and dust particles decreases, which contributes to the disappearance of the tail. Some comets with small orbits have tails so short that they are practically invisible.

However, the tail of at least one comet has exceeded approximately 320 million kilometers (200 million miles) in length (as cited in Yeomans 1991 p. 183). Surprisingly, of some 1,400 comets on record, fewer than half had tails visible to the naked eye, and fewer than 10 percent were conspicuous (as cited in Yeomans 1991). Interestingly, amateur astronomer Yuji Hyakutake of Japan discovered Comet Hyakutake on January 30, 1996 (as cited in rosat-goc-comet). By mid-March the comet was at its most visible in the northern hemisphere.
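Two figures quoted above, the roughly 400 km/s solar-wind speed and the 320-million-kilometer record tail length, combine into a quick arithmetic check of how long the wind takes to sweep the full length of such a tail.

```python
# Figures quoted in the text above; the traversal-time question itself
# is an illustrative exercise, not a claim from the cited sources.
tail_length_km = 320e6   # record tail length
wind_speed_km_s = 400.0  # typical solar-wind speed

seconds = tail_length_km / wind_speed_km_s
days = seconds / 86400
print(round(days, 1))  # about 9.3 days
```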

This surprise comet turned out to be the closest and most spectacular of the century, the brightest comet to come near the Earth in more than twenty years. Comet Hyakutake gave astronomers a wealth of new data. Not only did Comet Hyakutake bring new discoveries with its images, but it also renewed interest in observational astronomy when it passed the Earth in early 1996. The comet improved perceptions of the constellations it traversed. The appearance of Comet Hyakutake in March of 1996 inspired the same sense of wonder in those who observed it as in the ancients who first saw comets in the sky.

Many ancient astronomers linked comets to dragons breathing fire into the atmosphere, and viewers can readily see why while observing these celestial spheres (as cited in Vogt 1993 p. 25). Both amateur and professional astronomers relied heavily on enthusiastic reports of observations of Comet Hyakutake when it appeared for several days in March of 1996. The comet provided clear observations of its gas jets, fans, and a rare disconnection event as it moved towards the Sun (as cited in rosat-goc-comet). Did you know Comet Hyakutake, a 10-mile-wide block of space dust and ice, passed within 9.3 million miles of Earth on March 25, 1996? (as cited in encarta 1994).

Although the Hubble telescope provided many excellent pictures of the comet, amateur astronomers also aided in tracking it. For those of us who were too busy to see this spectacular object, NASA’s web page provided a graphic picture of the comet, so we too could get a glimpse of the spectacular sight of Comet Hyakutake. When astronomers observed Comet Hyakutake on March 27, 1996, they made the first ever extreme ultraviolet (EUV) image of a comet (as cited in rosat-goc-comet).

Interestingly, this observation of Comet Hyakutake was simultaneous with X-ray measurements made with the US-provided High Resolution Imager, reported in April. Why are these first ever X-ray/EUV images of a comet so remarkable? The images showed great, and quite unexpected, brightness. Many astronomers discovered large changes in brightness over a few hours. This very important discovery showed that previously unsuspected “high energy” processes must have been taking place in the comet.

The high energy was probably due to the influence of the Sun’s radiation and solar wind. Astronomers who pointed the German X-ray Roentgen Satellite at Comet Hyakutake were surprised by a brilliant X-ray emission, especially since no X-rays had ever before been detected from a comet (as cited in rosat-goc-comet). One theory is that solar X-rays interacting with water molecules in the comet produced the emission. On the other hand, some researchers believed that some X-rays were generated through the interaction between the Sun’s solar wind and the comet itself.

According to astronomer Michael Di Santi and his colleagues, methane and ethane were identified in Comet Hyakutake as it passed Earth in March of 1996 (as cited in rosat-goc-comet). Neither compound had ever been confirmed in comets, but the methane surprised many, since this compound was not believed to have been part of the material from which comets formed. For many, this comet was by far the most interesting thing to have witnessed. A man describes viewing Comet Hyakutake the night before its closest approach to Earth with a number of amateur astronomers atop Mount Tamalpais, just north of San Francisco, California.

Many were impressed with Comet Hyakutake. However, the sight was not charming for all. Comets have been given the title “great” throughout history for being awe-inspiring, for spurring interest in astronomy, and for their high visibility during perigee, the point in an orbit that is closest to the Earth. According to urban citizens, Comet Hyakutake did not produce such an effect, since its appearance was obscured by light pollution in the cities. According to astronomers, the range of phenomena attributed to comets is extraordinary.

Some of it is true, much of it nonsense, but all of it adds to their considerable mystique and perhaps explains the universal interest shown in these, the solar system’s smallest bodies. Comets are currently thought to be building blocks of the major planets and sources for some of Earth’s water, volatiles, and organic molecules. Cometary impacts on Earth have deposited some of the biogenic material from which primitive life may have ultimately formed (as cited in Yeomans 1991 p. 228).

Matter and its basic properties

Philosophy of the concept of “matter”

“Matter” is one of the most fundamental concepts of philosophy. However, different philosophical systems understand its content in different ways. Idealistic philosophy, for example, characteristically either rejects the existence of matter outright or denies its objectivity. Thus, the eminent ancient Greek philosopher Plato treats matter as a projection of the world of ideas. By itself, Plato’s matter is nothing; in order to become a reality, some idea must be embodied in it.

For Plato’s follower Aristotle, matter likewise exists only as a possibility, which is transformed into reality only as a result of its connection with form. Forms ultimately originate from God.

For G. Hegel, matter is a manifestation of the activity of the absolute idea, the absolute spirit. It is the absolute spirit, the idea, that generates matter.

In the subjective-idealistic philosophy of G. Berkeley, it is openly stated that there is no matter and that no one has ever seen it; if this concept were banished from science, no one would notice, because it means nothing. He wrote that one can use the concept of “matter,” if one really wants to, but only as a synonym for the word “nothing.” For Berkeley, to exist is to be perceived. To the question of whether nature existed before man, Berkeley would answer: yes, in the consciousness of God.

Other representatives of subjective idealism (E. Mach, R. Avenarius, and others) do not openly deny the existence of matter, but reduce it to “the totality (complexes) of sensations.” Matter, a thing, an object is, in their opinion, a complex of human sensations. It is human sensations that create and construct them.

In materialist philosophy there are also different ideas about matter. True, all materialist philosophers share the recognition of matter’s objective existence, independent of consciousness (sensations).

Already the ancient philosophers (Chinese, Indian, Greek) took as matter some common, sensuously concrete substance, which they considered to be the primary basis of everything existing in the world. Such an approach to the definition of matter can be called substantial, because its essence was the search for the basis (substance) of the world. For example, the ancient Greek philosopher Thales of Miletus (beginning and middle of the 6th century BC) believed that everything originated from water.

Even the land, in his opinion, floats on water like a piece of wood. A representative of the same Milesian school, the philosopher Anaximenes, asserted that all things come from the air, through its rarefaction or condensation: air, evaporating and rising upward and rarefying, turns into fiery celestial bodies; conversely, solid substances (earth, stones, and so on) are nothing but thickened and frozen air. The air is in constant motion. If it were motionless, we would not perceive it in any way; when it moves, it makes itself felt in the form of wind, clouds, flame. This means, Anaximenes teaches, that all things are modifications of air, and therefore air is the universal substrate of things.

Heraclitus of Ephesus considered fire to be the fundamental principle of all things. For Heraclitus, fire is the image of perpetual motion. “This cosmos,” he argued, “is the same for everyone; none of the gods and none of the people created it, but it always has been, is, and will be an ever-living fire, gradually kindling and gradually fading.”

Of course, it was hard to imagine that one single thing lies at the heart of the diversity of things and processes. Therefore, later philosophers began to consider several substances as the fundamental principle of the world (matter). For example, Empedocles (5th century BC) spoke of four elements as the roots of all things: fire, air (ether), water, and earth. These roots are eternal and unchanging; they can neither arise from anything else nor pass into each other. All other things are the result of combining these elements in certain proportions.

Another ancient Greek philosopher, Anaxagoras, taught that the world consists of an infinite number of “seeds” – particles divisible to infinity. In each thing there is a particle of every other: there is black in white and white in black, heavy in light, and so on. The life of the world, Anaxagoras emphasized, is a process. Evaluating these views of Anaxagoras, it is impossible not to see that his philosophy practically prepared the ground for atomistic materialism.

Atomistic materialism is associated with the names of the ancient Greek philosophers Leucippus and Democritus (5th–4th centuries BC). They identified matter with structureless atoms (“atom” in Greek means “indivisible”). According to Democritus, being is made up of atoms and the void, with atoms moving in space. Atoms differ geometrically (for example, the soul consists of round atoms); they are not exposed to any external influence, are incapable of any change, and are eternal and indestructible. They have a certain size and mass and can collide, striking against each other. Atoms are completely invisible to the eye, Democritus noted, yet they can be quite “visible” in a mental sense. Life, from the point of view of Democritus, is a combination of atoms; death is their decomposition. The soul too is mortal, Democritus taught, because its atoms can decompose.

The view of matter as an infinite number of atoms, without any noticeable changes, was preserved in various schools of philosophical materialism until the beginning of the twentieth century. The identification of matter with substance (with indivisible atoms at its base) was characteristic both of the French materialists of the 18th century and of L. Feuerbach. It is interesting that F. Engels, starting from the positions of atomistic materialism and answering the question of whether matter exists as such, wrote that matter actually exists only in the form of concrete forms and objects; there is no matter as a structureless primordial stuff, an unchangeable form of all forms.

The most profound revolutionary changes took place at the end of the 19th and the beginning of the 20th century in natural science, especially in physics. They were so fundamental that they gave rise not only to a crisis of physics but also very seriously affected its philosophical foundations. Among the most important discoveries that undermined the foundations of the mechanical picture of the world were, in particular, the discovery of X-rays (1895), of the radioactivity of uranium (1896, A. Becquerel, P. Curie, M. Skłodowska-Curie), and of the electron (1897, J. J. Thomson).

By 1903, we note, significant results had been achieved in the study of radioactivity: its explanation as a spontaneous decay of atoms received a certain justification, and the convertibility of chemical elements was proved. M. Planck created the quantum theory of the energy of micro-objects, and A. Einstein revealed a quantitative relationship between the mass of bodies and their energy.

It was not possible to explain these (and some other) discoveries within the framework of the mechanical picture of the world; the inadequacy of the classical-mechanical understanding of physical reality was becoming increasingly apparent. This caused some confusion among a number of major physicists.

All this led to a radical revision of previously established ideas about the structure of matter. The basic position of atomistic materialism about the indivisibility, immutability, and indestructibility of the atom collapsed, which served as a pretext for the refutation of materialism in the light of the latest conclusions of natural science. For example, the famous French physicist Henri Poincaré wrote about “signs of a serious crisis of physics”: that before us are the “ruins” of its principles, their “general defeat”; that “the great revolutionary radium” had undermined the principle of the conservation of energy, while the electron theory had negated the principle of the conservation of mass. As a result, he came to the conclusion that all the old principles of physics had been crushed, and therefore its positions were not truths but only products of human consciousness.

The thesis that, in connection with the new discoveries of physics, matter had disappeared was legitimately challenged by V.I. Lenin, who defended philosophical materialism. Describing the true meaning of the expression “matter disappeared,” V.I. Lenin shows that it is not matter that disappears but the limit to which we had known matter, and that the disappearance of matter of which some scientists and philosophers speak has nothing to do with the philosophical view of matter, for the philosophical concept (term) of matter must not be confused with natural-scientific concepts of the material world.

With the development of natural science, one scientific view of the world (of matter) is replaced by another, deeper and more solid. However, such a change of specific scientific ideas cannot disprove the meaning and significance of the philosophical concept (category) of “matter,” which serves to denote objective reality given to a person in his sensations and existing independently of them.

Overcoming the difficulties faced by physics required (as always happens during a period of revolutionary changes in science) an analysis of problems that were not only physical but also epistemological. As a result of intense discussions, several schools arose in physics that radically diverged in their understanding of the way out of the crisis. Some of them began to gravitate toward an idealistic worldview (although most physicists, naturally, held the position of spontaneous materialism), which representatives of spiritualism and fideism tried to take advantage of.

This led to the revolution in physics developing into its crisis. “The essence of the crisis of modern physics,” wrote V.I. Lenin, “consists in the breaking of the old laws and basic principles, in the rejection of an objective reality existing outside consciousness, that is, in the replacement of materialism by idealism and agnosticism. ‘Matter has disappeared’ – so one may express the basic and typical difficulty, in relation to many particular questions, that created this crisis.”

To understand what some physicists meant by the words “matter disappeared,” the following should be taken into account. The atomistic worldview had long been established in natural science. At the same time, the atom (in the spirit of Democritus) was understood as an absolutely indivisible (partless) elementary particle.

The point of view according to which matter consists of atoms, regarded as a kind of “unchanging essence of things,” was shared by the majority of natural scientists, including physicists, by the end of the 19th century. Therefore, the discoveries testifying to the complexity of atoms (in particular, radioactivity as their spontaneous decay) were interpreted by some scientists as the “decay” or “disappearance” of matter. It was on this basis that conclusions were drawn about the collapse of materialism and of the science oriented toward it.

V.I. Lenin showed that in reality there was no collapse of materialism as such, but only the collapse of its concrete, original form. After all, matter understood as a certain invariable essence of things is matter without movement, a category of non-dialectical materialism. In this connection, V.I. Lenin noted: “The recognition of any unchanging elements, of an ‘unchanging essence of things,’ etc., is not materialism but metaphysical, that is, non-dialectical, materialism.” Dialectical materialism considers matter as moving matter and therefore “insists on the approximate, relative character of every scientific proposition about the structure of matter and its properties.” Accordingly, this type of materialism is not tied to any specific content of physical ideas. All that matters for it is that moving matter is the substantial basis of reality, reflected by human consciousness. “The recognition of theory as a snapshot, an approximate copy of objective reality,” emphasized V.I. Lenin, “is what materialism consists in.”

Therefore, the discovery that the structure of matter is much more complex than it previously seemed is not evidence of the inconsistency of materialism. V. I. Lenin explained in this connection: “‘Matter disappears’ – this means that the limit up to which we have known matter until now disappears… such properties of matter disappear as previously seemed absolute, unchanging, original… and are now revealed to be relative, inherent only in certain states of matter. For the only ‘property’ of matter with whose recognition philosophical materialism is bound up is the property of being an objective reality, of existing outside our consciousness.”

The dialectic of the process of knowledge, we note, was deeply understood by Hegel. He developed, in particular, the concept of relative truth as a limited truth, i.e. one that is true only within certain limits. Materialist dialectics developed these ideas into the doctrine of objective truth, understanding by it the process of bringing knowledge closer to reality, in the course of which a synthesis is carried out of the positive content of particular relative truths.

Objective truth is the unity of such relative truths, in which they are present in sublated form, complementing and restricting each other. Classical mechanics, for example, is true if it is applied to macro-objects with non-relativistic velocities. The theorems of Euclidean geometry are true if we are talking about a space with zero curvature. Modern physics includes classical mechanics, but, importantly, with an indication of the limits of its applicability. Modern geometry in the same way includes Euclidean geometry. And so on.
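The point about limits of applicability can be illustrated numerically. The sketch below is an editor's illustration, not part of the original argument: it compares the classical kinetic energy (1/2)mv² with the relativistic value (γ − 1)mc². At everyday speeds the two agree almost exactly; near the speed of light the classical formula becomes a poor approximation, which is precisely the sense in which classical mechanics is a relative truth.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def kinetic_classical(m, v):
    """Newtonian kinetic energy: (1/2) * m * v**2."""
    return 0.5 * m * v ** 2

def kinetic_relativistic(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c**2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

m = 1.0  # kg
# From a car-like speed up to 0.9c: the ratio drifts away from 1.
for v in (300.0, 0.1 * C, 0.9 * C):
    ratio = kinetic_classical(m, v) / kinetic_relativistic(m, v)
    print(f"v = {v:12.4e} m/s   classical/relativistic = {ratio:.5f}")
```

At 300 m/s the ratio is indistinguishable from 1; at 0.9c the classical formula underestimates the kinetic energy by roughly a factor of three.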

In other words, one of the reasons that gave rise to the crisis of physics was the understanding by some scientists of relative truth as merely relative (this is epistemological relativism, which originated, and was largely overcome, in ancient philosophy). However, and this is essentially important, “in every scientific truth, despite its relativity, there is an element of absolute truth.”

Finishing our consideration of V. I. Lenin’s analysis of the crisis of physics, let us pay attention to the following. His proposition that “the only ‘property’ of matter with whose recognition philosophical materialism is bound up is the property of being an objective reality” is sometimes perceived as an indication that, according to materialist dialectics, matter has only this single property. But this is not so: the point is only that the sole “property” of matter whose denial defines philosophical idealism is objectivity.

Therefore, it is appropriate here to once again emphasize the inadmissibility of identifying the dialectical-materialist category “matter” with natural-scientific ideas about its structure and properties. The failure of the majority of scientists (who mainly held the position of spontaneous materialism) to understand this at the turn of the XIX-XX centuries was one of the main causes of the crisis of natural science.

Considering the problems associated with the crisis of natural science at the turn of the XIX-XX centuries, we note that crisis situations had arisen in it before, ending with a revolutionary transition to a new, deeper level of knowledge. Fundamental difficulties arose whenever science, deepening its analysis of the essence of phenomena, revealed a contradiction that the existing theory could not explain. The need to resolve it led to the intensive development of a new theory, a new scientific picture of the world. (Dialectics, we recall, regards contradiction as a source of development.)

Considering matter as a philosophical category denoting objective reality, V.I. Lenin thereby continues the materialist line in philosophy. His definition does not subsume the category “matter” under a broader concept, because no such concept exists. In this sense, “matter” and “objective reality” are synonyms. Matter is opposed to consciousness, while its objectivity, i.e. the independence of its existence from consciousness, is emphasized.

Matter and its attributes: space, time, movement, system

Matter as an objective reality is characterized by an infinite number of properties. Material things and processes are finite and infinite, since their localization is relative and their interconnection absolute; they are continuous (homogeneous within themselves) and discontinuous (characterized by an internal structure); all material objects possess mass (be it the rest mass of any substance or the mass of motion for fields) and energy (potential or actualized).

But its most important properties, its attributes, are space, time and movement.

Space characterizes the extension and structure of material objects (formations) in their correlation with other formations.

Time characterizes the duration and sequence of the existence of material formations in their relation to other material formations.

Of fundamental importance is the answer to the question of how space and time are related to matter.

On this issue in philosophy there are 2 points of view.

The first of these is usually called the substantial concept of space and time. According to this concept, space and time are independent entities that exist alongside matter and independently of it. This understanding of space and time led to the conclusion that their properties are independent of the nature of the material processes occurring in them. The substantial concept originates with Democritus and found its most vivid embodiment in the classical physics of I. Newton. Newton’s idea of absolute space and time corresponded to a certain physical picture of the world, namely, his view of matter as a set of atoms delimited from each other, having a constant volume and inertness (mass) and acting on each other instantaneously, either at a distance or by contact. Space, according to Newton, is invariable and motionless; its properties do not depend on anything, including time, nor do they depend on material bodies or their movement. One can remove all bodies from space, yet space will remain and its properties will be preserved. It turns out that space is like a grand container, resembling a huge box turned upside down, in which matter is placed. Newton held the same views on time. He believed that time flows uniformly throughout the Universe and that this flow depends on nothing; time is therefore absolute, for it determines the order and duration of the existence of material systems.

As we see, in this case both space and time appear as realities, which in a certain sense are higher entities in relation to the material world.

The second concept of space and time is called relational. According to it, space and time are not independent entities but systems of relations formed by interacting material objects. Accordingly, the properties of space and time depend on the nature of the interaction of material systems. The relational concept originates with Aristotle. It was carried out most consistently in the non-Euclidean geometries of Lobachevsky and Riemann and in A. Einstein’s theory of relativity. It was their theoretical positions that excluded the concepts of absolute space and absolute time from science, thereby revealing the inconsistency of the substantial interpretation of space and time as independent forms existing apart from matter. It was these teachings, especially the special and general theories of relativity, that established the dependence of space and time, and of their properties, on the nature of the movement of material systems.

Space and time, as universal forms of the being of matter inseparably connected with it, possess a whole series of properties, both general and specific to each of these forms.

The general properties of space and time are their objectivity and universality. The recognition of these properties immediately opposes the materialist interpretation of space and time to their idealist interpretations. After all, according to idealist teachings, space and time are a product of human consciousness, and therefore they do not exist objectively.

The main properties of space are extension, homogeneity, isotropy (the equivalence of all possible directions), and three-dimensionality; the specific properties of time are duration, uniformity (the equivalence of all moments), one-dimensionality, and irreversibility.

The properties of space and time are manifested in a special way in the microworld, the macroworld, and the megaworld, in living nature and in social reality.

The objective continuity of space and time and their discontinuity determine the movement of matter, which is the basic mode of its existence. The movement of matter is absolute; its rest is relative.

It should be borne in mind that in philosophy, movement is understood as any change in things and processes.

Denoting by the concept of “movement” the change in time of the spatial characteristics of things and processes (their location and volume), and by the conventional term “change” the variability of their qualitative determinateness resulting from their existence in time, we arrive at the following conclusion.

Movement in its broadest sense is the unity of the moments of displacement of things and processes and of their change. A moving car changes its place in space; an “old” book on the shelf “grows old” while only occasionally being moved.

This is the meaning put into the term “movement” when it is said that matter cannot exist without movement.

A significant addition to this principle is the assertion that, in turn, movement cannot exist without a material carrier (substance or field). The statement that motion exists without matter is, from the point of view of materialistic philosophers, as absurd as the conclusion about the existence of matter without motion.

In the inseparable unity of matter and motion, matter is primary and motion is derivative. Motion is subordinate to matter.

The position opposite to materialism is taken by energetism, advanced by the German scientist W. Ostwald. In his theory, Ostwald tried to reduce matter and motion to energy (hence the name “energetism”). As is known, energy is the physical measure of motion. Ostwald declared everything that exists in the world to be energy. Therefore matter, consciousness, and cognition are all energy, and hence matter and consciousness are derived from energy and movement. The modern form of energetism (neo-energetism) is associated with attempts to prove the conversion of matter into energy on the basis of A. Einstein’s well-known mass-energy law E = mc² (here E is energy, m is mass, and c is the speed of light in vacuum). However, these attempts were unsuccessful, both physically and philosophically.

From a physical point of view, this formula expresses the proportionality between the mass of a body and its total energy, the coefficient of this proportionality being the square of the speed of light in vacuum.

From a philosophical point of view, it only confirms that things possessing rest mass exist objectively. Moreover, they interact with equally objectively existing fields that have no rest mass (electric, magnetic, lepton, microlepton, etc.). And finally, this formula confirms the fundamental position of materialist philosophy about the possibility of everything turning into everything, including the transformation of substance into field.
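As a purely arithmetical illustration (an editor's sketch, not part of the original text), the formula can be evaluated directly: even a single gram of rest mass corresponds to an enormous energy, which is why the coefficient c² matters so much in the proportionality discussed above.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (CODATA value)

def rest_energy(mass_kg):
    """Energy equivalent of a rest mass, E = m * c**2, in joules."""
    return mass_kg * C ** 2

# Energy equivalent of one gram of matter:
energy = rest_energy(0.001)
print(f"E = {energy:.3e} J")  # on the order of 9e13 joules
```

The enormous magnitude of the result reflects only the size of the conversion coefficient c², not any "disappearance" of matter into energy.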

Movement has a number of important properties. First, movement is characterized by objectivity, i.e. the independence of its existence from human consciousness. In other words, matter itself contains the cause of its own change. Hence the thesis of the infinity of the interconversions of matter.

Secondly, movement is characterized by universality. This means that all phenomena in the world are subject to movement as the mode of existence of matter (there are no objects devoid of movement). It also means that the very content of material objects, in all its moments and relations, is determined by movement and expresses its concrete forms (and manifestations).

Thirdly, movement is characterized by uncreatability and indestructibility. Consistent philosophical materialism rejects any argument about a beginning or end of movement. It is known, for example, that I. Newton admitted the possibility of a divine first push, and the German philosopher E. Dühring believed that movement arises from rest through a so-called bridge of gradualness. In explicit or implicit form, the thought of a certain beginning (origin) of movement is present in both cases.

This position is criticized by materialists, and most consistently by dialectical materialism. Affirming the principle of the self-movement of matter, materialist dialectics simultaneously reveals its mechanism. In its view (confirmed by the experience of mankind and the data of the natural sciences), movement is the result of the struggle of objectively existing opposites. These are, for example, action and reaction in mechanical movement, higher and lower temperature (energy) in thermal movement, positive and negative charge in electricity, the polar interests of people and of their various associations in social development, and so on.

Fourthly, movement is characterized by absoluteness. While recognizing the universal character of movement, philosophical materialism does not reject the existence in the world of stability and rest. However, consistent philosophical materialism emphasizes the relative character of such states of material objects. This means that the absolute character of movement is always realized only in definite, locally and historically limited, condition-dependent, transient and, in this sense, relative forms.

That is why it can be said that any rest (or stability) is a moment of movement, since it is transitory, temporary, relative. Rest is, as it were, movement in equilibrium, for rest is included in the total movement and is sublated by this absolute movement. Therefore one can speak of rest as a certain equilibrium, a moment of movement, only in relation to a certain point of reference. Thus, for example, any age of a person (say, 18 years) is a fixed moment in his constant change and movement, associated with a certain stability, a rest, of the temporary state of some properties of his nature compared with, say, his 17th and 19th years.

A variety of specific manifestations of movement can be correlated with certain material carriers. This makes it possible to construct different classifications of the forms of motion of matter. The form of motion of matter is associated with a certain material carrier, has a certain area of ​​distribution and its own specific laws.

F. Engels noted the presence of 5 basic forms of motion of matter.

  • Mechanical motion associated with the movement of bodies in space.
  • Physical (essentially thermal) movement, like the movement of molecules.
  • Chemical motion – the movement of atoms inside molecules.
  • Organic or biological movement associated with the development of a protein life form.
  • Social movement (all changes in society).

This classification is now obsolete. In particular, it is not right to reduce physical movement only to thermal movement.

Therefore, the modern classification of forms of motion of matter includes:

  1. spatial displacement;
  2. electromagnetic movement, defined as the interaction of charged particles;
  3. gravitational form of motion;
  4. strong (nuclear) interaction;
  5. weak interaction (absorption and emission of neutrinos);
  6. chemical form of motion (process and result of the interaction of molecules and atoms);
  7. geological form of motion of matter (associated with changes in geosystems – continents, layers of the earth’s crust, etc.);
  8. biological form of movement (metabolism, processes occurring at the cellular level, heredity, etc.);
  9. social form of movement (processes occurring in society).

Obviously, the development of science will continue to make its own adjustments to this classification of the forms of motion of matter. However, it appears that in the foreseeable future it will be carried out on the basis of the principles formulated by F. Engels.

First of all, the principle of development will not lose its significance as applied to the analysis of the forms of motion of matter. It allows them to be systematized in accordance with the real process of the evolution of material systems from the simple to the complex, from the lower to the higher, from the simplest processes of mechanical displacement to the processes occurring in human society.

Still important is the principle of the connection of each form of movement with a specific material carrier, or more precisely, with a set of specific material carriers.

The principle of the genetic and structural conditioning of the higher forms of motion of matter by the lower remains relevant. After all, any higher form of movement arises on the basis of the lower and includes it within itself in sublated form. This essentially means that the structures specific to the higher form of movement can be known only by analyzing the structures of the lower forms.

And, conversely, the essence of a lower-order form of motion can be known only on the basis of knowledge of the content of the form of the motion of matter that is higher relative to it.

Closely related to the principle of genetic conditioning is the principle of the irreducibility of the higher forms of movement to the lower and of the illegitimacy of transferring (extrapolating) the properties of the higher forms of the movement of matter to the lower. This is the principle of the qualitative specificity of every form of movement. In the higher form of movement, its lower forms are represented not in “pure” but in synthesized (“sublated”) form. The “mechanical” movement of the human hand is the result of the combination of complex processes: mechanical, biological, chemical, and so on. Therefore any attempt to create a purely mechanical analogue of the human hand is absurd.

Equally absurd is the transfer of the laws of wild nature to society, even if at first glance it seems that the “law of the jungle” dominates there. Of course, human cruelty can be incomparably greater than the cruelty of predators. Yet predators do not know such human feelings as love, sympathy, and compassion.

On the other hand, attempts to find in the lower forms of the motion of matter the properties of its higher forms are completely groundless. A thinking cobblestone is nonsense. This, however, is an extreme case, a hyperbole, so to speak. Hardly less ridiculous were the attempts of one of the major Soviet biologists who tried to create “human” conditions for monkeys, hoping to find an anthropoid (primitive man) among their offspring in a hundred or more years.

Finally, one cannot fail to mention another very important principle underlying the classification of the forms of motion of matter: the principle of the relationship of each of them with a particular science. This principle allows us to link the problem of classifying the forms of motion with the problem of the classification of the sciences.

The principles of the classification of the forms of motion of matter make it possible to take a critical attitude toward mechanistic reductionism, the essence of which is the reduction of the laws of the higher forms of motion to the laws of the lower: the social to the biological, the biological to the physicochemical, and so on.

The principles of the classification of the forms of motion of matter also make it possible to take a critical attitude toward vitalism (from the Latin vita, life), a philosophical trend that absolutizes the specificity of the biological form of motion and explains the specificity of all living things by the presence of a special “life force.”

The most important property of matter and of material formations is their systemic organization. A system (from the Greek systēma, a whole made up of parts) is a complex of interacting elements or, what is the same thing, a delimited set of interacting elements.

Practically any material or ideal object can be represented as a system. To do this it is necessary to single out its elements (an element being a component of the system that is indecomposable under the given way of considering it), to identify the structure of the object (the set of stable relations and connections between the elements), and to fix its character as a unified formation. With this approach it is found that all systems are divided into integral and summative. An integral system is one in which all the elements cannot exist in isolation: the loss or removal of even one of its elements leads to the destruction of the system as a whole. Integral systems are, for example, the solar system, molecules of water (H2O) and of salt (NaCl), symbiosis in organic nature, production cooperation in the economic sphere of public life, and so on.

A distinctive feature of an integral system is that its quality is not reducible to the simple sum of the qualities of its constituent elements.

Summative systems are systems whose quality is equal to the sum of the properties of their constituent elements taken in isolation from each other. In all summative systems the constituent parts can exist autonomously by themselves. Examples of such systems are a pile of stones, a cluster of cars on the street, a crowd of people. One cannot say of these aggregates that they are entirely unsystematic; rather, their systemic character is weak and close to zero, since their elements possess considerable independence in relation to each other and to the system itself, and the connection between the elements is often random.
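The contrast between summative and integral systems can be sketched in code. This is a toy illustration by the editor, with invented names (`pile_mass`, `is_closed_ring`); it is not drawn from the original text. Removing an element from a summative aggregate merely diminishes its total, while removing any single element of an integral system destroys the quality of the whole.

```python
def pile_mass(stones):
    """Summative system: the pile's quality (mass) is simply the sum
    of the masses of the stones taken in isolation."""
    return sum(stones)

def is_closed_ring(links):
    """Integral system: a chain of links forms a closed ring only if
    every link is present and each connects to the next; removing any
    one link destroys the property of the whole."""
    n = len(links)
    return n >= 3 and all(links[i][1] == links[(i + 1) % n][0] for i in range(n))

stones = [2.0, 3.0, 5.0]
print(pile_mass(stones))       # the pile weighs 10.0
print(pile_mass(stones[:-1]))  # still a perfectly good pile: 5.0

ring = [("a", "b"), ("b", "c"), ("c", "a")]
print(is_closed_ring(ring))       # True: the whole is intact
print(is_closed_ring(ring[:-1]))  # False: the system as a whole is destroyed
```

The pile keeps its character after losing a stone; the ring does not survive the loss of a single link, which is exactly the sense in which an integral system's quality is not the sum of its parts.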

A systems approach, or the systemic study of material objects, implies not only establishing ways of describing the relations and connections (the structure) of a given set of elements but also, what is especially important, singling out those that are system-forming, i.e. that ensure the separate functioning and development of the system. A systems approach to material formations presupposes the possibility of understanding the system in question at a higher level. A system is usually characterized by a hierarchical structure: the sequential inclusion of a lower-level system into a system of a higher level. This means that the relations and connections in a system, under a certain representation of it, can themselves be considered as its elements, subject to a corresponding hierarchy. This allows one to build different sequences, not coinciding with one another, of the inclusion of systems into each other, describing the material object under study from different sides.

In modern science, the method of structural analysis is widely used, which takes into account the systemic character of the objects under study. After all, structurality is the internal articulation of material being, a mode of the existence of matter. The structural levels of matter are formed from certain sets of objects of a given kind and are characterized by a special mode of interaction among their constituent elements.

Conclusion

The study of problems associated with the philosophical analysis of matter and its properties is a necessary condition for the formation of a person’s worldview, regardless of whether it turns out to be in the final analysis materialistic or idealistic.

In the light of the above, it is quite obvious how important the definition of the concept of matter, and the understanding of matter as inexhaustible, is for building a scientific picture of the world and for solving the problem of the reality and knowability of the objects and phenomena of the micro- and megaworld.

Both definitions are reasonable: “…matter is an objective reality given to us in sensation,” and “matter is a philosophical category for denoting objective reality, which is given to man in his sensations, which is copied, photographed, displayed by our sensations, while existing independently of them.” (In the first case we are speaking of matter as a category of being, an ontological category; in the second, of the concept that fixes it, an epistemological category.)