Society and Technology

“Society never advances. It recedes as fast on one side as it gains on the other.” Although written long ago, these words by Ralph Waldo Emerson still hold true today. Every day people in society make improvements, but these improvements carry equal drawbacks. Today we are using cutting-edge technology to improve every aspect of our daily lives. For instance, in today’s society the fields of communication and medicine are constantly advancing, yet both create significant losses. Technology has helped increase the speed of communication and decrease its cost.

However, at the same time it has caused people to become more impersonal with each other. In earlier times the major form of communication was for people to visit each other and gather in public meeting places. One of the next major advances was the telephone. Because of the telephone, people no longer went to public meeting places as often as they used to. As time goes on, new advances continue to let people contact and communicate with each other more easily. These advances, such as faxes, beepers, and electronic mail, may seem to make life easier, but each helps to displace the earlier, more personal forms of communication.

The field of medicine, like the field of communication, also displays what Emerson was trying to say. This field, too, has seen many advances, yet it has also caused many difficulties. As scientists and doctors try to come up with cures for the many diseases we have today, they are also creating new ones. For example, when scientists went to Africa in search of a cure for a disease, they came back with monkeys that were contaminated with the Ebola virus. Today in Russia there are military bases where Russian scientists are creating thousands of germs and viruses to use in germ warfare.

These germs and viruses are capable of killing thousands of people almost instantly. As technology continues to advance and society moves “forward,” people continue to use less personal forms of communication and to create new problems in the field of medicine. The fear of becoming a society that communicates only through machines and creates new diseases is growing with time. For all of society’s gains there are equal drawbacks. So, in Ralph Waldo Emerson’s words, “society never advances.”

What is it like to be on the forefront of technology?

New technology is constantly being designed and developed. The people responsible for this new technology in the field of computers are most likely system analysts. This paper will attempt to give the reader some insight into the career of a system analyst. System analysts work in teams and constantly deal with some form of new computer technology. They may design and build new systems, or they may advise a company, school, or small business on what type of computer system to purchase.

The analysts who build new systems must design the circuit boards and peripherals and choose how the computer will recognize files. System analysts must also select or design an operating system, which is the way the computer interprets files. During the design of the system, a system analyst must use math models and other models to solve any problems they come across (Wisconsin Career Information System 1633.3). Once they are finished, the team must write reports on how to solve any problems the consumer may have with the new system, which, in turn, involves the use of more math models.

In order to be capable of completing the above tasks, a system analyst must continue education beyond high school. The post-secondary education required for a system analyst can be found only at a four-year institution. A student looking to become a system analyst must concentrate on the science and math courses offered by such an institution. Employers look for people who possess a Bachelor of Science degree in one of the following areas: computer engineering, computer information systems, computer science, data processing, information science, or technical engineering.

These majors all require the student to excel in math because it is an integral part of computer operation. If the student thinks he is done with all of his education once he has graduated, he is in for a big shock. In this type of job, continual education is essential. This is imperative because technology is changing faster than most people can keep up with. Many large businesses hold their own training sessions for their employees who use computers, while smaller businesses send their employees to seminars. Employers seek applicants who have specialized training.

For example, an accounting firm would seek a system analyst who has a background in accounting, or a firm that produces medicine would look for someone with some medical experience. For this reason, colleges and universities often allow students to design a specific area of study in computer science. While attending the four-year institution, one must develop specific skills and abilities. One ability that is essential is the ability to solve problems logically and practically. Without this skill, a student would have no future in the computer market because math is based on logic. One must also be able to communicate well with others.

A person is constantly working with others and needs to be able to convey his message clearly to the team and the people he must assist. In this profession, people often work alone but still need to be effective team workers. This is a skill that can be developed at all levels. Some may think that being a team worker is a personal characteristic, but for many it is something they need to work at to achieve. Personal characteristics are things a person possesses that are difficult to change. Two personal characteristics that a system analyst should possess are humility and confidence.

These two characteristics are closely tied together, yet they contradict each other in some ways. System analysts must be able to admit their mistakes, yet be confident enough to believe they have done a good job. While system analysts usually work forty hours per week, they may be required to work overtime on emergency projects or to meet deadlines. This requires employees to handle stress well, work effectively and efficiently, and be reliable. CEOs recognize that these qualities enhance their business and seek to hire people who possess them. Another good quality for a system analyst to have is a sense of humor.

A sense of humor keeps tension low in the work environment and helps to lessen the stress in some situations. All of the qualities described above are helpful when looking to advance on the job. However, before advancement takes place, the opportunity must be made available. To have this opportunity, one must already be employed as a system analyst. Nationally, there were 463,000 people employed in the field in 1990 (U.S. Department of Labor 81), and 1,468 system analysts were employed in Wisconsin (Wisconsin Career Information System 1633.3). The demand for system analysts is expected to grow rapidly.

There are approximately 100 annual openings in Wisconsin, and by the year 2005, approximately 2,505 people are expected to be employed in the state (Wisconsin Career Information System 1633.3). Once a person has acquired some experience, he is given more responsibility and independence. If someone has a lot of experience, he has an opportunity to become a technical specialist, a team supervisor, or an engineering manager. Anyone who advances in this field has usually obtained a graduate degree in one of the aforementioned college majors. Advancement means not only more responsibility, but higher pay.

According to my research, the lowest starting pay nationally for a system analyst is between $17,000 and $21,000, offered by the government (U.S. Department of Labor 80). The average pay ranges from $33,000 to $46,000 per year (Wisconsin Career Information System 1633.3). The top one-tenth of system analysts earn more than $62,400 a year (U.S. Department of Labor 80). On average, people working in the Northeast were paid the highest, while those working in the Midwest were paid the lowest (U.S. Department of Labor 81). Most system analysts receive benefits in addition to their salary.

These benefits may include, but are not limited to, paid vacations, sick leave, health and dental insurance, retirement plans, and profit sharing. Many places hire system analysts, but the biggest employers are educational institutions, government, and large corporations. The above factors contribute to the working conditions of a system analyst and are always a consideration when it comes to choosing the right job. Overall, I believe this career would be a good choice for me. System analysts do everything that I enjoy doing. They work with computers, develop new computer ideas, use math, and are on the cutting edge of technology.

I like the working hours and the benefits; they are conducive to my way of life. I also may enjoy this career because most jobs are in urban areas and I want to live in or near a city larger than Platteville. This career is important to me because I feel it will allow me to grow constantly while seeing the results of my efforts. I will be performing interesting, exciting, and challenging work while using my own ideas and at the same time drawing on an extensive knowledge base. It is a job that will force me to think constantly.

Although I like working under pressure, I do not like the fact that I may have to work overtime to finish projects. I also do not like the low job demand in Wisconsin. The low demand means that I most likely will not be able to stay in Wisconsin and be successfully employed as a system analyst. Researching this career has been enjoyable and I feel that it has given me a better understanding of my current career choice. Later in life I may change my choice, but right now I think that it is a good choice for me. I plan to continue to pursue the education required to become a system analyst.

Telecommunications Advances Essay

Today, telecommunications technology affects lives to a greater degree than ever before. Communication has evolved over many years from the earliest attempts at verbal communication to the use of sophisticated technology to enhance the ability to communicate effectively with others. Every time a telephone call is made, a television is watched, or a personal computer is used, benefits of telecommunication technologies are being received. The concept of telecommunications may be defined as the transmission of information from one location to another by electronic means.

Telecommunications is the use of electronic systems to communicate. Life is constantly changing, and it has been changing faster since the rapid advancements in telecommunication. Because of continuing attempts to find better and more efficient ways to communicate, the process of communication has steadily improved. Many of these improvements were made without the use of electronic technology. Human beings’ earliest attempts at communication were through nonverbal means such as facial expressions and gesturing. Through the use of these nonverbal signs, prehistoric people were able to communicate emotions such as fear, anger, and happiness.

More specific motions, such as pointing, allowed them to convey more information. Verbal communication probably started with a series of disorganized but meaningful sounds (grunts and snarls). These sounds slowly developed into a system of organized, spoken language that truly allowed humans to share information (Croal 59). Writing, which is the use of symbols to represent language, began with early cave drawings, progressed to picture writing such as hieroglyphics, and finally evolved into the handwritten language we use today (Croal 61).

As civilization developed, people found it necessary to communicate their ideas to one another over greater distances. The earliest method of transporting information was to carry it from place to place; but as the development of commerce made speed essential, greater effort was expended to increase the rate at which ideas were transmitted (Croal 62). The search for rapid transport of information led to the formation of the pony express in 1860 (Cozic 77). Although the pony express required several weeks to carry mail from the East Coast to the West Coast, it was a vast improvement over the earlier methods.

The pony express was not the only time humans teamed up with animals to attempt to improve communications. Dogs and pigeons were used to carry messages, especially during wartime. Most, if not all, of the early forms of communication had two significant problems. Both the speed at which information could be effectively communicated and the distance over which information could be sent were severely limited. With the advancements in forms of electronic communication, these problems were solved. It was even before the pony express that a true technological breakthrough was made.

In 1844, the first electronic transmission occurred when Samuel Morse developed a system of dots and dashes to symbolize letters of the alphabet. A transmission device called the telegraph was used to send the coded signals over wires. The telegraph was to become the primary method of reliable and rapid communication during the American Civil War. It took quite a few years to link the major cities of America by telegraph wires, but by 1861 the pony express had been replaced. Telegraphic communication became a major part of America’s business and military history.
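
To make the idea of the code concrete, here is a small sketch in Python that maps a handful of letters to dot-dash sequences and encodes a short message; the table is only an excerpt of International Morse Code, and the example messages are invented for illustration.

    # Minimal sketch of Morse-style encoding: letters become dots and dashes.
    # The table below is only a small excerpt of International Morse Code.
    MORSE = {
        "A": ".-", "E": ".", "H": "....", "L": ".-..",
        "O": "---", "S": "...", "T": "-",
    }

    def encode(message):
        # Translate a message into dot-dash groups, one group per letter.
        return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

    print(encode("SOS"))    # ... --- ...
    print(encode("THE"))    # - .... .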

One of the early telegraph companies, Western Union, became the dominant carrier. Today, Western Union, through the use of modern technology, transmits information twenty-four hours a day, seven days a week. Actual voice communication over distance finally became possible in 1876, when Alexander Graham Bell held the first telephone conversation with his assistant, Thomas Watson. This alternative to written communication rapidly helped the telephone become the world’s most important communication tool.

By 1866 the first successful attempt to link Europe and America by undersea cable had been accomplished. This cable was capable of carrying telegraph data only. The telephone today remains a vital tool, and like the telegraph, the telephone is constantly being improved by modern technology. By 1900, the goal of communication technologists was to find a method of transmitting messages over long distances without the need for wires. That dream became reality in 1901, when Guglielmo Marconi and two assistants stood on a hill in Newfoundland and listened carefully to their receiver.

Faintly they heard the Morse code dot-dot-dot, the letter “s.” The signal had traveled 1,700 miles from Cornwall, England, and it represented the first successful wireless transmission. This success led Marconi to form the Marconi Wireless Telegraphy Company. It was not until the Titanic disaster in 1912, however, that wireless transmissions became commercially profitable. As the Titanic was sinking, the ship’s radio operator transmitted distress signals over his wireless telegraph.

A passing ship, the Carpathia, picked up the signals, sped to the Titanic’s location, and rescued 700 of the 2,200 people aboard. Shortly after this disaster, most maritime nations required wireless telegraphs on all large ships. The Marconi experiment eventually led to the development of the radio. On an evening in November 1920, radio station KDKA in Pittsburgh, Pennsylvania, went on the air with the first live radio broadcast. By 1922, 564 radio stations were on the air. Today, thousands of radio stations broadcast our favorite music, news, weather, and sports information.

As important as it was, the impact of the transmission of sounds by wire and by wireless methods seems minor when compared with the effect of television, the device that permits the transmission of both sounds and images. In 1926 J. L. Baird, working with the British Broadcasting Company (BBC), became the first person to transmit a television picture, and in 1936 the world’s first television service was introduced. By 1948, twenty television stations were on the air. The first color television service began in the United States in 1954.

Sociologist James K. Martin believes, “The impact of television is legendary and has totally changed the way American families live.” Modern telecommunications rely on modern technology, and one of the most important elements of that technology is the computer. Today’s computer industry is moving with great momentum. Most schools are equipped to teach computer skills, and it is no longer rare for a student to come to first grade with a basic understanding of computers gained from the family’s personal computer. In 1930 an American electrical scientist, Vannevar Bush, constructed the first analog computer.

However, the person credited with developing the first digital computer is Howard Aiken of Harvard University, who completed his project in 1944. Analog signals are a constant flow of information, whereas digital signals are a series of short bursts of information. Historian Mark Halls says that most historians point to ENIAC (Electronic Numerical Integrator And Computer) as the real beginning of computer technology. Engineers at the University of Pennsylvania built this giant computer in 1946. ENIAC utilized vacuum tubes to control computer functions.

The concept of storing programs in a computer’s memory is credited to John von Neumann, an American mathematician. It was in 1951 that the developers of ENIAC constructed Univac I, which became the first computer to be mass-produced. The traditional U.S. postal service is not oriented to meet needs for instant information access, so many mailboxes have become electronic. Electronic messages can be sent any hour of the day or night using a computer, a modem, and a telephone. These electronic messages may be read, filed, stored, erased, printed, and rerouted.

A computer used in conjunction with the telephone line and a television set allows homeowners to view merchandise, compare prices, and do electronic shopping. No longer are bank customers dependent on bankers’ hours to withdraw money or to obtain account information. Many school libraries have a new reference resource, the electronic encyclopedia. Libraries connect to electronic encyclopedias with personal computers. Facts can be read on the screen or sent to the printer. Through the use of telecommunications, the opportunity to access vast amounts of information located in large commercial databases is almost beyond belief.

Within a matter of seconds, a computer can access information and display it on its screen. Today, information services bring new learning opportunities and data into the home through telecommunications. The information age has already arrived, and telecommunication technology has played an important role in it. It has already had an impact on what have been considered traditional methods of transmitting information over distances. This new technology has also changed the methods by which information is manipulated and stored. Telecommunications is changing the way people work, play, live, and think.

The Role of Technology in The Newfoundland Classroom

The defining event of the last decade of the twentieth century has been the introduction of the personal computer. These machines have invaded our homes, our offices, and our schools. The end result has been that our lives, both private and professional, have been irreversibly changed. Computers and related technologies have redefined our relationships, especially how we learn. Since their introduction in the early 1980’s, schools and educators have embraced the computer as a new tool to educate our children. Initially teachers treated computers as novelties; however, in the last six years, computers have become essential equipment in the Newfoundland school system.

This paper will examine the development and implementation of computers and related technologies in rural schools in Newfoundland, as well as their positive and negative impacts on the education system.

Development and Implementation of Computers in Rural Newfoundland Schools

The use of computers in the school system is not a new phenomenon. “In the 1970’s its promoters claimed that it would transform and save education.” However, a majority of educators viewed a computer in every classroom as something out of science fiction. In the 1970’s computers were huge machines, requiring a whole room to be set aside for them.

As well, they were not user friendly, and they required an expert to code, enter, and decipher the data that these machines produced. It was not until the early 1980’s that this changed, with the development of what was then called the microcomputer. The microcomputer of the 1980’s was not as powerful as the computer being used to type this paper. Indeed, the Internet and e-mail were many years in the future. However, these computers did have an impact. They were small, user friendly, and relatively cheap. In the early 1980’s, emphasis was placed on computer literacy.

Teachers were expected to take their classes to the computer and instruct the students on its use: for example, the proper ways to boot the computer, load the appropriate software, and use that software. This emphasis on computer literacy produced an unwanted attitude among many teachers. This approach treated the computer as something unique and special, which made it a source of intimidation to many teachers who imagined their jobs being threatened by these machines. It also resulted in an often unwelcome addition of content to be accommodated in the already cluttered curriculum.

However, there were also teachers who viewed the computer as a novelty and not something to be taken seriously. This was due to the computer’s own physical limitations. In the early 1980’s, computers were relatively primitive by today’s standards. As has been stated, the Internet, e-mail, and computer networks were yet to be developed for personal computers. Indeed, even the software itself had little to no educational value. For example, this author can remember the introduction of the Apple IIe into Acreman Elementary in 1985.

The software that was available for the computer was mostly word processing, along with programs such as Paint Shop. Also available were a simple math program, a typing tutor, and several educational games. Teachers at the school were initially impressed with the system, but that quickly wore off. For the remainder of this author’s stay at Acreman, the computer was used to design banners and posters for school announcements. Our class was never taken to the computer to use it for educational purposes. In the late 1980’s there was a shift in the way computers would be treated in the modern classroom.

The late 1980’s saw a growing shift towards computer integration which emphasized the curriculum and not the tool. Its proponents felt that there would be no need to add new objectives; the existing ones would instead be enhanced and students would learn new skills as they needed them in order to make the computer work for them. The computer could now be viewed more as a partner as opposed to a competitor and could be treated in a more natural manner, allowing it to become more “invisible” in the classroom context. This is the prevailing attitude towards computers today.

Computers are now integral parts of the classroom. The information that can be accessed by the students is used to reinforce the lesson that the teacher has taught. However, in the schools of rural Newfoundland, the computer and its function in education were still somewhat of a mystery to teachers. In many schools, such as Carbonear Integrated Collegiate, special courses had been designed to introduce students to the computer. These courses consisted mostly of word processing and database design. Educational software was marginally better, but the Internet was still not available to the general public.

Indeed, until the late 1990’s, computers were still being treated as tools by rural schools and were not used in any sense as a partner in education. At this time the use of computers was centered mostly in Special Needs (what was once referred to as Special Education) classrooms. It was here that computers reached their “pre-Internet” potential. Students who were slow learners could use the computer and educational programs to reinforce the lesson that the teacher had gone over during class time. As well, with the programs’ emphasis on graphics, students now had visual aids to help them in their understanding of the course material.

Indeed, in 1991 the Provincial government in cooperation with educators developed a program called the Lighthouse project. The idea behind the project was to develop computer courses for Newfoundland’s schools. About 30 schools were equipped with the latest technology and the teachers were instructed to develop a program. One such school was Queen Elizabeth High School in Foxtrap. The program strayed from the traditional texts, and examined programs such as MS-DOS, Word Perfect, and Lotus Notes, as well as many other software packages.

The program was designed for students wishing to take computer science at the university level. Teachers, such as Mr. Duffet, were impressed because it “is a major investment in our future and shows how computers are becoming more and more important.” Indeed, computer programs were being used in biology, physics, math, and geography. In the late 1990’s, the Internet became readily available for use in the school system. The last three years have seen a boom in Internet technology being introduced into the schools. As well, Information Technology has been introduced into the school system as a subject of study.

Students can now learn basic network design, maintenance, and programming skills. Since their introduction in the early 1980’s, the computer and its related technologies have made an impact on the school system. They have gone from being regarded as novelties, or viewed with hostility, to being an integral part of the classroom. Like all new technology, there are benefits and problems to its implementation. In the drive to have Newfoundland’s classrooms “wired,” what positive and negative impacts has this had?

The Positive and Negative Impacts of Computers

The positive impacts of computer technology in the Newfoundland school system have been numerous. By and large, they can be reduced to three broad categories: efficient management, impact on teaching, and impact on learning. What is meant by efficient management is that the use of computers has made record keeping a simpler task than it once was. Now, teachers can create templates for grade reports and attendance sheets that can be printed off with ease. As well, through the use of database programs like Microsoft Access, averages can be computed with ease, as the small sketch below illustrates.
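
As a small illustration of the record-keeping claim, the sketch below computes per-student averages from a handful of marks; the names and marks are invented, and in practice a teacher would more likely use a spreadsheet or a database such as Access for the same task.

    # Hypothetical grade records: student name -> list of marks out of 100.
    grades = {
        "Student A": [78, 85, 91],
        "Student B": [64, 72, 70],
    }

    # Compute and print each student's average, the sort of tally a teacher
    # would otherwise work out by hand for every report card.
    for name, marks in grades.items():
        average = sum(marks) / len(marks)
        print(name, round(average, 1))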

The use of computing technologies has drastically cut the amount of time teachers have to spend on record keeping, freeing them up to concentrate on teaching. The impact on teaching has been great. Computer technology has provided a means for advancing the professional development of the teacher and for expanding and modifying the role of the teacher in the classroom. When a teacher integrates computer technology into daily lessons, he or she spends less time dispensing information and can spend more time with slower learners.

As a result, the students perform better. As well, through the use of educational software, teachers can assign tutorials, drills, and participation in simulators. Again, this frees up time for the teacher to spend with slower learners. This fact has not been lost on Special Needs teachers. The end result is the wholesale adoption of computers and computer programs for special needs students. As well, through the use of networking technology and the Internet, teachers have access to countless resources and support services.

This is especially important in rural Newfoundland, where school libraries contain dated information and access to other physical media is limited. The most drastic impact of computers has been on learning. Computer technologies have enabled new approaches to learning and can accommodate different learning styles. The use of computer programs and information technology encourages students to become active participants in education: they learn how to access information, represent ideas, communicate with others, and generate products.

As well, computing technology can be used as a cognitive aid; these tools can amplify the knowledge of the user and can increase critical thinking and problem-solving abilities. Through the use of programs, students can analyze real-world situations, test hypotheses, and see the outcomes of their decisions in real time. Through the use of these programs, a subject can be brought to life. For example, there are numerous historical simulators in which students can relive and reinvent history. This results in students becoming more enthusiastic about their lessons.

Indeed, in courses where computers are used frequently, students become more involved in the subject matter. The most important aspect of computing technologies for the rural Newfoundland school has been the Internet. The Internet has literally made the world a classroom. Students are no longer restricted to their school libraries, which usually contain dated information. They can venture out onto the World Wide Web and gain access to countless volumes of information on any particular subject area. As well, they can share their ideas with others, and collaborate and cooperate with other students.

In doing so, they develop the information retrieval skills and team skills that are mandatory for success in today’s work force. As much as computer technology has had a positive impact on education in Newfoundland, it has also had many negative impacts. The area of major concern is money, or the lack thereof. In recent years, the provincial government has repeatedly slashed the education budget. The recent reform of the education system was driven by the Williams Report, which was primarily concerned with reducing the costs of the education system.

The costs of computers and their related technologies are tremendous. Millions of dollars are being spent to maintain and upgrade existing computer systems, as well as to purchase new computers and services. To put it more colourfully: “Yet the effects of this online obsession are already being felt. Elementary and high schools are being sold down the networked river. To keep up with this educational fad, school boards spend way too much money on technical gimmicks that teachers don’t want and students don’t need.

And look at the appalling state of our libraries’ book-acquisition programs! Computers and their accoutrements cost money. Big heaps of moola, whole swimming pools overflowing with bills and coins, bilked from people who’ve paid zillions for equipment, software, and network connections, from which they may never get their money’s worth.” The money that is being spent on computers could be better spent on teachers, new acquisitions for libraries, and, more importantly, new textbooks and laboratory equipment. Many have expressed dismay over this development, such as Kevin Major.

Even proponents of technology in the classroom, like Harvey Weir, have pointed out that “computers are not the end all be all of education, and that the teacher is the best resource we have.” This mad stampede to get classrooms “wired” has led some to believe that the student-teacher relationship is suffering. Rex Murphy has referred to this relationship as the “dynamic nexus.” In his article of the same title, he argues that it is the teacher, not technical devices, that makes a student want to learn. He maintains that the computer is simply a tool and cannot compare to a teacher, or a good old-fashioned field trip.

Kevin Major expresses similar concerns, stating that “computers are inhuman, and lacking the emotions and care that a real teacher would have.” The lack of this human element in computers is another concern. Some feel that there are adverse social impacts on those who use computers frequently. One need only think of the stereotypical “computer geek” to see that there is a glimmer of truth to this statement. There are several key areas to this issue: some view computers as dehumanizing, placing users’ commitment and compassion to humanity in jeopardy.

This means that students lack the opportunity to interact with “real life” people and, as a result, do not develop necessary social skills. One of the greatest fears is that information technology in the schools will only benefit the rich, the more academically able, and male students. This will only serve to widen the gender gap, as males will have more access to computers and computer courses than females. Another issue is that computers will defeat attempts to reform education, in that they will reinforce the traditional hierarchical power structures.

They do this by placing decision-making power in the hands of the few who have access to and learn to use these new technologies, and thus create a reliance on them by those who do not. The chances for cooperation between parents, teachers, administrators, and district officials are then diminished. This is a serious issue in Newfoundland education, especially in rural schools. In rural districts, parents in general do not have a high degree of education. This means that they have a difficult time understanding what it is a teacher is doing in the classroom.

However, their own school experience gave them some common ground that they could use to gain a limited understanding. With the use of new technologies, which these less educated parents do not understand, they generally become hostile to the technology. This hostility towards the computer can bleed over onto the teacher. This breaks down the parent-teacher relationship, which is a central factor in having an effective education system. The lack of teacher training in computers is a problem specific to Newfoundland. In the last decade of the twentieth century, the school boards generally did not hire new teachers.

This was due to budgetary constraints. This means that the teacher in the Newfoundland classroom is often not comfortable around computers and lacks the training to teach computer education effectively. Teachers can either be hostile to the computer or not use it to its full capacity, preferring to allow students to play games on the machines. This attitude is easily summed up by Mrs. Linda Galway, learning resources teacher at Carbonear Collegiate: “I’ve got my card catalogue, my books and magazines… what do I need those old computers for?”

The Internet has had a major impact on rural Newfoundland schools. As has been stated, the Internet has turned the world into a classroom, and given students in rural schools access to information that was unavailable to students a generation ago. However, for all its benefits, the Internet has had a major negative impact on the classroom. Students no longer use the library for research on their term papers and essays, preferring to use the Internet. This has created a problem.

There is no regulatory agency that can control the information, and in many cases misinformation, that is on the Internet. For example, the most controversial aspect of the Internet is the ease with which hate literature can be spread. In order to use this information, and separate out the lies, students must be in possession of advanced critical thinking skills (see Appendix A), which, according to Mrs. Galway, they will not develop until they reach the university level. This has created an extra workload for teachers, who must evaluate websites before they can be used in student papers.

For example, one student at Carbonear Collegiate handed in a paper for World History entitled “The Holocaust: Myth or Reality.” This student, through the use of “information” she found on the Internet, concluded that the Holocaust was a myth, and she thought she had the information to support her claim. Upon investigation, the websites she used were found to be Holocaust denial sites. When confronted about this, the student replied, “It was on the ’net, it has to be true!” The use of computer technologies in the Newfoundland classroom has had both positive and negative impacts.

Computer education has allowed our students to gain skills that they will carry with them, has allowed them to cross international boundaries to work together, and has opened up a world of learning that students a generation ago did not have access to. It has also had major negative impacts. The most common is that thousands, if not millions, of dollars are being spent on computers and related technology that could be better spent on more useful things, like new teachers. The use of computers is also leading to students not developing interpersonal skills for use in “real life” situations.

As well, very few new teachers who would be comfortable with the new technology have been hired, leaving older teachers who are reluctant to change their ways. The Internet has also increased the workload on teachers, who must approve websites before their use in student papers. Since the introduction of computers into the Newfoundland school in the early 1980’s, they have been viewed either as demons or as angels. Initially the focus was on computer literacy, which produced hostility and cluttered up the already overcrowded curriculum.

In the late 1980’s, computer education focused on developing the computer as a partner in education and helped the computer find a place in the curriculum. In recent years the computer has become an invaluable educational tool. There have been both positive and negative impacts concerning the use of computers and related technologies in the Newfoundland classroom. The proponents point to the development of lifelong learning skills, the development of team skills, and the sheer amount of information that students can access.

The opponents point to the lack of teacher training and understanding, the countless dollars that could be better spent on other, more useful things, the reinforcement of hierarchical power structures, the widening of the gender gap, and students’ inability to think critically about information on the Internet. It is the opinion of this paper that computers and their related technologies are a fact of life, and that the use of computers can only benefit the Newfoundland student. However, due to the ever-changing nature of technology, the computer education program will have to continue to develop and change as technology does.

Another Virtual Reality

Imagine being able to point into the sky and fly. Or perhaps walk through space and connect molecules together. These are some of the dreams that have come with the invention of virtual reality. With the introduction of computers, numerous applications have been enhanced or created. The newest technology that is being tapped is that of artificial reality, or “virtual reality” (VR). When Morton Heilig first got a patent for his “Sensorama Simulator” in 1962, he had no idea that 30 years later people would still be trying to simulate reality and that they would be doing it so effectively.

Jaron Lanier first coined the phrase “virtual reality” around 1989, and it has stuck ever since. Unfortunately, this catchy name has caused people to dream up incredible uses for this technology, including using it as a sort of drug. This became evident when, among other people, Timothy Leary became interested in VR. This has also worried some of the researchers who are trying to create very real applications for medical, space, physical, chemical, and entertainment uses, among other things.

In order to create this alternate reality, however, you need to find ways to create the illusion of reality with a piece of machinery known as the computer. This is done with several computer-user interfaces used to simulate the senses. Among these are stereoscopic glasses to make the simulated world look real, a 3D auditory display to give depth to sound, sensor-lined gloves to simulate tactile feedback, and head-trackers to follow the orientation of the head. Since the technology is fairly young, these interfaces have not been perfected, making for a somewhat cartoonish simulated reality.

Stereoscopic vision is probably the most important feature of VR because in real life, people rely mainly on vision to get places and do things. The eyes are approximately 6.5 centimeters apart, which allows you to have a full-colour, three-dimensional view of the world. Stereoscopy, in itself, is not a very new idea, but the new twist is trying to generate completely new images in real time. In the 1830s, Sir Charles Wheatstone invented the first stereoscope, with the same basic principle being used in today’s head-mounted displays.

Presenting different views to each eye gives the illusion of three dimensions. The glasses that are used today work by using what is called an “electronic shutter.” The lenses of the glasses interleave the left-eye and right-eye views every thirtieth of a second. The shutters selectively block and admit views of the screen in sync with the interleaving, allowing the proper views to go into each eye. The problem with this method, though, is that you have to wear special glasses. Most VR researchers use complicated headsets, but it is possible to create stereoscopic three-dimensional images without them.

One such way is through the use of lenticular lenses. These lenses, known since Herman Ives experimented with them in 1930, allow one to take two images, cut them into thin vertical slices, interleave them in precise order (also called multiplexing), and place cylinder-shaped lenses in front of them so that when you look at them directly, each image reaches the corresponding eye. This illusion of depth is based on what is called binocular parallax; the multiplexing step is sketched below.
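
The following sketch shows one minimal way to picture that multiplexing step in Python with NumPy, assuming two equal-sized grayscale images and a strip width of one pixel column; a real lenticular print would match the strip width to the lens pitch, so the numbers here are purely illustrative.

    import numpy as np

    def multiplex(left, right):
        # Interleave columns of two equal-sized images: even columns carry
        # the left-eye view, odd columns carry the right-eye view.
        assert left.shape == right.shape
        combined = np.empty_like(left)
        combined[:, 0::2] = left[:, 0::2]    # strips destined for the left eye
        combined[:, 1::2] = right[:, 1::2]   # strips destined for the right eye
        return combined

    # Toy 4x4 "images" standing in for real left and right photographs.
    left_view = np.full((4, 4), 1)
    right_view = np.full((4, 4), 2)
    print(multiplex(left_view, right_view))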

Another problem arises when the viewer turns their head: nearby objects appear to move more than distant objects. This is called motion parallax. Lenticular screens can show users the proper stereo images even as they move their heads, provided a head-motion sensor is used to adjust the effect. Sound is another important part of daily life, and thus it must be simulated well in order to create artificial reality. Many scientists, including Dr. Elizabeth Wenzel, a researcher at NASA, are convinced that 3D audio will be useful for scientific visualization and space applications in areas where 3D video is somewhat limited.

She has come up with an interesting use for virtual sound that would allow an astronaut to hear the state of their oxygen, or have an acoustical beacon that directs one to a trouble spot on a satellite. The “Convolvotron” is one such device; it simulates the location of up to four audio channels within a sort of imaginary sphere surrounding the listener. This device takes into account that each person has specialized auditory signal processing, and it personalizes what each person hears.

Using a position sensor from Polhemus, another VR research company, it is possible to move the position of a sound by simply moving a small cube around in your hand. The key to the Convolvotron is something called the “Head-Related Transfer Function (HRTF),” which is a set of mathematically modelable responses that our ears impose on the signals they get from the air. In order to develop the HRTF, researchers had to sit people in an anechoic room surrounded by 144 different speakers and measure the effects of hearing precise sounds from every direction, using tiny microphone probes placed near the eardrums of the listener.

The way in which those microphones distorted the sound from all directions provided a specific model of the way that person’s ears impose a complex signal on incoming sound waves in order to encode their spatial environment. The map of the results is then converted to numbers, and a computer performs about 300 million operations per second (MIPS) to create a numerical model based on the HRTF. This model makes it possible to reconfigure any sound source so that it appears to come from any number of different points within the acoustic sphere.
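
A crude way to picture what the Convolvotron does is to convolve a single mono sound with a left-ear and a right-ear impulse response measured for the desired direction. The sketch below does this with NumPy; the impulse responses and the test signal are tiny made-up placeholders, since real HRTF data are measured, much longer, and direction-dependent.

    import numpy as np

    # Placeholder impulse responses for one direction (stand-ins for real,
    # measured HRTF data).
    hrtf_left = np.array([0.9, 0.3, 0.1])
    hrtf_right = np.array([0.4, 0.5, 0.2])

    # A short mono test signal standing in for the source sound.
    mono = np.array([1.0, 0.0, -1.0, 0.5])

    # Convolving the source with each ear's response gives the two signals
    # that would be played over headphones to place the sound in space.
    left_ear = np.convolve(mono, hrtf_left)
    right_ear = np.convolve(mono, hrtf_right)
    print(left_ear)
    print(right_ear)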

This portion of a VR system can really enhance the visual and tactile responses. Imagine hearing the sound of footsteps behind you in a dark alley late at night; that is how important 3D sound really is. The third important sense that we use in everyday life is that of touch. There is no way of avoiding the feeling of touch, and thus this is one of the technologies being researched most feverishly. The two main types of feedback being researched are force-reflection feedback and tactile feedback.

Force feedback devices exert a force against the user when they try to push something in a virtual world that is ‘heavy’. Tactile feedback is the sensation of feeling an object, such as the texture of sandpaper. Both are equally important in the development of VR. Currently, the most successful development in force-reflection feedback is the Argonne Remote Manipulator (ARM). It consists of a group of articulated joints encoiled by long bunches of electrical cables. The ARM allows for six degrees of movement (position and orientation) to give a true feel of movement.

Suspended from the ceiling and connected by a wire to the computer, this machine grants a user the power to reach out and manipulate 3D objects that are not real. As is the case at the University of North Carolina, it is possible to “dock molecules” using VR. Simulating molecular forces and translating them into physical forces allows the ARM to push back at the user if he tries to dock the molecules incorrectly. Tactile feedback is just as important as force feedback in allowing the user to “feel” computer-generated objects. There are several methods for providing tactile feedback.

Some of these include inflating air bladders in a glove, arrays of tiny pins moved by shape-memory wires, and even fingertip piezoelectric vibrotactile actuators. The latter method uses tiny crystals that vibrate when an electric current stimulates them. This design has not really taken off, however; the other two methods are being more actively researched. According to a report called “Tactile Sensing in Humans and Robots,” distortions inside the skin cause mechanosensitive nerve terminals to respond with electrical impulses. Each impulse is approximately 50 to 100 mV in magnitude and 1 ms in duration.

However, the frequency of the impulses (up to a maximum of 500/s) depends on the intensity of the combination of the stresses in the area near the responsive receptor. In other words, the sensors that register pressure in the skin are all basically the same, but they can convey a message over and over to give the feeling of pressure. Therefore, any tactile response system must update at a frequency of about 500 Hz in order to match the tactile acuity of the human. Right now, however, the gloves in use serve as input devices.

One such device is the DataGlove. This well-fitting glove has bundles of optic fibers attached at the knuckles and joints. Light is passed through these optic fibers at one end of the glove. When a finger is bent, the fibers also bend, and the amount of light that makes it through the fiber can be measured and converted into an estimate of how far each joint is bent, as sketched below.
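
As a rough sketch of how such a reading might be interpreted, assume each joint sensor reports a light intensity somewhere between a calibrated “flat” value and a “fully bent” value; the calibration numbers and the simple linear mapping below are invented purely for illustration.

    # Hypothetical calibration for one optic-fiber joint sensor: the light
    # intensity measured when the finger is flat versus fully bent.
    INTENSITY_FLAT = 1.00
    INTENSITY_BENT = 0.35
    MAX_BEND_DEGREES = 90.0

    def bend_angle(intensity):
        # Map a raw light reading to an approximate joint angle, assuming the
        # loss of light grows roughly linearly as the finger bends.
        span = INTENSITY_FLAT - INTENSITY_BENT
        fraction = (INTENSITY_FLAT - intensity) / span
        fraction = min(max(fraction, 0.0), 1.0)  # clamp noisy readings
        return fraction * MAX_BEND_DEGREES

    print(bend_angle(0.68))  # roughly half bent under these made-up numbers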

The type of glove that is really wanted, though, is one that can be used as both an input and an output device. Jim Hennequin has worked on an “Air Muscle” that inflates and deflates parts of a glove to allow the feeling of various kinds of pressure. Unfortunately, at this time, the feel it creates is somewhat crude. The company TiNi is exploring the possibility of using “shape memory alloys” to create tactile response devices. TiNi uses an alloy called nitinol as the basis for a small grid of what look like ballpoint-pen tips. Nitinol can take the shape of whatever it is cast in, and can be reshaped; when it is electrically stimulated, the alloy returns to its original cast shape. The hope is that in the future some of these techniques will be used to form a complete body suit that can simulate tactile sensation.

Being able to determine where the user is in the virtual world means you need orientation and position trackers to follow the movements of the head and the other parts of the body that are interfacing with the computer. Many companies have developed successful methods of allowing six degrees of freedom, including Polhemus Research and Shooting Star Technology. Six degrees of freedom refers to the combination of a Cartesian coordinate system for position and an orientation system with rotation angles called roll, pitch, and yaw; a minimal representation is sketched below.
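
A minimal way to represent one such six-degree-of-freedom reading in code is a record holding the three position coordinates and the three rotation angles; the sketch below is only a container for the idea, not any vendor’s actual data format.

    from dataclasses import dataclass

    @dataclass
    class Pose6D:
        # One tracker sample: position in a Cartesian frame plus orientation
        # expressed as roll, pitch, and yaw angles in degrees.
        x: float
        y: float
        z: float
        roll: float
        pitch: float
        yaw: float

    # Example reading: head 1.6 m above the origin, turned 30 degrees left.
    sample = Pose6D(x=0.0, y=0.0, z=1.6, roll=0.0, pitch=0.0, yaw=30.0)
    print(sample)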

The ADL-1 from Shooting Star is a sophisticated and inexpensive (relative to other trackers) 6D tracking system which is mounted on the head and converts position and orientation information into a form readable by the computer. The machine calculates head/object position by the use of a lightweight, multiply-jointed arm. Sensors mounted on this arm measure the angles of the joints. The computer-based control unit uses these angles to compute position-orientation information so that the user can manipulate a virtual world. The joint angle transducers use conductive plastic potentiometers and ball bearings, making the machine heavy duty.

Time lag is eliminated by the direct-reading transducers and a high-speed microprocessor, allowing for a maximum update rate of approximately 300 measurements per second. Another system, developed by Ascension Technology, does basically the same thing as the ADL-1, but the sensor is in the form of a small cube which can fit in the user’s hand or in a computer mouse specially developed to encase it. The Ascension Bird is the first system that generates and senses DC magnetic fields. The Ascension Bird first measures the earth’s magnetic field and then the steady magnetic field generated by the transmitter.

The earth’s field is then subtracted from the total, which yields true position and orientation measurements. Existing electromagnetic systems transmit a rapidly varying AC field. As this field varies, eddy currents are induced in nearby metals, which causes the metals to become electromagnets that distort the measurements. The Ascension Bird uses a steady DC magnetic field, which does not create an eddy current. The update rate of the Bird is 100 measurements per second. However, the Bird has a small lag of about 1/60th of a second, which is noticeable. The baseline subtraction is sketched below.
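
The subtraction described above can be pictured as removing a previously measured baseline (the earth’s steady field) from each new reading, leaving only the transmitter’s contribution; the three-component vectors below are invented numbers used only to show the arithmetic.

    import numpy as np

    # Baseline measured with the transmitter off: the earth's steady field.
    earth_field = np.array([0.20, -0.05, 0.43])

    # A later reading with the transmitter on: the earth's field plus the
    # transmitter's contribution at the sensor's current position.
    total_field = np.array([0.32, 0.10, 0.51])

    # Subtracting the baseline isolates the transmitter's field, from which
    # position and orientation can then be computed.
    transmitter_field = total_field - earth_field
    print(transmitter_field)  # [0.12 0.15 0.08]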

Researchers have also thought about supporting the other senses, such as taste and smell, but have decided that it is unfeasible to do so. Smell would be possible, and would enhance reality, but there is the problem that only a limited spectrum of smells could be simulated. Taste is basically a disgusting premise from most standpoints. It might be useful for entertainment purposes, but it has almost no purpose for researchers or developers. For one thing, people would have to put some kind of receptors in their mouths, and it would be very unsanitary.

Thus, the main senses that are relied on in a virtual reality are sight, touch, and hearing.

Applications of Virtual Reality

Virtual reality has promise for nearly every industry, ranging from architecture and design to movies and entertainment, but the field that stands to gain the most from this technology is science in general. The money that can be saved by examining the feasibility of experiments in an artificial world before they are carried out could be great, and the money saved on energy used to operate such things as wind tunnels could be quite large.

The best example of how VR can help science is the “molecular docking” experiments being done in Chapel Hill, North Carolina. Scientists at the University of North Carolina have developed a system that simulates the bonding of molecules. But instead of using complicated formulas to determine bonding energy, or illegible stick drawings, the potential chemist can don a high-tech head-mounted display, attach themselves to an artificial arm from the ceiling, and actually push the molecules together to determine whether or not they can be connected.

The chemical bonding process takes on a sort of puzzle-like quality, in which even children could learn to form bonds using a trial-and-error method. Architectural designers have also found that VR can be useful in visualizing what their buildings will look like when they are put together. Often, using a 2D diagram to represent a 3D home is confusing, and the people that fund large projects would like to be able to see what they are paying for before it is constructed. A fascinating example would be that of designing an elementary school.

Designers could walk through the school from a child’s perspective to gain insight on how high that water fountain is, or how narrow the halls are. Product designers could also use VR in similar ways to test their products. NASA and other aerospace facilities are concentrating research on such things as human factors engineering, virtual prototyping of buildings and military devices, aerodynamic analysis, flight simulation, 3D data visualization, satellite position fixing, and planetary exploration simulations.

Such things as virtual wind tunnels have been in development for a couple of years and could save money and energy for aerospace companies. Medical researchers have been using VR techniques to synthesize diagnostic images of a patient’s body and to do “predictive” modeling of radiation treatment using images created by ultrasound, magnetic resonance imaging, and X-ray.

A radiation therapist in a virtual world could view and expose a tumour at any angle and then model specific doses and configurations of radiation beams to aim at the tumour more effectively. Since radiation destroys human tissue easily, there is no allowance for error. Also, doctors could use “virtual cadavers” to practice rare operations which are tough to perform. This is an excellent use because one could perform the operation over and over without the worry of hurting any human life.

CMIP vs. SNMP: Network Management

Imagine yourself as a network administrator, responsible for a 2,000-user network. This network reaches from California to New York, with some branches overseas. In this situation, anything can, and usually does, go wrong, but it would be your job as a system administrator to resolve each problem as it arises as quickly as possible. The last thing you would want is for your boss to call you up, asking why you haven’t done anything to fix the two major systems that have been down for several hours.

How do you explain to him that you didn’t even know about it? Would you even want to tell him that? So now, picture yourself in the same situation, only this time you are using a network monitoring program. You sit in front of a large screen displaying a map of the world, leaning back gently in your chair. A gentle warning tone sounds, and looking at your display, you see that California is now glowing a soft red in place of the green glow just moments before. You select the state of California, and it zooms in for a closer look.

You see a network diagram overview of all the computers your company has within California. Two systems are flashing, with an X on top of them indicating that they are experiencing problems. Tagging the two systems, you press enter, and with a flash, the screen displays all the statistics of the two systems, including anything they might have in common that could be causing the problem. Seeing that both systems are linked to the same card of a network switch, you pick up the phone and give that branch office a call, notifying them not only that they have a problem, but how to fix it as well.

Early in the days of computers, a central computer (called a mainframe) was connected to a number of dumb terminals using standard copper wire. Not much thought was put into how this was done because there was only one way to do it: they were either connected, or they weren’t. Figure 1 shows a diagram of these early systems. If something went wrong with this type of system, it was fairly easy to troubleshoot: the blame almost always fell on the mainframe system.

Shortly after the introduction of Personal Computers (PCs) came Local Area Networks (LANs), forever changing the way in which we look at networked systems. LANs originally consisted of just PCs connected into groups of computers, but soon after there came a need to connect those individual LANs together, forming what is known as a Wide Area Network, or WAN. The result was a complex connection of computers joined together using various types of interfaces and protocols. Figure 2 shows a modern-day WAN.

Last year, a survey of Fortune 500 companies showed that 15% of their total computer budget, 1.6 million dollars, was spent on network management (Rose, 115). Because of this, much attention has focused on two families of network management protocols: the Simple Network Management Protocol (SNMP), which comes from a de facto standards-based background of TCP/IP communication, and the Common Management Information Protocol (CMIP), which derives from a de jure standards-based background associated with the Open Systems Interconnection (OSI) (Fisher, 183).

In this report I will cover the advantages and disadvantages of both the Common Management Information Protocol (CMIP) and the Simple Network Management Protocol (SNMP), as well as discuss a new protocol for the future. I will also give some good reasons why I believe that SNMP is a protocol that all network administrators should use. SNMP is a protocol that enables a management station to configure, monitor, and receive trap (alarm) messages from network devices (Feit, 12). It is formally specified in a series of related Request for Comments (RFC) documents, listed here.
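To make the idea of a management station concrete, here is a minimal, purely conceptual sketch in Python of the polling-and-trap pattern that SNMP formalizes. It is not a real SNMP implementation; the device name, OID-like strings, and messages are hypothetical, and an actual deployment would use an SNMP library speaking the real protocol over the network.

```python
# Conceptual sketch of SNMP-style management: a station polls devices for
# values keyed by OID-like strings and reacts to trap (alarm) messages.
# This is an illustration only, not a real SNMP stack.

class Device:
    def __init__(self, name, values):
        self.name = name
        self.values = values          # maps OID-like names to current values

    def get(self, oid):
        """Analogous to an SNMP GET: return the value stored under an OID."""
        return self.values.get(oid)

class ManagementStation:
    def __init__(self):
        self.devices = []

    def add_device(self, device):
        self.devices.append(device)

    def poll(self, oid):
        """Poll every managed device for one OID and report the results."""
        for dev in self.devices:
            print(f"{dev.name}: {oid} = {dev.get(oid)}")

    def receive_trap(self, device, message):
        """Analogous to an SNMP trap: the device pushes an alarm to us."""
        print(f"TRAP from {device.name}: {message}")

if __name__ == "__main__":
    station = ManagementStation()
    switch = Device("ca-branch-switch", {"ifOperStatus.3": "down",
                                         "sysUpTime.0": 123456})
    station.add_device(switch)

    station.poll("ifOperStatus.3")                        # periodic monitoring
    station.receive_trap(switch, "link down on card 3")   # asynchronous alarm
```

The two paths shown, periodic polling and asynchronous traps, are the same two ways the network monitoring scenario above would learn that the California systems had failed.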

Mind vs Machine

In 1792 Mary Wollstonecraft, in her work A Vindication of the Rights of Woman, posed the question, “In what does man’s pre-eminence over the brute creation consist?” She answers, “In reason and virtue by which mankind can attain a degree of knowledge.” Today, no one would argue that man and woman are not intellectually equal, or that humans have a superior intellectual capacity over the brute creation, but what would they say about humankind versus the machine? We have always felt ourselves superior to animals by our ability to reason — “to form conclusions, judgments, or inferences from facts or premises” (Random House Dictionary).

Philosophers have argued for centuries about what defines reason; now, on the dawn of the 21st century, this age-old question must be revisited. Since the ENIAC, the first mainframe, hummed to life in 1946, the chasm between humankind and machine has appeared to dwindle. Computers have insinuated themselves into the lives of millions of people, taking over the performance of mundane and repetitive tasks. With the constant improvement of computer technology, today’s supercomputers can outperform the combined brain power of thousands of humans.

These machines are so powerful that they can store an essay sixteen billion times longer than this one in active memory. With the development of artificial intelligence software, computers can not only perform tasks at remarkable speed, but can “learn” to respond to situations based on various inputs. Can these machines ever procure “reason and virtue,” or are they simply calculators on steroids? We have now reached the point where we must redefine what constitutes reason in the 21st century.

On the intellectual battlefield, in February 1996, thirty-two chess pieces represented the most recent challenge to the belief that thought is exclusive to humans. Kasparov, the world chess champion, faced off against one of IBM’s finest supercomputers, Deep Blue. Chess, a game of logic and reason, would be a perfect test of a computer’s ability to “think.” In this Information Age battle of David vs. Goliath, the machine clearly had the advantage. Deep Blue is capable of playing out 50 to 100 billion positions in the three minutes allotted per turn.

Nonetheless, the silicon brain was no match for the cunning intellect of the human mind. Deep Blue lacked the ability to anticipate the moves that Kasparov would make. In preparation for the game, Kasparov adopted a strategy of play tailored to the computer. He would not be aggressive. He would not play for a psychological advantage. He would not make moves where pure calculation would be dominant. Kasparov, in fact, found Deep Blue’s playing predictable. He could learn the computer’s style of play. The computer could not do the same with his. Deep Blue lacked that intuitive edge which separates the victors from the defeated.

On the chess battlefield, man proved what separates him from the “brute creation,” or in this case the silicon creation: the ability to reason and intuit. The question remains as to where to draw the line between thought and calculation. Is thought the process by which the answer is reached or the answer itself? Is the intuition and creativity of humankind only a complex algorithm yet to be bestowed on our silicon friends, or does humankind have a special gift that continues to separate us from machines, some intangible spirit inside every one of us which separates humankind from the brute creation of electronic circuits…

How Do Dixons and Tandy Add Value To The Products They Sell?

How do Dixons and Tandy add value to the products that they sell, and, in doing so, what benefits are passed on to the consumer? Do high street consumer electronics stores offer better value for money than their mail-order counterparts? The raw price figures show that, obviously, the high street stores cost more than the mail-order stores, but are the benefits that the high street stores bring worth the extra price? I took the prices of five types of products: a large stereo, a portable system, a small television, a video recorder, and a computer.

The large stereo was an AIWA NSX-V710 and the portable system was a Sanyo MCD 278. The small televisions that I chose were not available in both stores, and so I had to choose similar models: the Matsui 14″ Remote from Tandy and the Nokia 14″ Remote from Dixons. Both models were available from the mail-order supplier, at the same price. The video recorder that I chose to use was an AKAI VSG745, which was in fact available from both stores. The computer was the most difficult part of the system to match, as the Dixons systems came with some added bonuses such as extra multimedia software and Internet capability.

I therefore reduced the price of the Dixons machine to account for these differences, by deducting the price that it would cost to upgrade the Tandy machine. Giving the Tandy computer Internet capability would cost 150, so that was deducted, and the multimedia software would have cost 50, so that was deducted as well. The computer specification I aimed to have as a common platform was an Intel Pentium 120MHz machine, with 8MB RAM, a 14″ monitor, at least a 1 GB hard disk and MPC level 2 capability (i.e. able to use CD-ROM multimedia titles).

The mail order supplier I chose to match these specifications with was Computer Trading, as they offered a system which was a close match to the Tandy and Dixons ones, while having a low price. The common factor with all the products is that they are all more expensive than their mail-order counterparts. This means that the high street stores ‘add value’. Adding value is taking one or more parts or products and combining, changing or adding to them, in such a way that the perceived value of the product is increased by more than the cost of the change.

For example, you might expect to pay 150 more than the cost of the parts when buying a hi-fi, but the cost of putting the hi-fi together is much less than 150. The price, however, must not be too high, as the customer has to perceive the value of the product to be that at which it is priced for a sale to take place. Within any company there will be several ‘departments’, each adding value in their own particular way. How much value do Dixons and Tandy add?

The only way in which this question can be answered is by looking at the figures themselves, and at how much items cost from Dixons and Tandy as opposed to the mail order companies. The figures that I obtained by looking through the stores and magazines were as follows: Here we can see that every product is more expensive from the shops than in the mail-order catalogue. You can see that the products cost very much the same from both of the high street stores, at roughly 125% of the mail-order price. This means that the stores make a 25% mark-up on every product that they sell.
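As a quick illustration of that calculation, the sketch below uses made-up prices rather than the actual survey figures to show how a shop price of roughly 125% of the mail-order price corresponds to a 25% mark-up.

```python
# Worked example of the mark-up calculation; the prices are illustrative only.
mail_order_price = 400.0
shop_price = 500.0          # roughly 125% of the mail-order price

markup = (shop_price - mail_order_price) / mail_order_price * 100
print(f"Mark-up over mail order: {markup:.0f}%")   # prints 25%
```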

The fact that the figures from Tandy and Dixons are very similar shows that there is another factor coming into play. This could be one of two things: the cost of supplying the services to the customer is a high proportion of the added cost, meaning that different margins of profit make little or no difference to the price; or there is competition, and each store is trying to match or beat the other one to attract more custom.

Dixons and Tandy – Adding value in action

Obviously, Dixons and Tandy are very similar in that they do not manufacture anything.

However, value is added in several ways, as a perception from the customer. The products are available instantly: they can be bought and taken straight out of the shop, as opposed to having to wait for delivery. The products have financing deals available, such as 0% APR (Annual Percentage Rate) on a loan; for example, this would mean that you get the product delivered, but pay by monthly instalments over two years. The products from Tandy, valued at 299 or over, come with a free Sky satellite system. You can try a product before you buy it. You receive sales and after-sales support, and advice that is not really given over the phone.

Also, the shop is close to home, so it is easy to get the product repaired or serviced. Free gifts are often supplied, or complementary products discounted, when a product is purchased. There is a range of products available, because the people who would order from a mail order catalogue are likely to know what they want, whereas those who go into a shop may need advice on which product is best for them. If we split these perceptions into categories, we can see that each perception is a product of a different type of adding value: points 1, 5 & 7 come from the company moving all the products into one place, ready for sale.

Points 2, 3, 4 & 6 come from the company’s marketing strategy and how they sell the products. This should therefore show where the value is added. In theory, this would show that value is added mainly in marketing, and then in relocating the products. Also, shop staff have to be paid, so some value is added there.

Conclusion

From the evidence shown, we can state that high street stores such as Dixons and Tandy add value in two main ways. These two ways are ‘convenience options’ and marketing.

Of these two, marketing is approximately twice the size of consolidation. Therefore we can say that Dixons and Tandy add value primarily through marketing, and then through ‘convenience options’. Also, in answer to the question ‘are the benefits that the high street stores bring worth the extra price?’, we can say that, apart from quicker delivery and the financing options available, all of the services given are pre-sales services. This means in theory you could go into a Dixons or Tandy, receive advice from them, then buy the product from a mail-order company. Thus the answer to this question is probably ‘no’, in most cases.

The Applications of Technology in the First Decade of the Twenty-First Century

A quote I heard many times when I was in high school, and which I now know traces back to Sir Francis Bacon, one of our earliest scientists, or philosophers as they were then called, is the statement “Knowledge is power.” Today, I believe that the fuller, more correct statement is to say, “the application of knowledge is power.” The study of science and technology subjects will broaden our opportunities in life. As we continue to advance toward the 21st century, now less than 30 days away, we are well aware that technology is possibly the hottest industrial commodity around the world today.

In the years ahead, it will be an increasingly critical factor in determining the success or failure of businesses. It is the fuel many of us are looking at to help us win this race to the 21st century. To do that, we should make technology matter. In this paper I am going to share my technology forecasts. I try to focus my new forecasts a decade into the future – the first decade of the 21st century – because that is how far most businesses need to be looking ahead. There has never been a neutral or value-free technology. All technologies are power.

They evoke economic and social consequences in direct proportion to their dislocation of the existing economy and its institutions. I believe that technologies such as biotechnology and genetic engineering, intelligent materials, the miniaturization of electronics, and smart manufacturing systems and controls will be the hottest technologies in the next decade. I am going to put together a list of what I think are the top ten innovative products that will result from those technologies. Number one on the list is genetic pharmaceuticals.

These are pharmaceutical products that will come from the massive genetic research going on around the world today. In ten years, we will have new ways to treat many of our ills – from allergies to AIDS. We may see the discovery of new methods of treatment for various types of cancer, for multiple sclerosis, osteoporosis, Lou Gehrig’s and Alzheimer’s disease, to name just a few. The biotechnology frontier, especially developments in the field of genetics, promises – and to some degree has already achieved – a revolution in agriculture and human health care.

By providing the means to develop plant species that are more disease- and pest-resistant, more tolerant of drought, and able to grow during extended periods of adverse conditions, these technologies will very likely provide future increases in agricultural productivity. So far, these techniques have not added much to world food production; recent growth has come primarily from increasing the acreage in production, in response to higher grain prices. However, further expansion of productive land is limited, and the increased application of fertilizer appears to be reaching a point of diminishing returns.

Therefore, increased agricultural productivity from this new field could be essential to feed the growing population. The mapping of human and plant genomes, a process already well underway, will provide greatly increased knowledge of genetic processes and, to some extent, information about how to control them. For humans, this will provide the means to deal with diseases that have genetic origins or that result from the malfunctioning of genetic material in the body.

These diseases potentially include cancer, cystic fibrosis, Gaucher’s disease, hemophilia, rheumatoid arthritis, AIDS, hypercholesterolemia, and many others. Furthermore, genome analysis of an individual can indicate a propensity to diseases whose symptoms have not yet been manifested. Scientists believe that many psychological and behavioral attributes are genetically controlled and are therefore subject to diagnosis and, eventually, for aberrant conditions, correction. Such uses of this technology, of course, raise serious social and ethical questions that must be considered.

Other applications of biotechnology might produce novel proteins to replace meat in food, stimulate awareness and evaluation of microbial threats (including archaea, ancient bacteria that are perhaps more adaptable and potentially hazardous than was previously thought), and create ocean plantations to produce and distribute biological products. The process of cloning has been perfected, as evidenced by the fact that in 1997 a sheep was successfully cloned in Scotland. Hence, biotechnology could eventually eliminate food shortages, improve health, and extend life expectancy. Number two on the list is the personalized computer.

The personal computer now sitting on our desks will be replaced by a very powerful, personalized computer. It will be able to send and receive wireless data. It will recognize your voice and follow your voice commands. It will include a variety of security and service tools that will make the computer fit your own individual needs. When we turn on our personalized computer, the intelligent agents built into it might automatically show us highlights and stories from last night’s football game. It could display the current stock report on our own portfolio and ask if we would like to make any changes.

It would give us a traffic report for our normal commute to work and suggest an alternative, if necessary. Finally, it may let us know what the lunch specials are at our favorite restaurants and ask if we would like to make reservations. The third product on my list is the multi-fuel automobile. In ten years, our cars will have to meet even stricter requirements for emissions and efficiency. And to do that, we are going to see a gradual shift to other fuel and power sources. Barring a major oil crisis, we don’t see a rapid shift to those alternatives. The internal combustion engine will still have a major place in ten years.

But we will see an increase in vehicles running on energy sources like batteries, kinetic energy, fuel cells, and hybrid sources. At first, these will be used in low-weight vehicles that typically travel short distances. But as these alternative-powered vehicles are introduced into the general population, many of our experts believe that they will likely run on a combination of fuels – like reformulated gasoline, electricity, and compressed natural gas. The fourth product is the next-generation television set. Ninety-nine percent of American homes have televisions, and over the next decade, we will be replacing them.

These new television sets will be wide-screen, digital, high-definition models with extremely sharp clarity. Many will be so flat that we will hang them on the wall much like a large painting. Eventually, these televisions will merge with the personalized computer I mentioned earlier. Of course, we are going to have to pay for all these wonderful products, and we will probably be doing that with the fifth item on the list, electronic cash. We will be using electronic money for everything from buying soda in a vending machine to making an international transaction over our computer.

In ten years, our pockets might not jingle, because credit-card-sized smart cards will have all but replaced our cash and keys. At colleges, we will develop a system that will allow students to pay their tuition, sign up for classes, download textbooks onto their computers, do their laundry, enter their dorms, and order a pizza, all with one smart card. That card, of course, will be directly linked to their parents’ bank account! The next product on my list is the home health monitor. These devices will be inexpensive, simple to use, and non-invasive (which basically means they won’t puncture our skin).

We will use them to monitor our health conditions right at home. They will be able to track a variety of our physical functions – like liver function and levels of cholesterol, triglycerides, sugar, hormones, water, salt, and potassium. Monitoring our total health will be as simple as keeping track of our weight today. The future industrial applications of biology and computing will allow more people than ever before to participate in creating imaginative services, to build new markets and to generate personal wealth. Number seven on the list is another one for our cars: smart maps and global positioning systems.

Already, we can get a global positioning system in our cars, and it will show us where we are on a map and plot routes. But it won’t give us any information about what’s going on around us. That is what’s going to be different in ten years. We will be combining global positioning systems with the traffic-management infrastructure to help manage traffic flow. So, our dashboard map will show us where traffic problems are, and it will plot the best route around them. We will also be using global positioning systems to help stop crime by giving us the power to monitor the location of our cars and other valuables.

And we will be able to follow the exact location of our most precious valuables: parents will be able to follow the location of their children as they walk home from school. The eighth product on my list is also one we might have in our cars, and we might also have it in our office buildings, pipelines, airplanes, and even our sports equipment. These are new, smart materials that will give off warnings when they detect excessive stress. Materials in bridges or airplanes, for instance, could send a signal to a central operator when they detect stress, and that operator could send a return signal for the materials to respond to the stress.

Automobile parts could give us a similar warning when they are approaching the point of breakdown. What is really amazing is that these materials will be designed with sensors built into the molecular structure of the material. And, not too far in the future, they will be inexpensive enough to be in products all around us. Ninth on my list are anti-aging and weight-control products. That is something we would all like to see. Over the next decade, we will see the development of a host of high-tech weight-control and anti-aging products for all the aging baby boomers.

Unfortunately, no Fountain of Youth is on the horizon. If there were, I would be back in the lab working on it myself. Nevertheless, new products will make aging a little less traumatic. In fact, we think technology will allow us to look forward to active and comfortable retirements well into our 80s. These new products may include weight-control drugs that use the body’s natural weight-control mechanisms, wrinkle creams that actually work, foods with enhanced nutrients, and an effective cure for baldness. The final item on my list is not technically a single, specific product.

It is more a trend that will change the way we obtain many products, especially computers and major household appliances. Within the next decade, we will begin to lease these products rather than buy them. Already, some utilities are developing programs that would allow you to lease expensive appliances (like water heaters) that use their respective sources of power. The trend for utilities is that over the next several years they will transform into “comfort companies.” Instead of selling you a furnace, for instance, they will sell you the comfort of maintaining the proper temperature in every room of your house.

Those are my predictions. But what may be even more important are the lessons we have learned as we’ve put together the forecasts. Three of those lessons are particularly noteworthy. They apply to business decisions that leaders in any industry make in this race to the 21st century. The first lesson we learned is that we have to be more aggressive than ever in tracking technology. Technology is growing and spreading around the world faster than zebra mussels in the Great Lakes. Historically, the United States has taken the entrepreneurial lead in developing new technologies. Biotechnology is a good example.

But today, that entrepreneurial spirit is spreading around the globe, and hot new technologies are growing everywhere. And here is the problem: that makes our jobs even more challenging because, one, more technology means increased competitive pressure, and two, more technology means it will become harder and harder to identify and keep track of the specific developments that can make a real difference for us, or for our competitors. I mentioned that the increased emphasis on time-to-market has been one of the big competitive changes in R & D (research and development) over the past twenty years.

We see it every day in the United States. Just recently, a new toothbrush was developed for Teledyne WaterPik five times faster than any other on the market. Another example is the Battelle company, which developed the coating that was the key ingredient for a next-generation interactive globe. These were completely new developments, but the companies had to take them from the idea stage to the store shelf in a year or less – and, of course, in time for the Christmas buying season. Therefore, time-to-market is the key competitive factor.

Of course, to get new products out on the market quickly, we have to be able to identify and acquire the key developments in today’s widespread sea of technology. The second lesson is one that folks in Ames may be as familiar with as we are in Chicago: We’ll go crazy trying to predict ISU-Illinois basketball games. In other words, stick to what you know – and team up with people who know the rest. Companies which have business in technology, especially technology in several key markets, are often comfortable making predictions.

We cannot predict who is going to win Olympic medals, but we can forecast how technology will change the Olympic games over the next twenty years. Even though my dorm sits practically across the street from ISU, and I can see Hilton Coliseum from my room window, there is no way I am going to try to predict what might happen when ISU meets up with Illinois. And with technology and global markets expanding in nearly every conceivable field, industry is facing a similar challenge. It’s getting harder and harder to know everything we need to know about every aspect of our business.

Today, for more and more companies, the answer is the alliance. Companies are focusing their internal efforts on their own core competencies, and they are developing alliances with other organizations to bring in technology related to their business. Through these partnerships, they are gaining access to new technologies and world-class scientists and engineers – and at the same time reducing costs. Over the next ten to fifteen years, we are going to see business go one step further. This movement toward more technology alliances and partnerships is really just a transition.

Basically, we are going to see the emergence of the virtual company and the total R & D alliance. A company might maintain a vice president of technology to manage a network of R & D alliances with suppliers, universities, and R & D organizations. Maybe it would have a staff of its own scientists and engineers housed right in one or more of those other organizations. This type of setup could be the ultimate way for a company to focus its resources on its core business and still be able to access the latest technology at the least cost.

That brings me to the third and final lesson about the race to the 21st century. So far, I’ve mentioned scanning for technology and building alliances. The third point refers to making technology matter. As I mentioned above, technology alone is not the fuel that can give us the lead in this race we are all in. There were many amazing technologies that did not make our top-ten list. They were fascinating to dream about. But that does not mean they would lead to valuable products. And it gets even more complex, because many of these technologies will merge and open up vast new areas for growth.

For instance, when we cross biotechnology and advanced electronics, that opens up a whole new field of biologically based electronics. Will we be growing organic computer chips? Many, if not most, of tomorrow’s top products will come from this merging of two or more technologies. Mastering this vast web of technology will be a necessary step in winning the race to the 21st century and beyond. But it won’t be sufficient. The companies that will win that race are the companies that will be able to anticipate market forces and acquire and incorporate the right technology into their business.

We need to combine a savvy understanding of market forces with a thorough knowledge of available and potential technology. That combination will be the fuel that powers us to develop the hottest products of tomorrow. Innovative thinking, powered by advanced technology, fueled by consumer demand, and driven by responsibility and common sense, will allow us to take the lead in preserving the environment and keeping customer priorities front and center. By taking that type of initiative to link technology to the marketplace, we can use technology to do more than just improve efficiency.

Our goal should be to capture and use technology to gain value and grab a competitive edge. The story of Teledyne WaterPik’s SenSonic toothbrush, which I mentioned earlier, is one of the best recent examples of a company using that combination of market awareness and technology initiative to grab a competitive edge. They are using technology and market awareness to provide their customers with a more valuable product. And that is how they are working to win the race to the 21st century. I have made a lot of predictions about technology and about this race that we are all in. But still, there is really only one prediction that I can guarantee.

It is that market and technology forces will continue to transform industry, and we will all have to keep up with them if we want to succeed. We will all have to be futurists. Each business will have to develop its own forecast of the leading technology and market trends that will impact the company in the decade ahead. And they will have to continually monitor and revise that forecast and their own technology strategies. Technology alone will not secure our success. But focusing on the future with one eye on the marketplace and the other on technology trends – that is what will put us in the fast lane to the 21st century.

Model Train Building

The world of model train building has grown greatly with the aid of computers and technology to enhance the fun of building. Technology has long been a part of model train building, with the addition of lights, bells, and whistles to capture your interest and imagination. But with the latest generation of building comes the influx of technology and the computer. The computer brings along a new breed of builders who plan track layouts, buy parts on the Internet, receive updated news, and chat with other enthusiasts.

The most notable difference that computers have brought to the world of model train building is in software programming. There are now numerous software packages on the market that engage hobbyists in the challenge of real yard operations on a smaller scale. These programs allow the person to move loads between depots and keep track of revenues. They allow simulations of operational switches between tracks, multiple-train operation, and the coupling and uncoupling of railcars.

But the greatest benefit that they bring is allowing the person to design a layout using an electronic template, ensuring that all measurements in the layout will work before a single piece of track is laid. Many of these software programs even play on the appeal of using a computer for design in their names – with names like CyberTrack, The Right Track Software, and Design Your Own Railroad, who could not want to become involved in their use? This software ties into many other aspects of building that encourage the use of the Internet in this hobby.

Many of these programs allow the hobbyist realistic railyard action complete with sights, sounds and even planned crashes. In the event of a crash you are always going to need replacement parts for repair, or maybe you just want to upgrade or expand your track system. This brings in the convenience of using the Internet for product ordering. With few stores in scattered areas, it may be difficult or expensive for some hobbyists to get to these locations for the parts that they need. The Internet brings the store right into their home with online catalogs and parts stores.

One mainstream over-the-counter catalog, The Atlas Catalog, provides an electronic version, The Atlas Online Catalog, for Internet users to order parts through a secure online catalog. Even more important for some people are the online magazines that provide up-to-the-minute breaking news. Online magazines such as Where Bigger is Better or The Nickel Plate Road Historical and Technical Society offer many services and information, such as online workshops, product reviews, club lists, train shows, technology updates, and even toy train links. These sites have exploded in size and number with the ease of building Web pages.

Almost anyone with a personal computer can create a page to contact other hobbyists. With this contact comes the use of chat rooms for these hobbyists to talk shop about their mutual interest. While many of these rooms are just a place for people with the same interest to socialize, they also allow them to find others to trade with, get ideas, and receive updated news. As always, manufacturers are in the business of selling their products, and with all these flashy new selling points built around the personal computer, how could hobbyists in this field not want to become involved?

One of the leading factors driving the trend toward these high-tech trains is the growing power and falling prices of the parts. Computer chips and memory that once cost hundreds of dollars can now be had for a fraction of the cost. As the price of personal computers falls, so does the price of software and components for the home user, making the appeal of their use even stronger. The growing use of high-tech components in traditional train building brings “a blurring of the lines between what is a software product and what is a toy product,” says Lego spokesman John Dion.

Regardless of whether the hobbyist is an experienced train builder or a novice, it can be said that the influence of computers and technology will continue to grow as computers expand into more aspects of our modern life. “Children are born into a technological world. Their frame of reference is that there’s always technology,” says Chris Byrne of Playthings Marketwatch. “What electronics does is it simply enhances and adds a level of reality to the play experience,” Byrne said. As the world of computers expands, so too does the world of model train building.

The use of software programming gives the builder a way to build complex and innovative tracks limited only by his imagination and the resources a computer gives him. Ryan Slata, director of marketing for Playthings Toys, says, “I think kids expect more in their toys these days because technology is all around them; we’re in the computer age and I think that translates down to the toys.” To say that someone has an interest in model trains is always an understatement, and with the use of computers and technology this interest brings the experience to a new level.

Discuss the function of the DMA controller chip in a computer system

DMA is an abbreviation of direct memory access, a technique for transferring data between main memory and a device without passing it through the CPU. Computers that have DMA channels can transfer data to and from devices much more quickly than computers without a DMA channel can. This is useful for making quick backups and for real-time applications. Some expansion boards, such as CD-ROM cards, are capable of accessing the computer’s DMA channels. When you install such a board, you must specify which DMA channel is to be used, which sometimes involves setting a jumper or DIP switch.
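The sketch below is only a loose analogy of that idea: a background Python thread stands in for the DMA controller so that a block transfer can proceed while the "CPU" (the main thread) keeps doing other work. Real DMA is a hardware mechanism programmed through channel registers, not a software thread, so treat this purely as a conceptual illustration.

```python
# Conceptual analogy only: a background thread plays the role of a DMA
# controller copying a block of data while the "CPU" (main thread) keeps
# doing other work. Real DMA is done in hardware, not with threads.
import threading
import time

source = bytearray(b"x" * 1_000_000)   # pretend this is data arriving from a device
destination = bytearray(len(source))

def dma_transfer(src, dst):
    """Stand-in for the DMA controller: copy the whole block, then signal."""
    dst[:] = src
    print("DMA: transfer complete, raising interrupt")

dma = threading.Thread(target=dma_transfer, args=(source, destination))
dma.start()

# The "CPU" is free to do unrelated work while the transfer is in flight.
for i in range(3):
    print(f"CPU: doing other work ({i})")
    time.sleep(0.01)

dma.join()   # in a real system, an interrupt handler would run at completion
print("CPU: data is ready,", len(destination), "bytes received")
```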

Why is DMA so important? Because it allows data to be read from and written to memory without intervention by the CPU. (CMOS setup and ROM BIOS) What is the CMOS setup chip used for in a personal computer? Personal computers contain a small amount of battery-powered CMOS memory to hold the date, time, and system setup parameters. Is the CMOS setup chip the same as the ROM BIOS? No, they are different. If not, what does a ROM BIOS chip do in a personal computer? The BIOS is built-in software that determines what a computer can do without accessing programs from a disk.

On PCs, the BIOS contains all the code required to control the keyboard, display, disk drives, serial communications, and a number of miscellaneous functions. The BIOS is typically placed in a ROM chip that comes with the computer (it is often called a ROM BIOS). This ensures that the BIOS will always be available and will not be damaged by disk failures. It also makes it possible for a computer to boot itself. Because RAM is faster than ROM, though, many computer manufacturers design systems so that the BIOS is copied from ROM to RAM each time the computer is booted. Describe the different types of ROM technology used in ROM BIOS chips.

There are four types of ROM chips: ROM, PROM, EPROM, and EEPROM/flash ROM (EEPROM is also known as flash ROM). Plain ROM, from what I found out, is no longer in use. PROM is a blank chip on which data can be written with a special device called a PROM programmer. EPROM is a special type of memory that retains its contents until it is exposed to ultraviolet light; the ultraviolet light clears its contents, making it possible to reprogram the memory. Flash ROM is a special type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time.

Many modern PCs have their BIOS stored on a flash memory chip so that it can easily be updated if necessary. Another technique is ROM shadowing: when you turn on your computer, the BIOS is read from ROM, copied into RAM, and executed from there because RAM is faster. Describe the difference between ROM chips and RAM chips. A ROM (read-only memory) chip is very slow, while a RAM (random access memory) chip is faster than a ROM chip. Data on a ROM chip has been prerecorded; it cannot be removed and can only be read. In addition, when you turn off your computer, a ROM chip does not lose its contents.

RAM can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding byte. However, there are two basic types of RAM: dynamic RAM and static RAM. The two types differ in the technology they use to hold data, dynamic RAM being the more common type. Dynamic RAM needs to be refreshed thousands of times per second. Static RAM does not need to be refreshed, which makes it faster, but it is also more expensive than dynamic RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

Nielsen Ratings Essay

The following information is pertinent to the vitality and success of the FOX 24 cable-programming national network. It is necessary to discuss the importance of the ratings and shares system to enable FOX to increase viewership in the local TV market of 247,780 (0.235% of the US). This market is highly competitive among the affiliates of the other major networks: ABC, CBS and NBC. The target demographics for FOX include an average age of 28 years with a $55,000 annual income. Fifty-six percent of viewers are male while 43% are female, and of these only 37% have a college degree.

Due to such specifics, it is imperative that FOX keep a variety of shows that appeal to a wide range of young adults. The FOX Family Channel is more oriented towards children and families. The data compiled by Nielsen Media Research is essential to TV programming across the United States and in Canada. Nielsen monitors television ratings and estimates audience sizes with a high degree of accuracy, allowing the television marketplace to function effectively. This information provides programmers and commercial advertisers with an awareness of people’s viewing habits.

Depending on air times and the popularity of certain shows, the station calculates the advertising fees that generate a majority of its revenue. All TV shows are ranked in order each week according to their ratings. Ratings are simply a tally of how many viewers watched a specific TV program and are surveyed nationwide every minute of every day. The “sweeps” are four months out of every year (November, February, May and July) when Nielsen measures every local TV market in detail in addition to the ongoing national surveys.

The rating system involves mathematical statistics with a focus on percentages. For example, there are 100 million homes in the United States with TV sets. A rating aims to answer the direct question, “What percentage of the television homes in the country is watching a particular telecast?” A rating of 15 means 15%, or 15 million homes, were watching. At certain slow times during the day and night it is difficult to get viewers. The total viewing audience, the homes that are actually watching their television sets, is called the HUT, or Homes Using Television.

At 8 p.m., the daily peak for television viewing, the HUT is approximately 70. That is, 70% of the television homes in the country are watching something. At 2 a.m. it is closer to only 5%. A typical prime-time HUT is 60, which represents 60 million homes. By using the rating and the HUT it is possible to determine the share. The share is the calculation of what percentage, or “share,” the rating is of the HUT. In other words, what share of the available viewers did a program reach? A 15 rating out of a common HUT of 60 is a 25 share, because 15 is 25% of 60.
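The relationship between rating, HUT and share in that example is a one-line calculation; the sketch below simply restates the 15 rating / 60 HUT / 25 share arithmetic from the text.

```python
# Share = (rating / HUT) * 100, using the figures from the example above.
def share(rating, hut):
    """Percentage of homes actually using television that watched the show."""
    return rating / hut * 100

rating = 15   # 15% of all TV homes watched the telecast
hut = 60      # 60% of TV homes had a set in use (typical prime time)
print(f"Share: {share(rating, hut):.0f}")   # prints 25
```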

Having a 15 rating and a 25 share is acceptable; the program will then probably be renewed. Lead-in programs are extremely important to understand when dealing with syndicated and local programming. A lead-in program generally premieres at 7:00 PM on weekdays. It is necessary that this program be strong and engaging enough to get viewers to watch it, hopefully persuading them to continue tuning in to the rest of the shows on prime time without switching channels. An interesting lead-in program should make viewers think that the rest of the nightly programming for WTAT will be just as enticing.

During November 2000, 3rd Rock from the Sun was the syndicated lead-in program for FOX, airing at 7:00 PM Monday through Friday. This show brought in an average household rating of 2%, meaning that only 2% of all TV households in the local market area were watching 3rd Rock from the Sun. The share was 4%, meaning that 4% of all TVs turned on saw 3rd Rock. The demographics for the show are fairly consistent across age and gender, showing ratings between 3% and 5%, excluding children ages 2 to 11.

Total households watching 3rd Rock from the Sun equaled 9. Out of 112 adults viewing, men dominated the viewership over women. A mean of 70 men, ages 18 to 54, watched 3rd Rock from the Sun, versus a mean of approximately 35 women ages 18 to 54. In the category of children ages 2 to 11, 37 watched the lead-in program, compared to only 23 teens ages 12 to 17. As the lead-in program, 3rd Rock from the Sun faced tough competition from the other major network affiliates.

Compared to 3rd Rock from the Sun’s paltry 2% rating and 4% share, Frasier pulled in a 9% rating and a 19% share, closely followed by Wheel of Fortune with a 9% rating and an 18% share. Living Single had only slightly higher statistics than 3rd Rock from the Sun, with a 4% rating and an 8% share. Presently, several of FOX’s prime-time shows have had adequate statistics. The most prominent, That 70s Show, which airs on Tuesdays at 8:00 PM, had a rating of 6.7% and a share of 11% out of an average of 10.3 viewers. However, it is ranked in the 49th position, barely in the top 50 TV shows for the week.

Unfortunately, for the past several months of the 2001 season, FOX has failed to provide a Top 20 TV show for prime-time cable programming. NBC leads with the highest-rated shows: Friends, ER, The West Wing and Law & Order. To maintain a strong viewership, FOX should carefully consider reevaluating its syndicated programming for the lead-in and prime-time shows. Without a strong Top 20 program, advertisers are less likely to buy air space from FOX, thus causing revenues to drop and the company to falter.

A Ticking Timebomb: A Question of Technology

To say technology is a function of history, or vice versa, is to run immediately into a problem of endogeneity. Irrespective of time, technology pervades culture (even the most primitive), and it is difficult if not impossible to claim that one directly affects the other. From the discovery of fire to the advent of space travel, technology profoundly impresses the world around it, for better or worse.

Most recently, society has chosen to fixate mostly on technology’s negative aspects, not because of some fundamental change in the nature of such advances, but because of our inability to understand their workings. This evolution of paradigm affords Pirsig the ability to evaluate the role of technology more comprehensively than Burgess could in A Clockwork Orange. Technology and the sciences are epistemological pursuits just like art and nature. They should each be evaluated on the basis of their inherent value, an appreciation that serves Pirsig well:

“The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of the mountain or in the petals of a flower.” Ultimately, Pirsig reconciles his new life with that of Phaedrus. With this reconciliation in hand, Pirsig not only accepts his past but comes to transcend it. Upon reflection, the questions that brought Phaedrus to a nervous breakdown do not bother Pirsig: “What is good, Phaedrus, and what is not good; need we ask anyone to tell us these things?”

Instead, the personal strain has evaporated. He retains a reverence for nature and a contemplative mood, but without self-consciousness. His life is no longer based on conflict, real or imagined. He has transcended the surface differences between technology and art, exterminating fear and removing the stigma of technological ugliness. His life centers on the appreciation of all that he encounters rather than the need to question, or necessarily understand, it.

Development Of Operating Systems

An operating system is the program that manages all the application programs in a computer system. This also includes managing the input and output devices and assigning system resources. Operating systems evolved as the solution to the problems that were evident in early computer systems, and their evolution has coincided with changes in the computer systems themselves. Three cycles are clear in the evolution of computers – mainframes, minicomputers and microcomputers – and each of these stages influenced the development of operating systems.

Now, advances in software and hardware technologies have resulted in an increased demand for more sophisticated and powerful operating systems, with each new generation able to handle and perform more complex tasks. The following report examines the development of operating systems, and how the changing technology shaped their evolution.

First Generation Computers (1945-1955)

In the mid-1940’s, enormous machines capable of performing numerical calculations were created. These machines consisted of vacuum tubes and plugboards, and programming was done purely in machine code.

Programming languages were unheard of during the early part of the period, and each machine was specifically assembled to carry out a particular calculation. These early computers had no need for an operating system and were operated directly from the operator’s console by a computer programmer, who had immediate knowledge of the computer’s design. By the early 1950’s punched cards were introduced, allowing programs to be written and read directly from cards instead of using plugboards.

Second Generation Computers (1955-1965)

In the mid-1950’s, the transistor was introduced, creating a more reliable computer.

Computers were used primarily for scientific and engineering calculations and were programmed mainly in FORTRAN and assembly language. As computers became more reliable they also became more business oriented, although they were still very large and expensive. Because of the expense, the productivity of the system had to be maximized to ensure cost effectiveness. Job scheduling and the hiring of computer operators ensured that the computer was used effectively and crucial time was not wasted. Loading the compilers was a time-consuming process, as each compiler was kept on a magnetic tape which had to be manually mounted.

This became a problem particularly when there were multiple jobs to execute written in different languages (mainly Assembly or FORTRAN). Each card deck and tape had to be individually mounted, executed and then removed for each program. To combat this problem, the Batch System was developed. This meant that all the jobs were grouped into batches and read by one computer (usually an IBM 1401), then executed one after the other on the mainframe computer (usually an IBM 7094), eliminating the need to swap tapes or cards between programs.

The first operating system was designed by General Motors for the IBM 701. It was called the Input/Output System, and consisted of a small set of code that provided a common set of procedures used to access the input and output devices. It also allowed each program to access this code, and when a program finished, it accepted and loaded the next one. However, there was a need to improve the sharing of programs, which led to the development of the SOS (SHARE Operating System) in 1959. The SOS provided buffer management and supervision for I/O devices as well as support for programming in assembly language.

Around the same time as SOS was being developed, the first operating system to support programming in a high-level language was achieved. FMS (the Fortran Monitoring System) incorporated a translator for IBM’s FORTRAN language, which was widely used, as most programs were written in that language.

Third Generation Computers (1965-1980)

In the mid-1960’s IBM created the System/360, a series of software-compatible computers ranging in performance and price. The machines had the same architecture and instruction set, which allowed programs written for one machine to be executed on another.

The operating system required to run on this family of computers had to be able to work on all models, be backwards compatible and be able to run on both small and large systems. The software written to handle these different requirements was OS/360, which consisted of millions of lines of assembly language written by thousands of different programmers. It also contained thousands of bugs, but despite this the operating system satisfactorily fulfilled the requirements of most users. A major feature of the new operating system was the ability to implement multiprogramming.

By partitioning the memory into several pieces, programmers were able to use the CPU more effectively than ever before, as one job could be processed while another was waiting for I/O to finish. Spooling was another important feature implemented in third generation operating systems. Spooling (Simultaneous Peripheral Operation On-Line) was the ability to load a new program into an empty partition of memory when a previous job had finished. This technique meant that the IBM 1401 computer was no longer required to read the program from the magnetic tape. However, the time between the submission of a job and the return of its results had increased.

This led designers to the concept of time-sharing, which involved each user communicating with the computer through their own on-line terminal. The CPU could only be allocated to three terminals, with each job held in a partition of memory. Many time-sharing operating systems were introduced in the 1960’s, including MULTICS (Multiplexed Information and Computing Service). Developed by MIT, Bell Labs and General Electric, MULTICS was written almost completely in a high-level language, and is known as the first major operating system to have done so.

MULTICS examined many new concepts, including segmented memory, device independence, a hierarchical file system, I/O redirection, a powerful user interface and protection rings. The 1960’s also gave rise to the minicomputer, starting with the DEC PDP-1. Minicomputers presented the market with an affordable alternative to the large batch systems of that time, but had only a small amount of memory. The early operating systems of the minicomputers were input/output selectors; they provided an interactive user interface for a single user and ran only one program at a time. By the 1970’s, DEC had introduced a new family of minicomputers.

The PDP-11 series had three operating systems available: a simple single-user system (RT-11), a time-sharing system (RSTS) and a real-time system (RSX-11). RSX-11 was the most advanced operating system for the PDP-11 series. It supported a powerful command language and file system, memory management and multiprogramming of a number of tasks. Around the same time as DEC was implementing its minicomputers, two researchers, Ken Thompson and Dennis Ritchie, were developing a new operating system for the DEC PDP-7. Their aim was to create a new single-user operating system, and the first version was officially released in 1971.

This operating system, called UNIX, became very popular and is still used widely today.

Fourth Generation Computers (1980-1990)

By the 1980’s, technology had advanced a great deal from the days of the mainframe computers and vacuum tubes. With the introduction of Large Scale Integration (LSI) circuits and silicon chips consisting of thousands of transistors, computers reached a new level. Microcomputers were physically much like the minicomputers of the third generation; however, they were much cheaper, enabling individuals to use them, not just large companies and universities.

These personal computers required an operating system that was user friendly, so that people with little computer knowledge were able to use it. In 1981, IBM was releasing a 16-bit personal computer and required a more powerful operating system than the ones available at the time, so it turned to Microsoft to deliver one. The software, called the Microsoft Disk Operating System (MS-DOS), became the standard operating system for most personal computers of that era. By the mid-1980’s, networks of personal computers had increased a great deal, requiring a new type of operating system.

The OS had to be able to manage remote and local hardware and software, file sharing and protection, among other things. Two types of systems were introduced: the network operating system, in which users can copy files from one station to another, and the distributed operating system, in which the computer appears to be a uniprocessor system even though it is actually running programs and storing files in remote locations. One of the best-known systems for a distributed network is the Network File System (NFS), which was originally designed by Sun Microsystems for use on UNIX-based machines.

An important feature of NFS is its ability to support different types of computers. This allowed a machine running NFS to communicate with an IBM-compatible machine running MS-DOS, which was an important addition to network computing. In 1983, Microsoft Corporation introduced MSX-DOS, an operating system for MSX microcomputers that could run 8-bit Microsoft software, including the languages BASIC, COBOL-80 and FORTRAN-80, as well as Multiplan. 1984 saw the release of the Apple Macintosh, a low-cost workstation which evolved from early Alto computer designs. The Macintosh provided advanced graphics and high performance for its size and cost.

As the Macintosh was not compatible with other systems, it required its own operating system, which is how the Apple operating system was established. MINIX, based on the UNIX design, was also a popular choice for the Macintosh. As computer processors got faster, operating systems also had to improve in order to take advantage of this progression. Microsoft released version 2 of MS-DOS, which adopted many of the features that made UNIX so popular, although MS-DOS was designed to be smaller than the UNIX operating system, making it ideal for personal computers.

Modern Operating Systems

The past nine years have seen many advances in computers and their operating systems. Processors continue to increase in speed, each requiring an operating system to handle the new developments. Microsoft Corporation has dominated the IBM-compatible world, Windows being the standard operating system for the majority of personal computers. Now, as computing and information technology move further towards the Internet and virtual computing, so too must operating systems. In 1992, Windows for Workgroups 3.1 was introduced, extending the previous versions.

It allowed the sending of electronic mail and provided advanced networking capabilities so that it could be used as a client on an existing local area network. This was only one stage in the vast evolution of the world's most popular operating system, the most recent versions being Windows NT and Windows 98, the latter a fully Internet-integrated operating system. Windows, however, is not the only operating system in use today. Others, such as UNIX, the Apple operating system, and OS/2 Warp, have also had an impact, each new version more advanced and more user friendly than the last.

Virtual Reality – What it is and How it Works

Imagine being able to point into the sky and fly. Or perhaps walk through space and connect molecules together. These are some of the dreams that have come with the invention of virtual reality. With the introduction of computers, numerous applications have been enhanced or created. The newest technology being tapped is that of artificial reality, or “virtual reality” (VR). When Morton Heilig first got a patent for his “Sensorama Simulator” in 1962, he had no idea that 30 years later people would still be trying to simulate reality, and that they would be doing it so effectively.

Jaron Lanier first coined the phrase “virtual reality” around 1989, and it has stuck ever since. Unfortunately, this catchy name has caused people to dream up incredible uses for this technology including using it as a sort of drug. This became evident when, among other people, Timothy Leary became interested in VR. This has also worried some of the researchers who are trying to create very real applications for medical, space, physical, chemical, and entertainment uses among other things.

In order to create this alternate reality, however, you need to find ways to create the illusion of reality with a piece of machinery known as the computer. This is done with several computer-user interfaces used to simulate the senses. Among these are stereoscopic glasses to make the simulated world look real, a 3D auditory display to give depth to sound, sensor-lined gloves to simulate tactile feedback, and head-trackers to follow the orientation of the head. Since the technology is fairly young, these interfaces have not been perfected, making for a somewhat cartoonish simulated reality.

Stereoscopic vision is probably the most important feature of VR because in real life, people rely mainly on vision to get places and do things. The eyes are approximately 6.5 centimeters apart, and allow you to have a full-colour, three-dimensional view of the world. Stereoscopy, in itself, is not a very new idea, but the new twist is trying to generate completely new images in real time. In the 1830s, Sir Charles Wheatstone invented the first stereoscope, and the same basic principle is used in today's head-mounted displays.

Presenting different views to each eye gives the illusion of three dimensions. The glasses that are used today work by using what is called an “electronic shutter”. The lenses of the glasses interleave the left-eye and right-eye views every thirtieth of a second. The shutters selectively block and admit views of the screen in sync with the interleaving, allowing the proper views to go into each eye. The problem with this method, though, is that you have to wear special glasses.
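
However the two views reach the eyes, they have to be generated in the first place. A minimal sketch of that step is given below in Python; it is not tied to any particular VR system, and the function name, positions, and vectors are invented for illustration. It simply offsets a virtual camera left and right by half the 6.5-centimeter eye separation mentioned earlier, so the scene can be rendered once per eye.

IPD = 0.065  # interpupillary distance in meters (~6.5 cm, as cited above)

def stereo_eye_positions(head_pos, right_vector, ipd=IPD):
    """Return (left_eye, right_eye) camera positions for a head at head_pos.

    head_pos and right_vector are (x, y, z) tuples; right_vector should be a
    unit vector pointing to the viewer's right.
    """
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(head_pos, right_vector))
    right = tuple(p + half * r for p, r in zip(head_pos, right_vector))
    return left, right

# Render the scene from each position and show each image to the matching
# eye, for example by alternating frames in sync with shutter glasses.
left_eye, right_eye = stereo_eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))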

Most VR researchers use complicated headsets, but it is possible to create stereoscopic three-dimensional images without them. One such way is through the use of lenticular lenses. These lenses, known since Herbert Ives experimented with them in 1930, allow one to take two images, cut them into thin vertical slices, and interleave them in precise order (also called multiplexing), then put cylinder-shaped lenses in front of them so that when you look at them directly, the images correspond with each eye.

This illusion of depth is based on what is called binocular parallax. Another problem that is solved is that which occurs when one turns their head. Nearby objects appear to move more than distant objects. This is called motion parallax. Lenticular screens can show users the proper stereo images even when they move their heads, as long as a head-motion sensor is used to adjust the effect. Sound is another important part of daily life, and thus must be simulated well in order to create artificial reality.

Many scientists, including Dr. Elizabeth Wenzel, a researcher at NASA, are convinced that 3D audio will be useful for scientific visualization and space applications in ways that 3D video is somewhat limited. She has come up with an interesting use for virtual sound that would allow an astronaut to hear the state of their oxygen, or have an acoustical beacon that directs one to a trouble spot on a satellite. The “Convolvotron” is one such device that simulates the location of up to four audio channels within a sort of imaginary sphere surrounding the listener.

This device takes into account that each person has specialized auditory signal processing, and personalizes what each person hears. Using a position sensor from Polhemus, another VR research company, it is possible to move the position of a sound by simply moving a small cube around in your hand. The key to the Convolvotron is something called the “Head-Related Transfer Function (HRTF)”, which is a set of mathematically modelable responses that our ears impose on the signals they get from the air.

In order to develop the HRTF, researchers had to sit people in an anechoic room surrounded by 144 different speakers and measure the effects of hearing precise sounds from every direction, using tiny microphone probes placed near the eardrums of the listener. The way in which those microphones distorted the sound from all directions was a specific model of the way that person's ears impose a complex signal on incoming sound waves in order to encode it in their spatial environment.

The map of the results is then converted to numbers and a computer performs about 300 million operations per second (MIPS) to create a numerical model based on the HRTF which makes it possible to reconfigure any sound source so that it appears to be coming from any number of different points within the acoustic sphere. This portion of a VR system can really enhance the visual and tactile responses. Imagine hearing the sound of footsteps behind you in a dark alley late at night. That is how important 3D sound really is.
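
The core signal-processing step behind this, filtering a dry sound through a left-ear and a right-ear impulse response measured for the desired direction, can be sketched in a few lines of Python. The HRTF arrays below are random placeholders rather than measured data, and the function name is invented; a real system would look the responses up by azimuth and elevation.

import numpy as np

def spatialize(mono, hrtf_left, hrtf_right):
    """Return (left, right) channels of a mono signal filtered through an HRTF pair."""
    left = np.convolve(mono, hrtf_left)
    right = np.convolve(mono, hrtf_right)
    return left, right

mono = np.random.randn(1000)            # stand-in for a dry sound buffer
hrtf_left = np.random.randn(128) * 0.1  # placeholder impulse responses
hrtf_right = np.random.randn(128) * 0.1
left_channel, right_channel = spatialize(mono, hrtf_left, hrtf_right)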

The third important sense that we use in everyday life is that of touch. There is no way of avoiding the feeling of touch, and thus this is one of the technologies that is being researched most feverishly. The two main types of feedback being researched are force-reflection feedback and tactile feedback. Force feedback devices exert a force against the user when they try to push something in a virtual world that is ‘heavy’. Tactile feedback is the sensation of feeling an object, such as the texture of sandpaper. Both are equally important in the development of VR.

Currently, the most successful development in force-reflective feedback is the Argonne Remote Manipulator (ARM). It consists of a group of articulated joints, encoiled by long bunches of electrical cables. The ARM allows for six degrees of movement (position and orientation) to give a true feel of movement. Suspended from the ceiling and connected by a wire to the computer, this machine grants a user the power to reach out and manipulate 3D objects that are not real. As is the case at the University of North Carolina, it is possible to “dock molecules” using VR.

Simulating molecular forces and translating them into physical forces allows the ARM to push back at the user if he tries to dock the molecules incorrectly. Tactile feedback is just as important as force feedback in allowing the user to “feel” computer-generated objects. There are several methods for providing tactile feedback. Some of these include inflating air bladders in a glove, arrays of tiny pins moved by shape-memory wires, and even fingertip piezoelectric vibrotactile actuators. The latter method uses tiny crystals that vibrate when an electric current stimulates them.

This design has not really taken off, however, but the other two methods are being more actively researched. According to a report called “Tactile Sensing in Humans and Robots,” distortions inside the skin cause mechanosensitive nerve terminals to respond with electrical impulses. Each impulse is approximately 50 to 100 mV in magnitude and 1 ms in duration. However, the frequency of the impulses (up to a maximum of 500/s) depends on the intensity of the combined stresses in the area near the responsive receptor.

In other words, the sensors which respond to pressure in the skin are all basically the same, but can convey a message over and over to give the feeling of pressure. Therefore, any tactile response system must operate at a frequency of about 500 Hz in order to simulate the tactile accuracy of the human hand. Right now, however, the gloves being used serve as input devices. One such device is the DataGlove. This well-fitting glove has bundles of optic fibers attached at the knuckles and joints. Light is passed through these optic fibers at one end of the glove.

When a finger is bent, the fibers also bend, and the amount of light that gets through the fiber can be measured to determine how the user's hand is positioned. The type of glove that is wanted is one that can be used as both an input and an output device. Jim Hennequin has worked on an “Air Muscle” that inflates and deflates parts of a glove to allow the feeling of various kinds of pressure. Unfortunately, at this time, the feel it creates is somewhat crude. The company TiNi is exploring the possibility of using “shape memory alloys” to create tactile response devices.

TiNi uses an alloy called nitinol as the basis for a small grid of what look like ballpoint-pen tips. Nitinol can take the shape of whatever it is cast in, and can be reshaped. Then, when it is electrically stimulated, the alloy returns to its original cast shape. The hope is that in the future some of these techniques will be used to form a complete body suit that can simulate tactile sensation. Being able to determine where you are in the virtual world means you need orientation and position trackers to follow the movements of the head and other parts of the body that are interfacing with the computer.

Many companies, including Polhemus Research and Shooting Star Technology, have developed successful methods of tracking six degrees of freedom. Six degrees of freedom refers to the combination of a Cartesian coordinate system for position and an orientation system with rotation angles called roll, pitch, and yaw. The ADL-1 from Shooting Star is a sophisticated and inexpensive (relative to other trackers) 6D tracking system which is mounted on the head and converts position and orientation information into a readable form for the computer. The machine calculates head/object position through the use of a lightweight, multiply-jointed arm.

Sensors mounted on this arm measure the angles of the joints. The computer-based control unit uses these angles to compute position-orientation information so that the user can manipulate a virtual world. The joint angle transducers use conductive plastic potentiometers and ball bearings, so this machine is heavy duty. Time lag is eliminated by the direct-reading transducers and a high-speed microprocessor, allowing for a maximum update rate of approximately 300 measurements per second.
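
This is not the ADL-1's actual algorithm, only the general idea it relies on: given measured joint angles and known link lengths, the position of the end of the arm (and hence the tracked head) follows from simple forward kinematics. The Python sketch below uses an assumed planar two-joint arm and made-up angles and lengths to keep the example short.

import math

def forward_kinematics(angles_deg, link_lengths):
    """Return the (x, y) position of the tip of a planar chain of revolute joints."""
    x = y = 0.0
    total_angle = 0.0
    for angle, length in zip(angles_deg, link_lengths):
        total_angle += math.radians(angle)  # each joint rotates relative to the last
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

print(forward_kinematics([30.0, 45.0], [0.4, 0.3]))  # link lengths in meters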

Another system, developed by Ascension Technology, does basically the same thing as the ADL-1, but the sensor is in the form of a small cube which can fit in the user's hand or in a computer mouse specially developed to encase it. The Ascension Bird is the first system that generates and senses DC magnetic fields. The Ascension Bird first measures the earth's magnetic field and then the steady magnetic field generated by the transmitter. The earth's field is then subtracted from the total, which yields true position and orientation measurements.
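
The subtraction itself is nothing more than removing a previously measured baseline vector from the current reading, as in the toy Python illustration below. The numbers are invented; in the real device the remaining transmitter field is what gets turned into position and orientation.

def transmitter_field(total_reading, earth_reading):
    """Subtract the static earth-field vector from the total measured field."""
    return tuple(t - e for t, e in zip(total_reading, earth_reading))

earth = (22.0, 5.0, -41.0)   # field measured with the transmitter off
total = (25.5, 9.0, -38.0)   # field measured with the transmitter on
print(transmitter_field(total, earth))  # -> (3.5, 4.0, 3.0), the transmitter's part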

The existing electromagnetic systems transmit a rapidly varying AC field. As this field varies, eddy currents are induced in nearby metals, which causes the metals to become electromagnets that distort the measurements. The Ascension Bird uses a steady DC magnetic field which does not create eddy currents. The update rate of the Bird is 100 measurements per second. However, the Bird has a small lag of about 1/60th of a second, which is noticeable. Researchers have also thought about supporting the other senses, such as taste and smell, but have decided that it is not feasible to do so.

Smell would be possible, and would enhance reality, but only a limited spectrum of smells could be simulated. Taste is basically a disgusting premise from most standpoints. It might be useful for entertainment purposes, but has almost no purpose for researchers or developers. For one thing, people would have to put some kind of receptors in their mouths, and it would be very unsanitary. Thus, the main senses that are relied on in a virtual reality are sight, touch, and hearing.

Applications of Virtual Reality

Virtual reality has promise for nearly every industry, ranging from architecture and design to movies and entertainment, but the real industry to gain from this technology is science in general. The money that can be saved by examining the feasibility of experiments in an artificial world before they are done could be great, and the money saved on the energy used to operate such things as wind tunnels would be quite large. The best example of how VR can help science is that of the “molecular docking” experiments being done in Chapel Hill, North Carolina.

Scientists at the University of North Carolina have developed a system that simulates the bonding of molecules. But instead of using complicated formulas to determine bonding energy, or illegible stick drawings, the potential chemist can don a high-tech head-mounted display, attach themselves to an artificial arm from the ceiling, and actually push the molecules together to determine whether or not they can be connected. The chemical bonding process takes on a sort of puzzle-like quality, in which even children could learn to form bonds using a trial-and-error method.

Architectural designers have also found that VR can be useful in visualizing what their buildings will look like when they are put together. Often, using a 2D diagram to represent a 3D home is confusing, and the people who fund large projects would like to be able to see what they are paying for before it is constructed. A fascinating example would be designing an elementary school. Designers could walk through the school from a child's perspective to gain insight into how high a water fountain is, or how narrow the halls are.

Product designers could also use VR in similar ways to test their products. NASA and other aerospace facilities are concentrating research on such things as human factors engineering, virtual prototyping of buildings and military devices, aerodynamic analysis, flight simulation, 3D data visualization, satellite position fixing, and planetary exploration simulations. Such things as virtual wind tunnels have been in development for a couple of years and could save money and energy for aerospace companies.

Medical researchers have been using VR techniques to synthesize diagnostic images of a patient's body and to do “predictive” modeling of radiation treatment using images created by ultrasound, magnetic resonance imaging, and X-ray. A radiation therapist in a virtual world could view and expose a tumour at any angle and then model specific doses and configurations of radiation beams to aim at the tumour more effectively. Since radiation destroys human tissue easily, there is no allowance for error.

Also, doctors could use “virtual cadavers” to practice rare operations which are tough to perform. This is an excellent use because one could perform the operation over and over without the worry of hurting any human life. However, this sort of practice may have its limitations because it is only a virtual world. As well, at this time, the computer-user interfaces are not well enough developed, and it is estimated that it will take 5 to 10 years to develop this technology. In Japan, a company called Matsushita Electric Works Ltd. is using VR to sell its products.

They employ a VPL Research head-mounted display linked to a high-powered computer to help prospective customers design their own kitchens. Being able to see what your kitchen will look like before you actually refurnish could help save you from costly mistakes in the future. The entertainment industry stands to gain a lot from VR. With the video game revolution of bigger and better games coming out all the time, this could be the biggest breakthrough ever. It would be fantastic to have sword fights which actually feel real.

As well, virtual movies (also called vroomies) are being developed which allow the viewer to interact with the characters in the movie. Universal Studios, among others, is developing a virtual reality amusement park which will incorporate these games and vroomies. As it stands, almost every industry has something to gain from VR, and in the years to come, it appears that the possibilities are endless.

The Future of Virtual Reality

In the coming years, as more research is done, we are bound to see VR become a mainstay in our homes and at work.

As computers become faster, they will be able to create more realistic graphic images to simulate reality better. As well, new interfaces will be developed which will simulate force and tactile feedback more effectively, enhancing artificial reality that much more. This is the birth of a new technology, and it will be interesting to see how it develops in the years to come. However, it may take longer than people think for it to come into the mainstream. Millions of dollars must be spent on research, and only select industries can afford to pay for this. Hopefully, it will be sooner rather than later.

It is very possible that in the future we will be communicating with virtual phones. Nippon Telegraph and Telephone (NTT) in Japan is developing a system which will allow one person to see a 3D image of the other using VR techniques. In the future, it is conceivable that businessmen may hold conferences in a virtual meeting hall when they are actually at opposite ends of the world. NTT is developing a new method of telephone transmission using fiber optics which will allow much larger amounts of information to be passed through the phone lines.

This system is called the Integrated Services Digital Network (ISDN), and it will help allow VR to be used in conjunction with other communication methods. Right now, VR is very expensive to purchase, with head-mounted displays costing anywhere from about $20,000 to $1,000,000 for NASA's Super Cockpit. In the future, VR will be available to the end user at home for under $1000 and will be of better quality than that being developed today. The support for it will be about as good as it is currently for plain computers, and it is possible that VR could become a very useful teaching tool.

Mind and Machine: The Essay

Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise up to meet the free-market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged. Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness.

Exhaustive debate on consciousness and the possibility of consciousness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness. In John Searle's collection of lectures, Minds, Brains and Science, the arguments centering around the mind-body problem alone are sufficient to convince a reasonable person that there is no way science will ever unravel the mysteries of consciousness. Key to Searle's analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses.

Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, present technology constrains this achievement. The Weak AI Theorists (WATs), almost converse to the SATs, believe that if a machine performs functions that resemble a human's, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle's refutations of these respective theorists' propositions to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future.

Strong AI Thesis

The Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, etc., machines will be able to think, if you believe this proposition. Proposition two, in essence, relegates the human mind to the software bin.

Proponents of this proposition believe that humans just happen to have biological computers that run “wetware” as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced that, through context-input manipulation, a machine is intelligent, then it is. Proposition four is where the ends will meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions. Thus, if we replicate the computational power of the mind, we will then understand it.

Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to “understand” syntax, but not the semantics, or meaning, communicated thereby. Essentially, he makes his point by citing the famous “Chinese Room Thought Experiment.” It is here he demonstrates that a “computer” (a non-Chinese speaker, a book of rules, and the Chinese symbols) can fool a native speaker, yet have no idea what it is saying. Proving that entities don't have to understand what they are processing in order to appear to understand refutes proposition one.

Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory. A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation. The Chessmaster 4000 example is also an adequate refutation of Professor Simon's fourth proposition: “you can understand a process if you can reproduce it.”

Just because the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn't mean that the computer is indeed intelligent.

Weak AI Thesis

There are five basic propositions that fall in the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions. The second WAT proposition states that if a machine's output, when compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be so.

Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. By solving problems based on information gathered from their respective surroundings and memory and by obeying rules of logic, it is proven that machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred. Therefore, the mind is just a big program (“wetware”). The fifth and final WAT proposition states that, since the mind appears to be “wetware”, dualism is valid.

Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically believed the state-of-the-art technology of the time to have elements of intelligence and consciousness. An example of this is the telegraph system of the latter part of the last century. People at the time saw correlations between the brain and the telegraph network itself. Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time doesn't mean that it has any concept of counting or the meaning of time.

Defining the nature of rule-following is where the weakness lies with the fourth proposition. Proposition four fails, again, to account for the semantic nature of symbol manipulation. Referring to the Chinese Room Thought Experiment best refutes this argument. By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactical information with seemingly semantic output, but again, is it really cognizant?

We, through Searle's arguments, have amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks. Technologically, there is hope for some of the proponents of the Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines. Fuzzy logic was created as an extension of Boolean logic, designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1964, fuzzy logic enabled the modelling of the uncertainties of natural language.

Dr. Zadeh regards fuzzy theory not as a single theory, but as “fuzzification”, or the generalization of specific theories from discrete forms to continuous (fuzzy) forms. The meat and potatoes of fuzzy logic is in the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp. Conventional Boolean logical processes deal well with the binary nature of lights. They are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in.

Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data. Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. These systems take collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow this simple form: “if x is low and y is high, then z is medium”. Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on); z is the output of the inference, based upon the degree of fuzzy logic application desired.
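
As a small illustration of that rule, the Python sketch below evaluates “if x is low and y is high, then z is medium” with simple straight-line membership functions and the usual min operator for a fuzzy AND. The membership shapes and the 0-to-1 ranges are assumptions chosen for the example, not anything prescribed here.

def low(x):
    """Membership of x in 'low': 1 at x = 0, falling to 0 at x = 0.5."""
    return max(0.0, min(1.0, 1.0 - x / 0.5))

def high(y):
    """Membership of y in 'high': 0 at y = 0.5, rising to 1 at y = 1."""
    return max(0.0, min(1.0, (y - 0.5) / 0.5))

def rule_low_and_high(x, y):
    """Degree to which 'z is medium' fires for the rule: if x is low and y is high."""
    return min(low(x), high(y))  # min is the usual fuzzy AND

print(rule_low_and_high(x=0.2, y=0.8))  # -> 0.6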

It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The set of rules in a fuzzy logic expert system is known as the rulebase. The fuzzy logic inference process follows three firm steps and sometimes an optional fourth:

1. Fuzzification is the process by which the membership functions determined for the input variables are applied to their actual values, so that the truthfulness of each rule's premise may be established.

2. Under inference, truth values for each rule's premise are calculated and then applied to the output portion of each rule.

3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.

4. Defuzzification is the optional process by which fuzzy data is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as a potentiometer or lux value).

A new form of information theory is Possibility Theory. This theory is similar to, but independent of, fuzzy theory.

By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data that is available, the better the possibilities that can be determined. The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future. Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain “intelligence” as the wheels of technological innovation turn.

A neural network is loosely based upon the design of the brain itself. Though the brain is impossibly intricate and complex, it has one reasonably well-understood feature: its networking of neurons. The neuron is the foundation of the brain itself; each one manifests up to 50,000 connections to other neurons. Multiply that by 100 billion, and one begins to grasp the magnitude of the brain's computational ability. A neural network is a network of a multitude of simple processors, each of which has a small amount of memory. These processors are connected by unidirectional data buses and process only information addressed to them.

A centralized processor acts as a traffic cop for data, which is parcelled out to the neural network and retrieved in its digested form. Logically, the more processors connected in the neural net, the more powerful the system. Like the human brain, neural networks are designed to acquire data through experience, or learning. By providing examples to a neural network expert system, generalizations are made much as they are by children learning about items (such as chairs, dogs, etc.). Modern neural network systems also offer greatly enhanced computational ability due to the parallelism of their circuitry.
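
The “learning from examples” just described can be shown in miniature with a single artificial neuron adjusting its weights from labelled samples, as in the Python sketch below. Real neural networks use many such units and backpropagation; the data, learning rate, and epoch count here are invented for the illustration.

def train_neuron(samples, labels, epochs=20, learning_rate=0.1):
    """Train a two-input perceptron on (x1, x2) -> label examples; return (weights, bias)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            output = 1.0 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0.0
            error = target - output          # how wrong this example was
            w[0] += learning_rate * error * x1
            w[1] += learning_rate * error * x2
            b += learning_rate * error
    return w, b

# Learn the logical AND function from its four examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
print(train_neuron(samples, labels))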

They have also proven themselves in fields such as mapping, where minor errors are tolerable, there is a lot of example data, and rules are generally hard to nail down. Educating neural networks begins by programming a “backpropagation of error”, which is the foundational operating system that defines the inputs and outputs of the system. The best example I can cite is the Windows operating system from Microsoft. Of course, personal computers don't learn by example, but Windows-based software will not run outside (or in the absence) of Windows.

One negative feature of educating neural networks by “backpropagation of error” is a phenomenon known as “overfitting”. “Overfitting” errors occur when conflicting information is memorized, so the neural network exhibits a degraded state of function as a result. At worst, the expert system may lock up, but it is more common to see an impeded state of operation. By running programs in the operating shell that review data against a database, these problems have been minimized. In the real world, we are seeing an increasing prevalence of neural networks.

To fully realize the potential benefits of neural networks in our lives, research must be intense and global in nature. In the course of my research for this essay, I was privy to several institutions and organizations dedicated to the collaborative development of neural network expert systems. To be a success, research and development of neural networking must address societal problems of high interest and intrigue. Motivating the talents of the computing industry will be the only way we will fully realize the benefits and potential power of neural networks.

There would be no support, naturally, if there was no short-term progress. Research and development of neural networks must be intensive enough to show results before interest wanes. New technology must be developed through basic research to enhance the capabilities of neural net expert systems. It is generally acknowledged that the future of neural networks depends on overcoming many technological challenges, such as data cross-talk (caused by radio frequency generation of rapid data transfer) and limited data bandwidth.

Real-world applications of these “intelligent” neural network expert systems include, according to the Artificial Intelligence Center, Knowbots/Infobots and intelligent Help desks. These are primarily easily accessible entities that will host a wealth of data and advice for prospective users. Autonomous vehicles are another future application of intelligent neural networks. There may come a time in the future where planes will fly themselves and taxis will deliver passengers without human intervention. Translation is a wonderful possibility of these expert systems.

Imagine the ability to have a device translate your English spoken words into Mandarin Chinese! This goes beyond simple languages and syntactical manipulation. Cultural gulfs in language would also be the focus of such devices. Through the course of Mind and Machine, we have established that artificial intelligence’s function will not be to replicate the conscious state of man, but to act as an auxiliary to him. Proponents of Strong AI Thesis and Weak AI Thesis may hold out, but the inevitable will manifest itself in the end.

It may be easy to ridicule those proponents, but I submit that in their research into making conscious machines, they are doing the field a favor in the innovations and discoveries they make. In conclusion, technology will prevail in the field of expert systems only if the philosophy behind them is clear and strong. We should not strive to make machines that may supplant our causal powers, but rather ones that complement them. To me, these expert systems will not replace man – they shouldn’t. We will see a future where we shall increasingly find ourselves working beside intelligent systems.

Futurism: America Beyond 2001

The bridge to the 21st century is under construction, and the only way we're going to be able to build it quickly and correctly is if we understand the technological challenges ahead of us. If we ignore those challenges, then we'll likely end up in the river. These are market-driven challenges for industry, and anytime the marketplace challenges us, that's a tremendous opportunity for business growth and profitability. Global market forces are opening up these new opportunities, and they're driving the development of new technology and products. There are many new innovations and a great increase in the development of new technology.

Richard Alm agrees and states: “Few Americans would deny today's technology explosion. Even in this era of supercomputers, space travel, and cloning, though, technology isn't always seen as a boon” (20). While thinking of what will come, an allusion to a literary work comes to mind, describing a Rip Van Winkle of the future who awakes to “the whoosh of trains being propelled through the air by superconducting magnets” (Elmer-DeWitt 42). This is what is to come very soon with the push of technology, along with such innovations as “7-ft. TV images as crisp as 35-mm slides and enticing new food products concocted in the lab” (42).

The United States has missed several opportunities to stay ahead of technologically advanced countries like Japan. One of these was America's chance to patent “hydrogen-storing alloys that are used in tiny batteries for notebook computers” (Black 168). After reflecting on these thoughts, Philip Elmer-DeWitt says that if our fictional character Rip Van Winkle were to “read the labels on those futuristic creations, he might also discover the outcome of America's struggle to remain the leading technological superpower. Sad to say, a majority of those products might well bear the words MADE IN JAPAN” (42).

Advantages and disadvantages come about from being involved in the global market. The “expanding world markets are a key driving force for the 21st century economy” (Mandel 67). Michael J. Mandel states that “the severe slump in Asia points [out] the vulnerabilities of the global market place, but the long-term trends of fast-rising trade and rising world incomes still remain in place” (67). Elmer-DeWitt continues by saying that this is “the worrisome analysis of U.S. experts in government, industry and academia” (42). He finishes up with the thought that:

“Virtually every week seems to bring fresh evidence that Japan is catching up with the U.S., often surpassing it in creating the cutting-edge products that long were the turf of U.S. firms” (42). The changes that have occurred in America are becoming ever more clear and will be even more evident in the near future, and Tom Morganthau agrees with the statement that “The percentage of Latinos, African-Americans and Asians will jump, and whites could become a minority as early as the 2050s” (57). He also states, “there are signs that Americans are adapting to ethnic diversity and there are forces at work that will tend to obscure it” (58).

“The intermarriage rate between blacks and whites, while still small, is rising, and the number of Latinos marrying across ethnic lines is increasing as well” (58). Many changes in America's culture are coming about with the rise of the new generations, and some traditions and cultural habits may become lost in the mixture of society in America. New ways of life will come about in this new millennium. “We are conscious of the desperate need for new ways for people to make money in our society, particularly ways that create jobs” (Gordon 17).

The market changes of the coming years become more apparent by analyzing the past. There were missed opportunities, such as when “U.S. electronics firms licensed away their breakthroughs in televisions and VCRs, materials companies divested theirs to foreign competitors” (Black 168). Such market changes could easily bring the market to a screeching halt, and Americans would become completely dependent upon other countries. Some examples of new markets that will be coming about include “microelectronics… video imaging… superconductivity… [and] biotechnology” (Elmer-DeWitt 43).

Advances in medicine are becoming evidently clearer. “The burgeoning field known as tissue engineering didn't even exist 15 years ago” (Cowley 66). Cowley states, “Today its pioneers are finding that almost any biological material can be coaxed from a culture dish” (66). “The information revolution will continue to boost productivity across the economy” (Mandel 63). Michael Mandel also states that “over the next 10 years, such information-dependent industries as finance, media and wholesale and retail trade will change the most” (63).

“There will also be a surge of major technology breakthroughs, including biotechnology, [that] will begin to create entire new industries over the next 10 years” (63). Also, an increase in “globalization will simultaneously provide much larger markets and tough foreign competitors. The result: companies will have even more incentive while cutting costs” (63). “The coming decades will show a substantial growth much faster than most economists expect—perhaps 3% or more per year,” says Michael Mandel (63). “Inflationary surges and large budget deficits will become less likely” (63).

Mandel states: “Despite all the scare talk, the next generation will enjoy a rising standard of living, even while baby boomers are able to retire comfortably. Countries that follow policies that encourage innovation, free trade, and open financial systems will enjoy a competitive edge. Businesses that master the new technologies will be able to count on better profits and bigger market share” (63). The downside to this is that “major dislocations and uncertainty for workers and businesses” “will be inevitable as new technologies are adopted” (63).

Another negative effect of the economic drive is that “technology shocks will increase economic and financial volatility, both in the U.S. and globally” (63). Automobiles are going to be drastically different if car manufacturers like Ford and Daimler-Chrysler have their way. They first must jump a few hurdles, which include the U.S. government. George Eads states that “it is crucial that policymakers and politicians looking for technological fixes to problems understand the difficulty of totally reinventing the automobile” (28).

The OTA report that was issued addresses the concern that “it is more realistic to be fairly conservative about when many of the advanced technologies will enter the marketplace” (31). What parents need to be aware of most is whether the children they are bringing into this world are ready for the 21st century. Emily Sachar believes “the learning process is state-of-the art as well; instead of memorizing facts and mastering rote drills, students [must] learn to solve many real life problems” (124).

One must still remember that “the basics reading, writing, and arithmetic are as important as the quest for technological literacy” (124). The Americans of the 21st century are today's children. Their values are being shaped by mothers who work outside the home, neighbors who speak different languages, and teachers who preach about the environment. Their destinations are being determined by the amount of money we set aside for their college education and for our own retirement. They will live in a world quite different from ours. A statement that raises an interesting point is one made by Jeannye Thornton:

“When the clock strikes 12 the night of December 31, 1999, revelers all over the world will hail the new millennium. Only trouble is, they'll be doing their partying a year early. By decree of the Royal Greenwich Observatory in Cambridge, England, the first New Year's Eve of the third millennium falls on January 1, 2001” (14). As the economy evolves, it takes less and less time for new products to spread through the population, and a parent will have even less time to help their children grow up into a more mature attitude and live in reality, or should it be virtual reality.

Technology Changing the Workforce

Technology and social change have gone hand in hand with the advancement of the workforce within the last decade. Thanks to new technological breakthroughs emerging on a regular basis, the way we view employment has changed drastically compared to that of the years before us. Dating back to the 1400s, Johannes Gutenberg revolutionized the world as we know it today by developing the printing press. Today, we take such things for granted, but it is writing that makes it possible to spread knowledge, communication, and ideas over such a wide body of population. With the amazing development of print, other inventions began emerging.

Thanks to some amazing innovators, the radio, television, telephone, and now the Internet have all been established. Not only have these inventions altered our personal lives, but they have changed the way the job industry has been run for years. However, probably the biggest change these inventions have brought to our society is the ability to earn an education. A college degree is almost a necessity in today's workforce. Today's technologically advanced economy desperately needs those who are trained in specialized areas, ranging from analyzing molecular genetic information to programming a database for a large company.

Once there was a time when steel mills and assembly lines ruled the economy. Poor, uneducated men with amazing work ethics ruled the workforce. These men, and women, worked 60-hour work weeks to put their children through college so they would not have to suffer as they did. Those children are reaping the benefits today, and times have changed drastically. In Reich's book, he mentions that there are two types of winners. The first set includes those who possess the skills, character traits, and desire to satisfy needs and wants in the ever-changing economy.

The second category of winners is determined by companies and other organizations that develop a mass of individuals who are intelligent enough to work within nimble organizations, ultimately meeting market demands. Reich also claims there are losers in the new Digital Age economy as well. These are those who are not as capable, lack execution, and have weak performance compared with the “winners.” Reich's theory is based around “symbolic analysts”: individuals who are capable of prospering in the new economy.

These people, who are often creative and think independently, are responsible for developing new ideas, manipulating and analyzing information, and implementing new strategies. Perfect careers for these “symbolic analysts” range from lawyers to engineers. In this situation, Reich shows just how the development of employees who are “symbolic analysts” and the advancement of technology have turned our economy “weightless”. Anything and everything associated with our economy has turned into information, numbers, knowledge, and skills.

People are making higher and higher salaries now not for what they can physically produce, but for what they can mentally achieve. Because of this absolute change in economic personality, the job market has changed as well. The need for a developed mind is much greater than the need for a developed body. Education is now more important than previous work experience. At one time, a country's gross domestic product (GDP) only factored in automobiles, steel, iron, rubber, cotton, textiles, and other touchable objects. Now, GDP is measured in anything “weightless.”

College classes, Internet purchases, and phone calls are now factored in. As aforementioned, there is a second group of winners other than the “symbolic analysts.” This second group is companies which are capable of altering themselves at a blinding pace to keep up with the ever-changing market shifts. Today's companies are not like the traditional assembly lines, which turned out routine products over and over. Companies now alter their products to ensure absolute customer satisfaction. Henry Ford's plan involved making the same car at a decent pace.

For that time, it was a phenomenal idea and revolutionized the way factory work was done for decades. However, in today's Corporate America, companies like Dell are becoming more and more common. Dell provides its customers with hundreds of options to personalize their ideal computer. Because of this uncanny customer support, Dell has taken over the computer industry. Other competitors have gone out of business because they cannot compete. Dell is a “winner” simply because it is able to do something very few of its peers can: keep up with the lightning-fast pace of the changes in customer needs and wants.

Because of companies like Dell, organizations are forced to hire a workforce which is capable of producing a wide variety of products in the shortest amount of time. Efficiency is the key to success in today's competitive corporate world. Reich's model of “winners” and “losers” is extremely accurate when comparing successful companies to those that go bankrupt. Companies which cannot handle the ever-changing world of technology will be left in the dust. Corporations which cannot meet all the customer demands, as different as they may be, will never profit.

Technology And The Stock Market

The purpose of this research paper is to prove that technology has been good for the stock market. Thanks to technology, there are now more traders than ever because of the ease of trading online with firms such as Auditrade and Ameritrade. There are also more stocks that are doing well because they are in the technology field. The New York Stock Exchange and NASDAQ have both benefited from the recent technological movement. The NYSE says they “are dedicated to maintaining the most efficient and technologically advanced marketplace in the world.

The key to that leadership has been the state-of-the-art technology and systems development. Technology serves to support and enhance the human judgement at point-of-sale.” NASDAQ, the world's first fully electronic stock market, started trading on February 8th, 1971. Today, it is the fastest growing stock market in the United States. It also ranks second among the world's securities markets in terms of dollar value. By constantly evolving to meet the changing needs of investors and public companies, NASDAQ has achieved more than almost any other market in a shorter period of time.

Technology has also helped investors buy stocks in other markets. Markets used to open at standard local times. This would cause an American trader to sleep through the majority of a Japanese trading day. With more online and after-hours trading, investors have more access to markets, so American traders can still trade Japanese stocks. This is also helped by an expansion of most markets' trading hours. After-hours trading is available from most online trading firms. For investing specialists, technology provides the operational capability for handling more stocks and greatly increased volumes of trading.

Specialists can follow additional sources of market information, and multiple trading and post-trade functions, all on “one screen” at work or at home. They are also given interfaces to “upstairs” risk-management systems. They also have the flexibility to rearrange their physical workspaces, terminals, and functional activities. Floor brokers are helped by support for an industry-wide effort to compare buy/sell contracts for accuracy shortly after the trade. They are also given flexibility in establishing working relationships using the new wireless voice headsets and hand-held data terminals.

The ability to provide new and enhanced information services to their trading desks and institutional customers is provided. They have a comprehensive order-management system that systematizes and tracks all outstanding orders. Technology gives a market's member organizations flexibility in determining how to staff their trading floor operations, as well as flexibility in using that market's provided systems, networks, and terminals or interfacing their own technology. They are given assurance that their market will have the systems capacity and trading floor operations to handle daily trading volume in the billions of shares.

Member organizations get faster order handling and associated reports to their customers, along with speedier and enhanced market information. They also have a regulatory environment which assures member organizations that their customers, large and small, can trade with confidence. Technology also allows lower costs, despite increasing volumes and enhanced products. Companies listed on the NYSE are provided with an electronic link so they may analyze daily trading in their stock and compare market performance during various time periods.

The technology also supports the visibility of operations and information, and regulated auction-market procedures, which listed companies expect from their “primary” market in support of their capital-raising activities and their shareholder services. Institutions get enhanced information flow from the trading floor, using new wireless technologies, as to pre-opening situations, depth of market, and indications of buy/sell interest by other large traders.

Also supported are the fair, orderly, and deeply liquid markets which institutions require in order to allocate the funds they have under management whether placing orders in size for individual stocks (block orders) or executing programs (a series of up to 500 orders usually related to an index). For institutional investors, technology gives information on timely trades and quotes and makes them available through member firms, market data services, cable broadcasts and news media.

They also are provided with a very effective way of handling “smaller” orders, giving them communications priority and full auction-market participation for “price improvement”, yet turning the average market order around in 22 seconds. Price continuity and narrow quotation spreads, which are under constant market surveillance, and a regulatory environment which enforces trading rules designed to protect “small investors” are also supported. There are many different kinds of equipment used on the stock market.

One of these machines is SuperDot, an electronic order-routing system through which member firms of the NYSE transmit market and limit orders directly to the trading post where the stock is traded. After the order has been completed in the auction market, a report of execution is returned directly to the member-firm office over the same electronic circuit that brought the order to the trading floor. SuperDot can currently process about 2.5 billion shares per day. Another piece of machinery is the Broker Booth Support System.

The BBSS is a state-of-the-art order-management system that enables firms to quickly and efficiently process and manage their orders. BBSS allows firms to selectively route orders electronically to either the trading post or the booths on the trading floor. BBSS supports the following broker functions: receiving orders, entering orders, rerouting orders, issuing reports, research, and viewing other services via terminal “windows”. The overhead “crowd” display is America's first commercial application of large-scale, high-definition, flat-screen plasma technology. It shows trades and quotes for each stock.

The display also shows competing national market system quotes. Clear, legible information is displayed at wide viewing angles. Full color and video capabilities are also provided. The “Hospital Arm” Monitor is suspended for convenient viewing by specialists. Multiple data sources that are displayed include point-of-sale books, overhead “crowd” displays, market montage and various vendor services. The list of information sources is going to continue expanding. The Point-of-Sale Display Book is a tool that greatly increases the specialist’s volume handling and processing capabilities.

Using powerful workstation technology, this database system maintains the limit order book for which the specialist has agency responsibility, assists in the recording and dissemination of trades and quotation changes, and facilitates the research of orders. All of this serves to eliminate paperwork and speed the processing of orders. The Consolidated Tape System is an integrated, worldwide reporting system of price and volume data for trades in listed securities in all domestic markets in which the securities are traded.

The Hand-Held is a mobile, hand-held device that enables brokers to receive orders, disseminate reports, and send market “looks”, in both data and image format, from anywhere on the trading floor. The Intermarket Trading System (ITS) is a display system, installed in 1978, linking all major U.S. exchanges. ITS allows NYSE and NASDAQ specialists and brokers to compare the price of a security traded on multiple exchanges in order to get the best price for the investor. These are the machines that have helped greatly increase the buying and selling of stocks over the past few years.
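
The comparison ITS exists to support can be sketched very simply: gather quotes for the same security from several exchanges, then take the highest bid and the lowest offer. The Python below uses invented venue names and prices purely as an illustration, not any exchange's actual data or interface.

quotes = {
    "NYSE":   {"bid": 50.25, "ask": 50.50},
    "NASDAQ": {"bid": 50.30, "ask": 50.55},
    "CHX":    {"bid": 50.20, "ask": 50.45},
}

best_bid_venue = max(quotes, key=lambda venue: quotes[venue]["bid"])
best_ask_venue = min(quotes, key=lambda venue: quotes[venue]["ask"])

print("best place to sell:", best_bid_venue, quotes[best_bid_venue]["bid"])  # NASDAQ 50.30
print("best place to buy: ", best_ask_venue, quotes[best_ask_venue]["ask"])  # CHX 50.45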

There are great advantages to trading today over the situation that past traders had. The biggest beneficiaries of this new technology are investors themselves. They have all day to trade instead of trading only during market hours, they have more stocks to choose from, and the markets are very high, so people are making a lot of money. In conclusion, the research I have done on this project has revealed what I originally thought to be true: the stock market has greatly benefited from recent advances in technology.

Fragmentation, Dependence On Technology

It is a very busy world out there that we live in. With so many different things to do and different paths to go on, it is often overwhelming to pick and choose what to experience. Technology brings more to our doorsteps than can be imagined, and pop culture is changing every day. But we have choices, right? Horkheimer and Adorno charge that we in fact do not, and that the effects of the culture-driven industry are more damaging than we think. Through fragmentation, dependence on technology, alienation, and a loss of individuality, each of us lives under a set of conditions molded by the cultural industry we live in.

With the lack of a central value system, our culture has become fragmented. The assembly line mentality of Fordism has shifted the focus of importance from society to the machine. The culture emphasizes the buy mentality rather than values like decency and respect. Even on campus we are bombarded with sales pitches such as the recent Flex dollar campaign, “Buy More, Buy Now, Buy Fast. ” As the end of the year rolls around, the marketing department would like to remind you that you still have Flex dollars to burn and that you don’t want those dollars to go to waste.

Such a spending-focused society will obviously lose sight of values that are important to us. Ever since the industrial revolution, culture has become more and more dependent on the machine. With the rise of technology, society has increasingly grown dependent on technology for producing and experiencing life. Television, radio, movies, music, and the internet influence and determine how we experience life. If you miss last night’s episode of Friends you find it difficult to participate in today’s lunch conversation, banking is done with one simple click of the mouse, and did you hear Britney’s new single?

These elements define who we are and restrict who we can be. The pre-packaged culture technology offers us is tempting. It allows a common thread to bond perfect strangers (i.e., a discussion about a television show or a pop star), but it also strips us of control over our lives and limits our options, running our lives for us. With the control of technology looming over our heads, we become alienated from the culture in which we participate. Society determines our work life, when we take our leisure, and how we go about that leisure.

The consumption society that exists tells you what products to purchase, even when it tells you to choose what you want, as with Sprite’s slogan, “Obey your thirst.” The standardization of products, pre-interpretation and pre-packaging, disconnects the consumer from humanity. There are no unique elements in the culture you live in when the mass-production mentality takes over. With fragmentation, dependence on technology, and alienation, a loss of individuality occurs. Instead of buying items because of their value or uniqueness, we’re driven to buy things because they’re “new,” “hip,” and “cool.”

We then become the same as everyone else, cookie-cut out of pre-cooked dough, and we lose our individuality, what makes each of us unique. The culture becomes run by a machine and we become nameless, faceless, and identified by a number. Fragmentation, dependence on technology, alienation, and loss of individuality are Horkheimer and Adorno’s claims about the effects of the culture industry. Through these, culture loses its identity and is driven by the dollar. Individuals in such a society are told what to like, buy, eat, and do in everyday culture.

Radio Waves Essay

Radio waves travel at 186,000 miles per second through air. In contrast, sound waves travel at only 1/5 of a mile per second. If a modulation is made of the radio wave that exactly reproduces the amplitude and frequency characteristics of the original sound wave, then sound can be transmitted rapidly over long distances. This leads to a very interesting phenomenon. During a live broadcast in New York, the music will reach listeners in California a fraction of a second before it can be heard by the New York audience sitting in the back of the concert hall.
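A rough back-of-the-envelope check of that claim, sketched in Python. The coast-to-coast distance and the depth of the concert hall below are assumed figures for illustration, not values from the essay.

# Speeds quoted above.
RADIO_SPEED_MI_PER_S = 186_000     # miles per second
SOUND_SPEED_MI_PER_S = 1 / 5       # miles per second

# Assumed distances for illustration only.
NY_TO_CALIFORNIA_MI = 2_500        # rough coast-to-coast broadcast path
BACK_OF_HALL_MI = 150 / 5_280      # about 150 feet, converted to miles

radio_delay = NY_TO_CALIFORNIA_MI / RADIO_SPEED_MI_PER_S   # about 0.013 s
sound_delay = BACK_OF_HALL_MI / SOUND_SPEED_MI_PER_S       # about 0.14 s

print(f"Radio to California: {radio_delay * 1000:.1f} ms")
print(f"Sound to the back of the hall: {sound_delay * 1000:.1f} ms")

With these assumed distances, the radio signal arrives in roughly 13 milliseconds while the sound takes roughly 140 milliseconds to cross the hall, so the listener in California really does hear the music first.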

Radio transmissions are a combination of two kinds of waves: audio frequency waves that represent the sounds being transmitted and radio frequency waves that “carry” the audio information. All waves have a wavelength, an amplitude and a frequency. These properties of the wave allow it to be modified to carry sound information. In AM (amplitude modulation) radio transmissions, the amplitude of the combined audio frequency and radio frequency waves varies to match the audio signal.
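A minimal NumPy sketch of the idea, showing an audio tone modulating a carrier’s amplitude (AM) and, for comparison, its frequency (FM, discussed in the next paragraphs). The sample rate, frequencies and modulation depths are arbitrary illustrative values, scaled down so the arrays stay small.

import numpy as np

fs = 100_000                       # samples per second (illustrative)
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of signal

f_audio = 1_000                    # 1 kHz audio tone
f_carrier = 20_000                 # 20 kHz "carrier", far below real broadcast bands
audio = np.sin(2 * np.pi * f_audio * t)

# AM: the carrier's amplitude follows the audio signal.
am = (1 + 0.5 * audio) * np.sin(2 * np.pi * f_carrier * t)

# FM: the carrier's frequency deviates in step with the audio signal,
# so the phase is the running integral of the instantaneous frequency.
freq_deviation = 5_000             # Hz of deviation at peak audio amplitude
phase = 2 * np.pi * (f_carrier * t + freq_deviation * np.cumsum(audio) / fs)
fm = np.sin(phase)

print(am[:5], fm[:5])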

AM radio is subject to problems with static interference. Electromagnetic waves (like radio waves) are produced by the spark discharges in car ignition systems, by the brushes of electric motors and all sorts of electrical appliances, as well as in thunderstorms. This considerable background noise changes the amplitude of the radio wave signal, adding random crackling noises called static. In FM (frequency modulation) radio transmissions, the frequency of the combined waves changes to reproduce the audio signal.

For example, a higher carrier frequency is associated with the peak amplitude of the audio wave. FM waves do not have a problem with interference because the background noise does not modify the radio wave frequency. In addition, FM waves give better sound reproduction. Inventor Ernst Alexanderson was the General Electric engineer whose high-frequency alternator gave America its start in the field of radio communication. During his 46-year career with G.E., Swedish-born Alexanderson became the company’s most prolific inventor, receiving a total of 322 patents.

He produced inventions in such fields as railway electrification, motors and power transmission, telephone relays, and electric ship propulsion, in addition to his pioneering work in radio and television. In 1904, Alexanderson was assigned to build a high-frequency machine that would operate at high speeds and produce a continuous-wave transmission. Before the invention of his alternator, radio was an affair only of dots and dashes transmitted by inefficient crashing spark machines.

After two years of experimentation, Alexanderson finally constructed a two-kilowatt, 100,000-cycle machine. It was installed in the Fessenden station at Brant Rock, Massachusetts, on Christmas Eve, 1906. It enabled that station to transmit a radio broadcast which included a voice and a violin solo. Alexanderson’s name will also be recorded in history for his pioneering efforts in television and the transmission of pictures. On June 5, 1924, he transmitted the first facsimile message across the Atlantic.

In 1927 he staged the first home reception of television at his own home in Schenectady, New York, using high-frequency neon lamps and a perforated scanning disc. He gave the first public demonstration of television on January 13, 1928. The invention of the radio and the discovery of radio waves were very important to modern culture; they served as a basis of communication for Morse code and helped people communicate during the war. Nowadays, radio waves are still widely used by many companies for communication.

Kyllo, Danny V. United States

The main subject in the Kyllo case deals with the advance of modern technology and how it relates to constitutional law. The overall question in this case is whether thermal imaging technology may be used as a tool for searching the home of a person. The argument by the appellant, Mr. Kyllo, uses the unreasonable search and seizure clause of the Fourth Amendment as a defense against the use of thermal imaging systems without a warrant to search for illegal drug production inside his home.

Kyllo v. U.S. is currently pending before the United States Supreme Court, so the objective of this essay is to explain the procedural history of this case and to predict a final result and the implications of that prediction. The question presented to the court is: Does the 4th Amendment protect against the warrantless use of a thermal imaging device which monitors heat emissions from a person’s private residence? As with any case, before any court, it is important to understand all aspects of a case.

For example, the facts, procedural history, issues, holding(s), legal reasoning, sources of law, and values are all relevant to predicting a potential outcome as the U.S. Supreme Court sees it. The facts and procedural history of the case are as follows. On January 16, 1992, at 3:20 a.m., Sergeant Daniel Haas of the Oregon National Guard examined, from his parked car, a triplex of houses where Kyllo lived. The full nature of the examination involved the use of an Agema Thermovision 210 thermal imaging device to look for heat generated from inside the home of Kyllo.

The purpose of the examination was to locate an abnormally high heat source coming from inside Kyllo’s home, indicating the production of marijuana. If marijuana is grown inside, it must have some source of intense ultraviolet light to aid it. Haas did indeed locate a high heat source in Kyllo’s home with the Agema 210 and noted that Kyllo’s home showed much warmer than the other two houses in the triplex (Find Law). This indicated the presence of lights used to grow marijuana. This information was forwarded to William Elliot, an agent of the United States Bureau of Land Management.

Elliot had already subpoenaed Kyllo’s utility records, as Kyllo was already under investigation for the production of marijuana. With the information gathered by the use of the Agema 210, Elliot inferred that the high levels of heat emission indicated the presence of high-intensity lights used to grow marijuana indoors (Find Law). Elliot presented this information to a judge and was issued a search warrant. In searching Kyllo’s home, the Bureau of Land Management found more than one hundred marijuana plants, weapons and drug paraphernalia.

Kyllo was then indicted for manufacturing marijuana and filed a motion to suppress the evidence on the grounds that it was obtained illegally under the 4th Amendment. The district court denied Kyllo’s motion to suppress and he entered into a conditional guilty plea. Kyllo was sentenced to prison for 63 months. Kyllo appealed the denial of his motion to suppress, challenging the warrantless scan of his home with a thermal imager. In 1994, the 9th Circuit Court of Appeals reviewed whether the warrant used to search the home of Kyllo was based on knowingly and recklessly false information in the affidavit for the warrant (OTDNWU).

The court reversed and remanded the decision of the district court and sent the case back to hold an evidentiary hearing on the capabilities of the Agema 210. Again the district court denied Kyllo’s motion to suppress, concluding that warrantless searches of homes with the Agema are permissible. Kyllo then appealed again in 1998 to the 9th Circuit. The court of appeals found, in a 2-1 decision, that the use of thermal imaging systems was unconstitutional. The government petitioned for a rehearing and the case went back to the 9th Circuit, whose panel had by then lost one judge to retirement and picked up another.

This time the decision was 2-1, holding that the monitoring of heat emissions by a thermal imaging system does not intrude upon Kyllo’s privacy. Kyllo recently appealed to the U.S. Supreme Court, where the case is currently pending, with arguments expected to be heard in 2001. The main issue is a concern of privacy and how far the government can intrude into the lives of citizens. With technology developing so rapidly, it is difficult to rely on interpretations of the 4th Amendment and statutes that do not incorporate the newest technologies.

The question being asked of the Supreme Court is: Does the 4th Amendment protect against warrantless use of a thermal imaging device which monitors heat emissions from a person’s private residence? The current holding of the U.S. District Court in Oregon, affirmed by the 9th Circuit Court of Appeals, would suggest that the Supreme Court will further affirm that decision. However, the 9th Circuit panel holds only three judges, and that court had already once reversed and remanded the decision made by the District Court.

In order to predict what the Supreme Court will decide, it is important to investigate the legal reasoning behind the previous decisions made in the lower courts. Investigating the case further requires that we examine the reasons for the decisions already made. In the opinion of the court, Circuit Judge Hawkins gives reasons for the initial findings of the District Court of Oregon. The opinion states that the district court found that it (the Agema 210) was a non-intrusive device which emits no rays or beams and shows a crude visual image of the heat being radiated from the outside of the house (Find Law).

Hawkins goes on further in the opinion to say that the Agema 210 scan simply indicated that seemingly anomalous waste heat was radiating from the outside surface of the home, much like a trained police dog would be used to indicate that an object was emitting the odor of illicit drugs (Find Law). This analogy is difficult to apply to the use of thermal imaging devices because drug dogs have no specific targets. In this case, Kyllo’s home was targeted. Circuit Court Judge Noonan also used an analogy in his dissent.

The closest analogy is the use of a telescope that, unknown to the homeowner, is able from a distance to see into his or her house and report what he or she is reading or writing. Such an enhancement of normal vision by technology, permitting the government to discern what is going on in the home, violates the Fourth Amendment (OTDNWU). Noonan, an advocate of privacy, goes on to say that such activities can cause the emission of heat from the home which the Agema 210 can detect, and that the activity will be reported as well as where it is taking place (Find Law).

Noonan is suggesting that the decision of the court creates a precedent that would shield the government when it spies on people in their homes. However, previous cases that have already set precedent were also investigated. In the opinion of the court, Hawkins mentions two specific sources of law. Hawkins writes that while a heightened privacy expectation in the home has been recognized for purposes of Fourth Amendment analysis (Dow Chemical Co. v. U.S.), activities within a residence are not protected from outside, non-intrusive, government observation, simply because they are within the home or its curtilage (Florida v. Riley) (Find Law).

These two sources of law give Hawkins’s opinion good justification, but the dissent also finds legal precedent. In Montana v. Bullock and Peterson, 901 P.2d 61 (1995), the Supreme Court of Montana ruled that individuals have reasonable expectations of privacy (Find Law). In this case, reasonable expectations of privacy can be interpreted differently by different jurisdictions. This case challenged the legality of police searching property that they don’t own. The only problem with the sources of law is that there is no specific case that deals directly with modern technology and its use as a search and seizure tool.

There are, however, contextual factors that exist here. For instance, many Americans, including Judge Noonan, feel that there is a moral factor involved in deciding this type of case. If the District Court judgment is affirmed, it is possible that other technological advances such as satellite photography and video will invade the privacy of Americans. If the Supreme Court holds with the trend of the United States District Court of Oregon and the 9th Circuit Court of Appeals, then the ultimate interpretation of the 4th Amendment will be precedent for future search and seizure cases involving technological monitoring.

For this reason I believe that the U.S. Supreme Court will overturn the Circuit Court’s affirmation. The consequences of reversing the Circuit Court’s decision are few. The 4th Amendment would still protect the rights of citizens. The negative aspect is that some drug dealers will go unnoticed. This is only a slight inconvenience, given that thermal imaging may still be used if a warrant is obtained.

How to be Dumb

Now that Alan Cooper’s personas have become famous, one of the most prominent and well-known goals for user interface designers is not to make the user look stupid. This goal isn’t really new because we all know of situations where we or someone else looked horribly stupid when trying to do something on a computer. Even the smartest women and men can look stupid at a computer if they don’t know which button to click, menu command to call, or key to press – defenseless and exposed to the laughter and ridicule of other, less knowledgeable people.

I came across so many people who did not dare touch a computer in my presence, either because they feared destroying something on the computer or because they were afraid they would look stupid. As this problem is a really big issue for computer users, one of the most prominent and noble research areas for usability people should be to investigate how computers can avoid making people look stupid.

Figure 1: Like so many other personas, Gerhard – my personal persona – does not want to look stupid when working at the computer

Computers are Intransparent

In the early days, computers were totally intransparent: there were just some switches and light bulbs at the computer’s front panel that served for the communication with the “knowledgeable.” From time to time, the computers spit out a punched tape, which again required some machine to decode it. (The “experts,” however, could even decode the tape just by looking at it.) Later, computers printed out some more or less cryptic characters, and even later, the user communicated with the computer via keyboard, monitor and mouse – that’s the state we have today. But however sophisticated these devices are, we still look into the computers’ inner workings through a “peephole” called a monitor.

Do we really understand what state the computer is in, which commands it expects and what its cryptic error and system “messages” mean? No – computers often still leave us in the dark about what they expect from us, what they want us to do and what they can do for us. So, it’s no wonder that even the smartest people can look stupid in front of a computer, but even ordinary people like you and me can too.

Computers are Rigid Machines

As we all know, computers are mindless, rule-following, symbol-manipulating machines. They are just machines, though ruled not by the laws of mechanics but by the rules of logic and by the commands of their programs. Nevertheless, there is no inbuilt flexibility in computers; they just react according to the commands that have been programmed into them.

There have been long debates in the past about whether artificial intelligence based on symbol manipulation is possible. Some people have proven that it is, others have proven that it is not – in the end, this issue seems to be a matter of personal belief. So, let’s return to “real life.” We have all had the experience that computers are rigid in so many ways: they issue error messages, they do not find a file or search item if you misspell a name, they crash if given a wrong command. This stubbornness drives many users crazy: they feel stupid because they can’t remember even the simplest cryptic command. And they feel inferior to those “logical” machines because they are “fuzzy” human beings who commit so many errors.

Computers Can Cheat – But Not so Well…

But even if computers exhibit some flexibility, it is because farsighted programmers have programmed this flexibility into them. Often these programmers are not farsighted enough, or do not take human characteristics into account, such as the desire for a certain stability of the work environment. For example, there is a current trend to make computer systems adaptive in order to make them easier to use. The adaptive menus in the recent Microsoft applications are an example of this approach: the menus adapt their appearance according to their usage – with the result that people like me are puzzled each time they open a menu because it always looks different. So, today’s computers are even narrow-minded when they try to be flexible. They still make people look stupid, for example because the system changes its look and behavior in unpredictable ways.

Computers Are too Complex and Complicated for their Users

One of the arguments, often put forward by developers of complex software, is that it’s not the computers that are stupid but the users. Well, I let this stand as it is, but of course there are many occasions where average users are overwhelmed by the complexity of their computer hardware and software. There are so many things you have to remember and think of, far more than in a car or household. So, if you forget to bear in mind one important detail, all your efforts in trying to impress your friends or colleagues with how well you can master computer technology may be ruined within a second.

Let me illustrate this point with an example. Lately I took some photos of my friends with my awesome digital camera, actually a computer in itself. My friends were enthusiastic about the photos. OK, I said, and now I will print these images in the blink of an eye. My friends’ enthusiasm increased and I received many “oh’s” and “ah’s” because of how fast the process of taking and printing a photo could go. But then there was a problem – I had forgotten to reconnect the printer to the USB port because I had used my scanner on that port. However, when I connected the cable, the computer did not recognize the printer, despite all the hype and promises surrounding USB. Finally, I had to reboot the computer. So the blink of an eye turned into a quarter of an hour, and my friends had plenty of time for a couple of jokes about computers in general and about my “well prepared” equipment in particular.

Computers Can Be Mean and Wicked

Many of us can tell stories of evil computer experiences: the program that crashes shortly before you want to save an important mail or document that you worked on for several hours, or the printer that jams in the middle of printing the slides for an urgent presentation. Computers can even have allies in their wickedness. Again and again, someone prints a 100-page document with lots of graphics just when you are in a hurry and need to print a paper shortly before a meeting. I could continue with such stories for hours. So, how do computers know when the “right” moment has come to break down? This question still remains a mystery to me and requires further investigation. From Reeves and Nass’s book The Media Equation, we know that many people attribute human characteristics to computers and often treat them like humans – especially in those breakdown situations. On the other hand, we know that computers are rigid machines and typically do not care about human reactions and emotions. How can we resolve this contradiction? Some people believe that computers just follow the “law of maximum meanness,” similar to entropy in thermodynamics; others still believe there are demons inside computers.

I do not know who is right, but at least I have some examples to offer of how “wicked” computers make people appear stupid. Yes, computers can make your life hard at times, and they know well when the time is right – and you become a laughing stock. Presentations are a good time to make people look stupid because the presenters are in a hurry and nervous, since the presentation is supposed to make them look good. There is also an audience that is often grateful for the mishaps. Think of the presentations where the server for the demo is not available although it was available just a few minutes before; the hurried activities that follow are well suited for entertaining the audience. Or you want to go to the next slide in a presentation but it does not appear. You click a second time, and – oops – now you are past the slide, and the audience has some fun.

Conclusion

What can we do so that computers cannot make us look stupid? For some people, the simple solution is not to use computers at all. However, there are many people who have to use computers in their daily work. Not everyone is old enough for early retirement. So, my advice is to create computer applications that take human characteristics, human strengths and human weaknesses into account. As long as we require humans to adapt to the logic of machines, we will still stumble into situations where people look stupid while working with computers. But if we strive hard we will one day arrive at computer programs that accept their users as human beings with all their human limitations.

Essentials Of Robotics

Have you ever wondered how your car, your computer, or even a can of beans is made? Well, it is all done by a computer-controlled machine that is programmed to move, manipulate objects, and accomplish work while interacting with its environment (Robot). This complicated machine is called a robot. Robots have been used all over the world to help make dangerous or long, labored jobs a simple task (“Reaching”). They work in mines, industrial factories, consumer goods factories, and many more places. Robots are also used for personal hobbies, as seen in many movies, shows, etc. (Schoeffler).

Robots have existed for over 80 years and their potential is only growing more and more (“Robot”). Robots are essential to the world we live in today because of all the different things they are used for on a daily basis. Robots have been used in many dangerous environments, keeping humans from being harmed (“Reaching”). For example, the Department of Energy faces the enormous task of cleaning up radioactive waste and harmful chemicals accumulated during years of nuclear weapons production at sites across the country (“Robots work”). To clean this mess up the DOE uses robots.

This is a very practical way to prevent harm to humans from the radioactive material. This is one job where a human life is not to be risked. Also, the robots are very cost effective because of the risk involved and the fact that they never get tired (“Robots work”). For people to do the job the robots do, it would require very high pay and very skilled technicians (“Robots work”). It would be hard to find a skilled professional willing to risk their life for this job. Robots are also being used by the military to eliminate the need for manual rearming of battle tanks (“Reaching”).

This is good because once again it will provide a safe environment and increase efficiency. They will also help the army in terms of cost effectiveness. For instance, when tank after tank comes in for ammunition rearming, the job can be done without costly humans getting tired and needing fill-ins. Many scientists are now using robots to explore volcanoes which have the potential to erupt (“Reaching”). The robots are sent down on cables and later take soil samples and test for volcanic pressure (“Reaching”). This is helping the world to better understand and predict volcanic eruptions (“Reaching”).

It may later lead to the prevention of many volcanic disasters. Without these robots, many people would have to put their lives at risk for something like a nuclear waste cleanup. If anyone has ever wondered how a car is made, well, the answer is a robot. To do this, CAM (Computer Aided Manufacturing) computers operate machine tools that make various parts and components. They also instruct robots that weld and paint the car. Metal stamping is a method used to manufacture cars, where a machine (a robot) is programmed to shape metal into the form of a die.

This method makes producing parts for a car very efficient. For example, after the parts are ready, the robots can put together an average of 75 cars an hour. Imagine a human trying to do that many cars an hour. Welding robots are used to weld together the parts made by the die and produce the car body frame. This also makes the construction of cars a simple task. The robots can make many cars in a very short time and do it with extreme accuracy, which is very important when making a car. Robots are essential to the mass production of cars in the present day (“Automobile”).

Robots are used all the time in the making of consumer goods, such as clothing, food, toys, and much more (“Clothing”). In the food industry robots are used to transfer food from one assembly line to another, to can food, and to package final products (“Food”). This makes it easier for companies to use robots because they never need a break and can always perform their duties to the fullest (“Manufacturing”). Food packaging can be done precisely and very quickly with robots (“Food”). Also in the consumer industry, making clothes is much simpler when using robots (“Clothing”).

Robots are used to stitch clothes exactly the same way every time making it very easy to make mass quantities of the clothing (“Clothing”). They can also be used to produce letters, drawings, or patterns on clothing with exact accuracy (“Clothing”). In addition, many robots are used to manufacture toys; the molding machines are considered robots because of their ability to create a toy by means of instruction (Henningsen). Robots are also used to take the parts made by the injection molding machines and to assemble them to make a complete toy (Henningsen).

Once the toy product is done, robots can also sort the toys for shipping (Henningsen). All these products mentioned above are used worldwide all the time, and without robot assistance this would be nearly impossible. Robots play their biggest role in the production of consumer goods (“Robots”). Robotics is a personal hobby that many people thrive on. Robotics to some people is like basketball to Michael Jordan, and it is a way of life for many hobbyists. Personal robots are truly the most fascinating types of robots and the most commonly recognized ones.

For example, many people know about the robots in Star Wars, which talk and have human features. This is what most people would say a robot is. Well, that is exactly what a hobby robot is: the kind of robot that drives around interacting with its environment and entertaining its audience in the meantime (Schoeffler). Most commonly they have heads with eyes or resemble a living creature. They are like any other hobby. People try to improve their skills and try different methods. They enter nationwide contests and win prizes for having their robot perform a certain task the best.

Robots are essential in the world as far as hobbyists are concerned. Probably the fastest growing industry beginning to lead in the use of robots is the technology industry. Computers are becoming more and more important every day and people’s interest in them is growing like crazy. The first computer ever made took up a large room and could only compute what a basic calculator can do today; it was built by electricians doing tons of wiring. Today the processing core of a computer is microscopic and impossible for a human to build by hand. So how are they built, you ask?

Well, humans design and virtually build the CPU (Central Processing Unit) on a computer that is interfaced with many very precise robots. The robots use lasers and special tools to build the CPU. Robots are also used in computers and many other electronics to place and solder electronic components onto circuit boards. Circuit boards are sometimes very messy and require very accurate placing of components. Robots are a major factor in the technology age because it would not be possible without them (“Computer”). So, all in all, robots are proven to be an essential part of this world.

It has been a great lesson to learn about the importance of robotics in the food, clothing, and toy industries. Finding the importance of robots for jobs that are too dangerous or harmful for a human to perform was also very intriguing. It is amazing how robots can have such a strong presence in this world. Robots have the power to expand their uses, and more and more people are trying to come up with new uses every day or just modifying the current ones. We drive cars, operate computers, and eat canned food all the time and realize that, yes, robots are essential in our daily lives.

Future Technology Essay

People often think that the future is all about flying cars, robots and space travelling. Maybe it will be like that, who knows, but at least until this day the changes haven’t been remarkable. Companies are investing more and more money in research and development. This indicates that companies and governments are interested in achieving and finding new technological inventions that would change the markets. Already one of the computer-related inventions, the Internet, has changed the spreading of information globally.

E-companies’ stocks are rising in the stock markets like rockets. This is a great example of how future technology will change economics around the world as it greatly affects our everyday life. The Internet is a worldwide network of connected computers. This network enables you to communicate with the rest of the world in different ways. (1) It has been approximated that the total amount of information globally doubles every 18 months, which indicates that the Internet, as an important part of media nowadays, affects every one of us even though we might not have the possibility to be online.

The approximated number of people who are online daily is more than 18%. As you can imagine, and as you probably may have seen, there are a lot of companies. You can find the big ones like Coca-Cola, Disney, Xerox, and IBM. Apart from supplying (product) information and amusement, they mostly use the web for name and product branding (recognition). There’s a completely new industry with lots and lots of Net-based companies like the search engines, banner exchanges, hosting services, (Net) marketers and software enterprises.

And there are others which have expanded their originally offline business field to the Net (credit card companies, researchers, marketers, Yellow Pages). Small and medium-sized businesses sell to consumers as well. A great part of them use the Net to expand their offline business, others try to make a living on it. And some of them see the necessity to transfer from one to the other in the future. Business-to-business companies are also found on the Net. In short, all kinds of enterprises have taken the step into the online world.

The Internet is not only a way to spend time surfing, but it is also a very good way to make money by transforming products, services and markets. It is an easy way to reach people when thinking of advertising and an easy way for people to reach the information they want, but the competition between companies in the virtual reality of the Internet is as hard as in the real world. The government’s space program also influences and will influence the economics of the future. The U.S. government’s NASA (National Aeronautics and Space Administration) has done a great job exploring space and researching new opportunities in outer space and on other planets.

The question is how the new future technology will change the direction of economics and, by that, our living on Earth or maybe on some other planet. The world population is growing fast. The room to live on Earth might be a problem in the future, and Earth might not be able to feed the upcoming population. This is one of the reasons why we have to explore space for new opportunities. The problem is the money. Are taxpayers willing to pay?

After the recent failure of sending a $266 million Pathfinder to Mars, taxpayers started to doubt whether the space program is worth it, but mistakes that are caused by understaffed and overworked space teams are not unique to interplanetary missions like NASA’s Pathfinder mission. A single broken cord can turn into a $400 million cost, but who said it is not risky? Is this $450 billion plan going to give taxpayers their money back? No, but the new technology will help their children and grandchildren to live their everyday lives in the polluted and overpopulated environment caused by the past generations.

In recent years, cost-reduction efforts throughout America’s space industry have had profound effects on the workforce. Older and more experienced workers were the predominant target of cost-conscious layoffs or of contract swapping prior to retirement-benefits vesting. But even the younger workers, supposedly their eventual replacements, were victimized by the cuts. (3) This is what the taxpayers should understand: their selfish use of money on researching new technology might be a threat to the future generations.

If we were to bring back a rock in 2005 that clearly shows evidence of ancient life on the planet, or if we were to find evidence of life on Mars, that would be a great impetus for a human program. A manned mission must have a compelling scientific or economic rationale, said Alan Ladwig, NASA’s associate administrator. (4) The greatest effect of future technology is on productivity. Technological change, or innovation, is a contributor to the growth of productivity. From the development of plows to the invention of computers, history shows many examples of technologies that have increased productivity.

New products, new methods of production, new ways of organizing production (Ford’s assembly line) or marketing products, and new methods of communication can each demonstrate how productivity increases. And when productivity increases faster than the population, the standard of living increases. This makes people’s everyday life easier and the quality of living higher. One example of how technological change has improved our living over the past 10 years has been reusable products and materials. Recycling and reusable materials have made our quality of living better by minimizing the production of trash.

Also, the technological changes in agriculture have increased the productivity of our basic-need products. One of the most dramatic high-tech developments arriving at the millennium is the obsolescence of money. The advent of the Internet and other new media marketplaces, like interactive TV, demands a new kind of currency that is secure, virtual, global, and digital. The death of hard cash, and its rebirth as digital currency, will transform all transactions in society and touch industry worldwide.

The emerging digital market and the new interactive consumer challenge our assumptions about how to conduct business. 30 million people today, with a spending power of over $100 billion, represent a serious market no business can afford to ignore. This new consumer is virtual, global, interactive and multimedia-driven. (5) Digital money has taken over. Simple cash has changed into numbers on computers. People pay their bills from home by using computers and the Internet, people pay for their groceries with a plastic credit card, and people go shopping from home without even having to move, just using the keyboard. A huge problem in the future will be energy.

Already we are noticing that our sources of energy will be empty someday. A team of scientists and engineers has predicted that the technological trends that will shape the world in the next 50 years will be high-powered energy packages. On the energy front are high-power energy packages such as microgenerators of electricity that will make electronic products and appliances highly mobile; environmentally clean, decentralized power sources; batteries linked to solar power; and small generators fueled by natural gas.

As the population of the Earth keeps increasing, we have to figure out how to feed all the people who are going to live here. Globally thinking, we are already suffering from a lack of food. All over the world hunger is a big problem. Clean water will be a problem too if technological changes won’t help us. Designer foods, genetically engineered foods that are environmentally friendly and highly nutritious, will fill the stores. Even cotton and wool will be genetically engineered. Water worldwide will be safe and inexpensive because technology will provide advanced filtering, processing, and delivery.

Desalination and water extraction from air are also possible. In the years ahead new technologies will become much more personalized, and they will closely affect almost every aspect of our lives. (7) This was a very optimistic prediction of the future, but until then we have to keep people worldwide alive without the new innovations. The money countries are spending on the military should go to the people who suffer hunger and to the research of cures for globally spread diseases like HIV and cancer.

No one knows what’s going to happen in the future, but the new future technology can at least give us a direction. Our actions have a great effect on how we and the upcoming generations are going to live on Earth. Putting money now into research and development gives a better economic base that we can rely on. The biggest change to our economy will come from increased productivity. Through increased productivity our standard of living will be higher and our everyday life will be easier. May every one of us be there to witness the flying cars and talking robots, so that we can be proud of our achievements.

Mind and Machine: The Essay

Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise up to meet the free-market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged. Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness.

Exhaustive debate on consciousness and the possibility of consciousness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness. In John Searle’s collection of lectures, Minds, Brains and Science, the arguments centering on the mind-body problem alone are sufficient to convince a reasonable person that there is no way science will ever unravel the mysteries of consciousness. Key to Searle’s analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses.

Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, present technology constrains this achievement. The Weak AI Theorists (WATs), almost converse to the SATs, believe that if a machine performs functions that resemble a human’s, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle’s refutations of these respective theorists’ propositions to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future. The Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, etc., machines will be able to think, if you believe this proposition. Proposition two, in essence, relegates the human mind to the software bin.

Proponents of this proposition believe that humans just happen to have biological computers that run wetware as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced that, through context-input manipulation, a machine is intelligent, then it is. Proposition four is where the ends will meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions. Thus, if we replicate the computational power of the mind, we will then understand it.

Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to understand syntax, but not the semantics, or meaning, communicated thereby. Essentially, he makes his point by citing the famous Chinese Room thought experiment. It is here he demonstrates that a “computer” (a non-Chinese speaker, a book of rules and the Chinese symbols) can fool a native speaker, yet have no idea what it is saying. Proving that entities don’t have to understand what they are processing in order to appear to understand refutes proposition one.

Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory. A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation. The Chessmaster 4000 example is also an adequate refutation of Professor Simon’s fourth proposition: you can understand a process if you can reproduce it.

The fact that the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn’t mean that the computer is indeed intelligent. There are five basic propositions that fall into the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions. The second WAT proposition states that if a machine’s output, compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be intelligent.

Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. By solving problems based on information gathered from their respective surroundings and memory and by obeying rules of logic, it is proven that machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred. Therefore, the mind is just a big program (wetware). The fifth and final WAT proposition states that, since the mind appears to be wetware, dualism is valid.

Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically associated the state-of-the-art technology of the time with elements of intelligence and consciousness. An example of this is shown in the telegraph system of the latter part of the last century. People at the time saw correlations between the brain and the telegraph network itself. Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time doesn’t mean that it has any concept of counting or the meaning of time.

Defining the nature of rule-following is where the weakness lies with the third proposition. Proposition four fails to account for the semantic nature of symbol manipulation. Referring to the Chinese Room thought experiment best refutes this argument. By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactical information with seemingly semantic output, but again, is it really cognizant?

We have, through Searle’s arguments, amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks. Technologically, there is hope for some of the proponents of the Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines. Fuzzy logic was created as a generalization of Boolean logic designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1964, fuzzy logic enabled the modelling of the uncertainties of natural language.

Dr. Zadeh regards fuzzy theory not as a single theory, but as fuzzification, or the generalization of specific theories from discrete forms to continuous (fuzzy) forms. The meat and potatoes of fuzzy logic is in the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp. Conventional Boolean logical processes deal well with the binary nature of lights: they are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in.

Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data. Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. Such a system takes collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow a simple pattern: if x is low and y is high, then z is medium. Under this rule, x is the low value of a set of data (the light is off), y is the high value of the same set of data (the light is fully on), and z is the output of the inference, based upon the degree of fuzzy logic application desired.

It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The rules in a fuzzy logic expert system are collectively described as the rulebase. The fuzzy logic inference process follows three firm steps and sometimes an optional fourth. They are:

1. Fuzzification is the process by which the membership functions determined for the input variables are applied to their true values, so that the truthfulness of each rule may be established.
2. Under inference, truth values for each rule’s premise are calculated and then applied to the output portion of each rule.
3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.
4. Defuzzification is the optional process by which fuzzy data is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as potentiometer or lux values).
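The four steps can be sketched in a few lines of Python for the lamp example. The triangular membership functions, the single rule, and the weighted-average defuzzification used here are common choices but are my own illustrative assumptions, not a prescribed implementation.

def tri(x, a, b, c):
    """Triangular membership function peaking at b (a simple, common choice)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_lamp_level(ambient_light, desired_brightness):
    # 1. Fuzzification: map crisp inputs (0..1) onto fuzzy sets.
    ambient_low  = tri(ambient_light, -0.5, 0.0, 0.6)
    desired_high = tri(desired_brightness, 0.4, 1.0, 1.5)

    # 2. Inference: "if ambient is low and desired is high, then lamp is medium-high".
    #    The rule's truth value is the minimum of its premises (min acts as AND).
    rule_strength = min(ambient_low, desired_high)

    # 3. Composition: combine the rule outputs into a single fuzzy output variable.
    #    With one rule, the output set "medium-high" (centered at 0.7) is simply
    #    weighted by the rule strength.
    output_centers = [0.7]
    output_weights = [rule_strength]

    # 4. Defuzzification: collapse the fuzzy output to a crisp lamp setting
    #    using a weighted average (a centroid-style method).
    total = sum(output_weights)
    return sum(c * w for c, w in zip(output_centers, output_weights)) / total if total else 0.0

print(infer_lamp_level(ambient_light=0.2, desired_brightness=0.9))

A real rulebase would contain many such rules and output sets, but the same four steps apply to each of them.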

A new form of information theory is possibility theory. This theory is similar to, but independent of, fuzzy theory. By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data that is available, the better the possibilities that can be determined. The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future. Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain intelligence as the wheels of technological innovation turn.

A neural network is loosely based upon the design of the brain itself. Though the brain is impossibly intricate and complex, it has one reasonably well-understood feature in its networking of neurons. The neuron is the foundation of the brain itself; each one manifests up to 50,000 connections to other neurons. Multiply that by 100 billion, and one begins to grasp the magnitude of the brain’s computational ability. A neural network is a network of a multitude of simple processors, each of which has a small amount of memory. These processors are connected by unidirectional data buses and process only information addressed to them.
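Multiplying those two figures out gives a sense of the scale involved (using the counts quoted above):

neurons = 100_000_000_000          # roughly 100 billion neurons
connections_per_neuron = 50_000    # up to 50,000 connections each

total_connections = neurons * connections_per_neuron
print(f"on the order of {total_connections:.0e} connections")   # about 5e+15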

A centralized processor acts as a traffic cop for data, which is parcelled out to the neural network and retrieved in its digested form. Logically, the more processors connected in the neural net, the more powerful the system. Like the human brain, neural networks are designed to acquire data through experience, or learning. By providing examples to a neural network expert system, generalizations are made much as they are by young children learning about items (such as chairs, dogs, etc.). Modern neural network systems’ properties include a greatly enhanced computational ability due to the parallelism of their circuitry.

They have also proven themselves in fields such as mapping, where minor errors are tolerable, there is a lot of example data, and the rules are generally hard to nail down. Educating neural networks begins by programming a backpropagation of error, which is the foundational operating system that defines the inputs and outputs of the system. The best example I can cite is the Windows operating system from Microsoft. Of course, personal computers don’t learn by example, but Windows-based software will not run outside (or in the absence) of Windows.
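A minimal sketch of learning by example with backpropagation of error, written with NumPy. The tiny two-layer architecture, the XOR training set, the learning rate and the iteration count are all illustrative assumptions, not a description of any particular expert system.

import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: learn XOR from examples (the "learning by example" idea).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(a):
    # Append a constant column so each layer can learn a bias term.
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 4))   # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(size=(5, 1))   # 4 hidden units + bias -> 1 output

lr = 0.5
for _ in range(20_000):
    # Forward pass.
    hidden = sigmoid(with_bias(X) @ W1)
    output = sigmoid(with_bias(hidden) @ W2)

    # Backpropagation of error: push the output error back through the layers.
    d_out = (output - y) * output * (1 - output)
    d_hidden = (d_out @ W2[:-1].T) * hidden * (1 - hidden)

    # Gradient-descent weight updates.
    W2 -= lr * with_bias(hidden).T @ d_out
    W1 -= lr * with_bias(X).T @ d_hidden

print(np.round(sigmoid(with_bias(sigmoid(with_bias(X) @ W1)) @ W2), 2))
# Should approach [[0], [1], [1], [0]]; exact values depend on the random
# initialization, and a poor start may occasionally need a re-run.

The essential point is the loop: forward pass, compare with the example outputs, propagate the error backwards, and nudge the weights, repeated over many examples.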

One negative feature of educating neural networks by backpropagation of error is a phenomenon known as overfitting. Overfitting errors occur when conflicting information is memorized, so the neural network exhibits a degraded state of function as a result. At the worst, the expert system may lock up, but it is more common to see an impeded state of operation. By running programs in the operating shell that review data against a database, these problems have been minimized. In the real world, we are seeing an increasing prevalence of neural networks.

To fully realize the potential benefits of neural networks in our lives, research must be intense and global in nature. In the course of my research on this essay, I was privy to several institutions and organizations dedicated to the collaborative development of neural network expert systems. To be a success, research and development of neural networking must address societal problems of high interest and intrigue. Motivating the talents of the computing industry will be the only way we will fully realize the benefits and potential power of neural networks.

There would be no support, naturally, if there was no short-term progress. Research and development of neural networks must be intensive enough to show results before interest wanes. New technology must be developed through basic research to enhance the capabilities of neural net expert systems. It is generally acknowledged that the future of neural networks depends on overcoming many technological challenges, such as data cross-talk (caused by radio frequency generation of rapid data transfer) and limited data bandwidth.

Real-world applications of these intelligent neural network expert systems include, according to the Artificial Intelligence Center, Knowbots/Infobots and intelligent Help desks. These are primarily easily accessible entities that will host a wealth of data and advice for prospective users. Autonomous vehicles are another future application of intelligent neural networks. There may come a time in the future where planes will fly themselves and taxis will deliver passengers without human intervention. Translation is a wonderful possibility of these expert systems.

Imagine the ability to have a device translate your English spoken words into Mandarin Chinese! This goes beyond simple languages and syntactical manipulation. Cultural gulfs in language would also be the focus of such devices. Through the course of Mind and Machine, we have established that artificial intelligence’s function will not be to replicate the conscious state of man, but to act as an auxiliary to him. Proponents of Strong AI Thesis and Weak AI Thesis may hold out, but the inevitable will manifest itself in the end.

It may be easy to ridicule those proponents, but I submit that in their research into making conscious machines, they are doing the field a favor in the innovations and discoveries they make. In conclusion, technology will prevail in the field of expert systems only if the philosophy behind them is clear and strong. We should not strive to make machines that may supplant our causal powers, but rather ones that complement them. To me, these expert systems will not replace man – they shouldn’t. We will see a future where we shall increasingly find ourselves working beside intelligent systems.

Technological Changes of the Past and Present

The technology which surrounds almost everyone in modern society affects both work and leisure activities. Technology contains information that many would rather it did not have. It influences minds in good and bad ways, and it allows people to share information which they would otherwise not be able to attain. Even if a person does not own a computer or have credit cards, there is information on a computer somewhere about everyone. The technology which is just now beginning to be manipulated and harnessed is affecting the minds of small children and adolescents in ways that could be harmful.

It is affecting our immediate future. It also gives another form of communication and exchange of information which was not available before, information that is both good and bad. Technology is one of the principal driving forces of the future; it is transforming our lives and shaping our future at rates unprecedented in history, with profound implications which we can’t even begin to see or understand. Many different elements affect how satisfied we are with our lives. The impact of technology on these elements can change how safe, healthy and happy people feel.

Throughout history, people have looked for better ways to meet their needs and to satisfy their expectations. Technology has improved the way people feed, clothe and shelter themselves. Technology has also changed other aspects of everyday life, such as health care, education, job satisfaction, and leisure time activities. People have used technology since they first chipped stone blades to improve their hunting. Yet some people call the current age the “Technological Age” because of society’s dependence on technology. For the first time in human history, almost all the goods and services people use depend on technology.

The products of technology are available to almost everyone in society. The economy of a country influences how the people of the country live. Technology is often considered the key to a nation’s economic growth. Most economists would say that it is one of the factors in economic growth, but they would probably disagree about its importance. Many economists think that if technology sparks growth in one sector of the economy in the form of increased productivity, growth will also occur in other sectors of the economy. Jobs may be lost in one industry, such as agriculture, but new jobs may emerge in other sectors of the economy.

There may be more jobs or, in some cases, completely new kinds of jobs. Technology may also be used to solve urgent problems. Our growing population is using up finite supplies of natural resources. Innovations in technology can allow for more efficient use of limited or scarce resources. More products might be made from the same amount of raw material using new techniques. Technology can increase productivity to help countries compete with other countries in selling goods and services. Some say that without technological improvements, the economy would grow slowly or not at all.

Society could remain the same for years, somewhat like the early Middle Ages in Europe, in which there was little economic change for hundreds of years. Ways to manufacture goods have changed continuously through history. Today, several important new advances in technology are transforming the way goods are made. These technologies create new products; most of them also change the way people in society interact. These technologies have a tremendous impact on our monetary resources. Some of the technologies which are having the greatest effect on the economy are: robotics, automation and computerization.

Although robotics has a well-established position in Japanese industry, it has not, so far, turned out to be what many experts thought it would. Businesses in the United States and Europe have not embraced industrial robots at nearly the rate of the Japanese, and other more consumer-oriented versions are very much in the development phase. Even so, industry sources believe that the use of robots to make clothes and other consumer goods will be common by the turn of the century. This general trend (the use of robotics) is likely to change, perhaps dramatically, in the next two decades.

Robots are in one sense collections of other more basic technologies: sensors, controlling and analysis software, pattern recognition capabilities and so on. Almost all of these other technologies will make significant strides in capability, size, power requirements, and other design characteristics, and the integration of these other advances should accrue directly to robotics. Robots are machines which combine computer technology with industrial machines. The computers are programmed to operate the machines. Robots come in many shapes and sizes and can be programmed to perform a variety of tasks.

Robots are gradually being introduced on assembly lines in some industries. In automated factories, the amount produced by each human worker increases tremendously, but robots are very expensive for industries to buy. Only large industries such as the auto industry can currently afford them. The cost of robots is dropping, though, and improvements to robots are making them more flexible, so more manufacturers will find them useful. The use of robotics affects our economy immensely. Robots are much more durable, faster, more efficient, more reliable and cheaper “workers”.

The use of robots in industries will rise because employers will see the advantages that robots have over human employees. The utilization of robots in the workplace will have a massive effect on the unemployment rate.

Automation: Moving in a New Direction

A small number of decisions we make play a major role in shaping many other areas of our lives. For example, when we decide what (and how) we will consume, a huge system of farms, distributors, stores, manufacturers, restaurants and so on responds directly to those desires.

One of the most important decisions we make concerns the way we move ourselves and our commodities. Our system of transportation greatly affects how we use energy, develop technology, affect the economy and environment, and shape our social relationships. When Henry Ford was starting out on his remarkable career in Detroit, a bustling town which gave full vent to the creative energies of some amazing innovators, the old economy was showing enormous cracks. But at the time, even the most prescient of fortune-tellers would have had trouble forecasting what was about to happen.

Carriage and buggy-whip makers were still turning handsome profits in a growing market, and the few cars on the dusty, unpaved roads were little more than fanciful toys for the adventurous rich. Some of the communications technologies pioneered toward the end of the nineteenth century must have seemed just as esoteric to the leading financiers and industrialists of the day, who were doing fine bankrolling the traditional industries they knew so well. Yet, within a few short years, Ford and others would shape consumer products out of the new technologies that would set in motion an awesome economic transformation.

Henry Ford didn’t invent the automobile. Nor did he invent mass production or the assembly line. Ford is famous because he took these existing concepts and incorporated them into an efficient, large-scale system of manufacturing inexpensive, reliable cars. “I’m going to democratize the automobile,” Ford said, “and when I’m through, everybody will have one” (Chase, 1997, 47). Cars have made a big difference in the way communities have been designed. Street layout, the design of homes, and traffic laws have changed as methods of transportation have changed throughout history.

Automobiles are responsible for more than half the airborne pollution in the western world. Many plans are being developed to control air pollution. Burning cleaner fuel and burning fuel more efficiently both help the environment. Pollution control devices for cars have also been developed. For example, catalytic systems were installed in many car exhaust systems in the 1980s. These devices change dangerous gases into harmless carbon dioxide and water. They also burn up much of the exhaust with fresh air in a chamber near the exhaust pipe.

The car of the future will need new designs which make even better use of the fuel which powers it. Cars influence the ways communities are developing. Since it is possible to drive great distances rapidly, many people choose to live far away from where they work. Many cities have a downtown core where people work and a suburban area where they live. People may spend a great deal of time commuting through rush hour traffic. In spite of many problems, it is hard to imagine a society without cars.

Cars and trucks have become so important that most people would not want to do without them. They would prefer to see the design and construction of cars changed to accommodate safety and environmental concerns. The car has helped create jobs, freedom, convenience and fun as well as pollution, traffic jams and urban sprawl. The challenge facing the auto industry is to keep pace with the changing values of society and to develop the technology to do so.

Computerization: Extraordinary Technology

Computers are used in most manufacturing industries today.

Computers are used to automate processes in much faster ways. These can be office procedures such as word processing or bookkeeping, or production processes such as cutting and assembling clothes. Computers are becoming an important part of industrial design. Computer-aided design (CAD) and computer-aided manufacturing (CAM) are new terms which describe the important role computers have come to play in our industry. The wide use of computers has stimulated the companies which manufacture the many parts needed to make and operate them.

Some people, however, feel that computer technology has gone too far. It may create problems such as machine errors in people’s records, and banks and governments may gain access to private financial information. Computerization has made it easier for banks to keep track of individual banking transactions, so charges for these have increased. Branch-bank employees worry that computers and automated tellers may replace people. While technological change has been a priority for banks over the last few years, they also recognize the need to communicate in person with customers.

Banks must manage money and data effectively but they must also maintain personal relations. Bank personnel may be assisted by computers and some services may work well when automated, but banks will probably never lose their staffs to machines. A new, information-technology-driven circle of growth has replaced the aging manufacturing ring, and scarcely anyone has noticed. The statistics that told us so much about the economy’s health from the 1920s to the 1980s are still treated with a reverence they no longer deserve.

That’s why the experts have so much trouble explaining what’s going on now. The prophets mumbled about the severity of the recession in industry; rising unemployment; a weakening currency. Now, statistics can be managed to produce all sorts of results. But no matter how you shake or stir them, the numbers show plainly that a New Economy, embodied and driven by technology, information and innovation, has emerged, with little fanfare, in the past decade. And though it would be impossible to tell from the general statistics, this New Economy is absolutely booming, with no peak in sight.

Now, with the new wave of the Internet, the minds of not only small children but also adolescents and adults are influenced by this outside information. As the mind develops, things such as pornography are no longer the main concern. Now, because of the easy access to information, the fourteen-year-old who has just discovered that she failed ninth grade can find out how to make a bomb out of household detergents. As can the laid-off businessman, the dumped boyfriend, and the deranged psycho. My general sentiments about technology and the Internet are simple.

In light of the history of mass communication, there is nothing we can do to protect any media from the “sound bite” or any other form of commercial poisoning. But our country’s public opinion doesn’t have to fall into a nose-dive of lies and corruption because of it! Television doesn’t have to be a weapon against us, used to sway our opinions or to make us conform to people who care about their own prosperity, not ours. With the power of a critical thinking education, we can stop being motivated by the sound bite and, instead, we can laugh at it as a cheap attempt to persuade us and have a little fun with it.

Technology is not all bad. The whole point of this is that people have to be sure that everyone is aware of all the good and bad aspects of technology. I feel that the advance of technology is a good trend for our society; however, it must be in conjunction with advances in education so that society is able to master and understand technology. In the future we may see many problems arising from this new wave of technology. Unemployment numbers will most probably rise and crime will increase, but we can be the masters of technology, and not let it be the master of us.

Nanotechnology Report Essay

Nanotechnology is an anticipated manufacturing technology giving thorough, inexpensive control of the structure of matter. The term has sometimes been used to refer to any technique able to work at a submicron scale. Molecular manufacturing will enable the construction of giga-ops computers smaller than a cubic micron; cell repair machines; personal manufacturing and recycling appliances; and much more. Broadly speaking, the central thesis of nanotechnology is that almost any chemically stable structure that can be specified can in fact be built.

This possibility was first advanced by Richard Feynman in 1959 when he said: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. ” (Feynman won the 1965 Nobel prize in physics). This concept is receiving increasing attention in the research community. There have been three international conferences directly on molecular nanotechnology as well as a broad range of conferences on related subjects.

Science said “The ability to design and manufacture devices that are only tens or hundreds of atoms across promises rich rewards in electronics, catalysis, and materials. The scientific rewards should be just as great, as researchers approach an ultimate level of control – assembling matter one atom at a time.” “Within the decade, Foster or some other scientist is likely to learn how to piece together atoms and molecules one at a time using the STM.” (Referring to John Foster of IBM Almaden labs, who spelled “IBM” by pushing xenon atoms around with a scanning tunnelling microscope.)

Eigler and Schweizer at IBM reported on “the use of the STM at low temperatures (4 K) to position individual xenon atoms on a single-crystal nickel surface with atomic precision. This capacity has allowed us to fabricate rudimentary structures of our own design, atom by atom. The processes we describe are in principle applicable to molecules also”. Drexler has proposed the assembler, a device having a submicroscopic robotic arm under computer control. It will be capable of holding and positioning reactive compounds in order to control the precise location at which chemical reactions take place.

This general approach should allow the construction of large atomically precise objects by a sequence of precisely controlled chemical reactions, building objects molecule by molecule. If designed to do so, assemblers will be able to build copies of themselves, that is, to replicate. Because they will be able to copy themselves, assemblers will be inexpensive. We can see this by recalling that many other products of molecular machines–firewood, hay, potatoes–cost very little. By working in large teams, assemblers and more specialized nanomachines will be able to build objects cheaply.

By ensuring that each atom is properly placed, they will manufacture products of high quality and reliability. Left-over molecules would be subject to this strict control as well, making the manufacturing process extremely clean. The plausibility of this approach can be illustrated by the ribosome. Ribosomes manufacture all the proteins used in all living things on this planet. A typical ribosome is relatively small (a few thousand cubic nanometers) and is capable of building almost any protein by stringing together amino acids (the building blocks of proteins) in a precise linear sequence.

To do this, the ribosome has a means of grasping a specific amino acid (more precisely, it has a means of selectively grasping a specific transfer RNA, which in turn is chemically bonded by a specific enzyme to a specific amino acid), of grasping the growing polypeptide, and of causing the specific amino acid to react with and be added to the end of the polypeptide. The instructions that the ribosome follows in building a protein are provided by mRNA (messenger RNA). This is a polymer formed from the four bases adenine, cytosine, guanine, and uracil. A sequence of several hundred to a few thousand such bases codes for a specific protein.

The ribosome “reads” this “control tape” sequentially, and acts on the directions it provides. In an analogous fashion, an assembler will build an arbitrary molecular structure following a sequence of instructions. The assembler, however, will provide three-dimensional positional and full orientational control over the molecular component (analogous to the individual amino acid) being added to a growing complex molecular structure (analogous to the growing polypeptide). In addition, the assembler will be able to form any one of several different kinds of chemical bonds, not just the single kind (the peptide bond) that the ribosome makes.
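
To make the control-tape analogy concrete, here is a toy Python sketch (purely illustrative; the instruction fields and values are invented, and no real chemistry is modeled) of an assembler-style loop that reads a linear sequence of instructions and applies each one at a specified 3D position and orientation, in contrast to the ribosome’s strictly linear chain-building:

```python
# Toy model of an assembler reading a linear "control tape".
# Instruction fields (moiety, position, orientation, bond_type) are
# invented for illustration; a real assembler is far more complex.

control_tape = [
    ("carbon_dimer", (0.00, 0.00, 0.00), (0, 0, 1), "covalent"),
    ("hydrogen",     (0.15, 0.00, 0.00), (1, 0, 0), "covalent"),
    ("carbon_dimer", (0.00, 0.15, 0.00), (0, 1, 0), "covalent"),
]

workpiece = []  # the growing molecular structure, as placed components

for moiety, position, orientation, bond_type in control_tape:
    # Unlike a ribosome, which only appends to the end of a linear
    # polypeptide, the assembler places each component at an arbitrary
    # 3D position and orientation and can form different bond types.
    workpiece.append({
        "moiety": moiety,
        "position": position,
        "orientation": orientation,
        "bond": bond_type,
    })

print(f"Placed {len(workpiece)} components under positional control.")
```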

Calculations indicate that an assembler need not inherently be very large. Enzymes “typically” weigh about 10^5 amu (atomic mass units), while the ribosome itself is about 3 x 10^6 amu. The smallest assembler might be a factor of ten or so larger than a ribosome. Current design ideas for an assembler are somewhat larger than this: cylindrical “arms” about 100 nanometers in length and 30 nanometers in diameter, rotary joints to allow arbitrary positioning of the tip of the arm, and a worst-case positional accuracy at the tip of perhaps 0.1 to 0.2 nanometers, even in the presence of thermal noise.

Even a solid block of diamond as large as such an arm weighs only sixteen million amu, so we can safely conclude that a hollow arm of such dimensions would weigh less. Six such arms would weigh less than 10^8 amu. The assembler requires a detailed sequence of control signals, just as the ribosome requires mRNA to control its actions. Such detailed control signals can be provided by a computer. A feasible design for a molecular computer has been presented by Drexler.

This design is mechanical in nature, and is based on sliding rods that interact by blocking or unblocking each other at “locks.” This design has a size of about 5 cubic nanometers per “lock” (roughly equivalent to a single logic gate). Quadrupling this size to 20 cubic nanometers (to allow for power, interfaces, and the like) and assuming that we require a minimum of 10^4 “locks” to provide minimal control results in a volume of 2 x 10^5 cubic nanometers (0.0002 cubic microns) for the computational element. (This many gates is sufficient to build a simple 4-bit or 8-bit general purpose computer, e.g., a 6502.)
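
The volume figure quoted above is simple arithmetic; the short Python sketch below merely reproduces the calculation, using the 20 cubic nanometers per lock and 10^4 locks stated in the text:

```python
# Reproduce the rod-logic volume estimate from the text.
NM3_PER_LOCK = 20        # cubic nanometers per "lock", including overhead
NUM_LOCKS = 10**4        # minimal number of locks for a simple CPU

volume_nm3 = NM3_PER_LOCK * NUM_LOCKS            # 2 x 10^5 nm^3
volume_cubic_microns = volume_nm3 / 1e9          # 1 cubic micron = 10^9 nm^3

print(f"{volume_nm3:.1e} nm^3 = {volume_cubic_microns} cubic microns")
# -> 2.0e+05 nm^3 = 0.0002 cubic microns
```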

An assembler might have a kilobyte of high speed (rod-logic based) RAM (similar to the amount of RAM used in a modern one-chip computer) and 100 kilobytes of slower but more dense “tape” storage – this tape storage would have a mass of 10^8 amu or less (roughly 10 atoms per bit – see below). Some additional mass will be used for communications (sending and receiving signals from other computers) and power. In addition, there will probably be a “toolkit” of interchangeable tips that can be placed at the ends of the assembler’s arms.
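
The quoted tape-storage mass can be checked with back-of-the-envelope arithmetic. The Python snippet below does so, assuming roughly 12 amu per atom (carbon-like), which is my own ballpark assumption rather than a figure from the text:

```python
# Rough mass of the assembler's 100-kilobyte "tape" storage.
ATOMS_PER_BIT = 10        # figure quoted in the text
AMU_PER_ATOM = 12         # assume carbon-like atoms (my assumption)
TAPE_BYTES = 100 * 1024   # 100 kilobytes

tape_bits = TAPE_BYTES * 8
tape_mass_amu = tape_bits * ATOMS_PER_BIT * AMU_PER_ATOM
print(f"tape storage mass ~ {tape_mass_amu:.1e} amu")
# -> ~9.8e+07 amu, i.e. about 10^8 amu as stated.
```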

When everything is added up a small assembler, with arms, computer, “toolkit,” etc. should weigh less than 10^9 amu. Escherichia coli (a common bacterium) weighs about 10^12 amu. Thus, an assembler should be much larger than a ribosome, but much smaller than a bacterium. It is also interesting to compare Drexler’s architecture for an assembler with the Von Neumann architecture for a self-replicating device. Von Neumann’s “universal constructing automaton” had both a universal Turing machine to control its functions and a “constructing arm” to build the “secondary automaton.”

The constructing arm can be positioned in a two-dimensional plane, and the “head” at the end of the constructing arm is used to build the desired structure. While Von Neumann’s construction was theoretical (existing in a two dimensional cellular automata world), it still embodied many of the critical elements that now appear in the assembler. Should we be concerned about runaway replicators? It would be hard to build a machine with the wonderful adaptability of living organisms.

The replicators easiest to build will be inflexible machines, like automobiles or industrial robots, and will require special fuels and raw materials, the equivalents of hydraulic fluid and gasoline. To build a runaway replicator that could operate in the wild would be like building a car that could go off-road and fuel itself from tree sap. With enough work, this should be possible, but it will hardly happen by accident. Without replication, accidents would be like those of industry today: locally harmful, but not catastrophic to the biosphere.

Catastrophic problems seem more likely to arise through deliberate misuse, such as the use of nanotechnology for military aggression. Chemists have been remarkably successful at synthesizing a wide range of compounds with atomic precision. Their successes, however, are usually small in size (with the notable exception of various polymers). Thus, we know that a wide range of atomically precise structures with perhaps a few hundreds of atoms in them are quite feasible. Larger atomically precise structures with complex three-dimensional shapes can be viewed as a connected sequence of small atomically precise structures.

While chemists have the ability to precisely sculpt small collections of atoms there is currently no ability to extend this capability in a general way to structures of larger size. An obvious structure of considerable scientific and economic interest is the computer. The ability to manufacture a computer from atomically precise logic elements of molecular size, and to position those logic elements into a three-dimensional volume with a highly precise and intricate interconnection pattern would have revolutionary consequences for the computer industry.

A large atomically precise structure, however, can be viewed as simply a collection of small atomically precise objects which are then linked together. To build a truly broad range of large atomically precise objects requires the ability to create highly specific positionally controlled bonds. A variety of highly flexible synthetic techniques have been considered in . We shall describe two such methods here to give the reader a feeling for the kind of methods that will eventually be feasible. We assume that positional control is available and that all reactions take place in a hard vacuum.

The use of a hard vacuum allows highly reactive intermediate structures to be used, e.g., a variety of radicals with one or more dangling bonds. Because the intermediates are in a vacuum, and because their position is controlled (as opposed to solutions, where the position and orientation of a molecule are largely random), such radicals will not react with the wrong thing for the very simple reason that they will not come into contact with the wrong thing. Normal solution-based chemistry offers a smaller range of controlled synthetic possibilities.

For example, highly reactive compounds in solution will promptly react with the solution. In addition, because positional control is not provided, compounds randomly collide with other compounds. Any reactive compound will collide randomly and react randomly with anything available. Solution-based chemistry requires extremely careful selection of compounds that are reactive enough to participate in the desired reaction, but sufficiently non-reactive that they do not accidentally participate in an undesired side reaction.

Synthesis under these conditions is somewhat like placing the parts of a radio into a box, shaking, and pulling out an assembled radio. The ability of chemists to synthesize what they want under these conditions is amazing. Much of current solution-based chemical synthesis is devoted to preventing unwanted reactions. With assembler-based synthesis, such prevention is a virtually free by-product of positional control. To illustrate positional synthesis in vacuum somewhat more concretely, let us suppose we wish to bond two compounds, A and B.

As a first step, we could utilize positional control to selectively abstract a specific hydrogen atom from compound A. To do this, we would employ a radical that had two spatially distinct regions: one region would have a high affinity for hydrogen while the other region could be built into a larger “tip” structure that would be subject to positional control. A simple example would be the 1-propynyl radical, which consists of three co-linear carbon atoms and three hydrogen atoms bonded to the sp3 carbon at the “base” end.

The radical carbon at the radical end is triply bonded to the middle carbon, which in turn is singly bonded to the base carbon. In a real abstraction tool, the base carbon would be bonded to other carbon atoms in a larger diamondoid structure which provides positional control, and the tip might be further stabilized by a surrounding “collar” of unreactive atoms attached near the base that would prevent lateral motions of the reactive tip. The affinity of this structure for hydrogen is quite high.

Propyne (the same structure but with a hydrogen atom bonded to the “radical” carbon) has a hydrogen-carbon bond dissociation energy in the vicinity of 132 kilocalories per mole. As a consequence, a hydrogen atom will prefer being bonded to the 1-propynyl hydrogen abstraction tool in preference to being bonded to almost any other structure. By positioning the hydrogen abstraction tool over a specific hydrogen atom on compound A, we can perform a site specific hydrogen abstraction reaction. This requires positional accuracy of roughly a bond length (to prevent abstraction of an adjacent hydrogen).
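
As a rough illustration of why this abstraction is favorable, the sketch below compares the quoted ~132 kcal/mol bond strength at the tool tip with an assumed ~100 kcal/mol for a typical hydrocarbon C-H bond; the 100 kcal/mol value is a ballpark number of my own for illustration, not one taken from the text:

```python
# Crude thermodynamics of site-specific hydrogen abstraction.
BDE_TOOL = 132       # kcal/mol, C-H bond at the 1-propynyl tip (from the text)
BDE_SUBSTRATE = 100  # kcal/mol, assumed typical hydrocarbon C-H bond (ballpark)

# Enthalpy change: break the substrate C-H bond, form the tool C-H bond.
delta_h = BDE_SUBSTRATE - BDE_TOOL
print(f"Estimated reaction enthalpy: {delta_h} kcal/mol")
# -> -32 kcal/mol (exothermic), so the hydrogen prefers the abstraction tool.
```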

Quantum chemical analysis of this reaction by Musgrave et al. shows that the activation energy for this reaction is low, and that for the abstraction of hydrogen from the hydrogenated diamond (111) surface (modeled by isobutane) the barrier is very likely zero. Having once abstracted a specific hydrogen atom from compound A, we can repeat the process for compound B. We can now join compound A to compound B by positioning the two compounds so that the two dangling bonds are adjacent to each other, and allowing them to bond.

This illustrates a reaction using a single radical. With positional control, we could also use two radicals simultaneously to achieve a specific objective. Suppose, for example, that two atoms A1 and A2 which are part of some larger molecule are bonded to each other. If we were to position the two radicals X1 and X2 adjacent to A1 and A2, respectively, then a bonding structure of much lower free energy would be one in which the A1-A2 bond was broken, and two new bonds A1-X1 and A2-X2 were formed.

Because this reaction involves breaking one bond and making two bonds (i.e., the reaction product is not a radical and is chemically stable) the exact nature of the radicals is not critical. Breaking one bond to form two bonds is a favored reaction for a wide range of cases. Thus, the positional control of two radicals can be used to break any of a wide range of bonds. A range of other reactions involving a variety of reactive intermediate compounds (carbenes are among the more interesting ones) are proposed in , along with the results of semi-empirical and ab initio quantum calculations and the available experimental evidence.

Another general principle that can be employed with positional synthesis is the controlled use of force. Activation energy, normally provided by thermal energy in conventional chemistry, can also be provided by mechanical means. Pressures of 1.7 megabars have been achieved experimentally in macroscopic systems. At the molecular level such pressure corresponds to forces that are a large fraction of the force required to break a chemical bond.

A molecular vise made of hard diamond-like material with a cavity designed with the same precision as the reactive site of an enzyme can provide activation energy by the extremely precise application of force, thus causing a highly specific reaction between two compounds. To achieve the low activation energy needed in reactions involving radicals requires little force, allowing a wider range of reactions to be caused by simpler devices (e.g., devices that are able to generate only small force). Further analysis is provided in .

Feynman said: “The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed – a development which I think cannot be avoided. ” Drexler has provided the substantive analysis required before this objective can be turned into a reality. We are nearing an era when we will be able to build virtually any structure that is specified in atomic detail and which is consistent with the laws of chemistry and physics. This has substantial implications for future medical technologies and capabilities.

One consequence of the existence of assemblers is that they are cheap. Because an assembler can be programmed to build almost any structure, it can in particular be programmed to build another assembler. Thus, self reproducing assemblers should be feasible and in consequence the manufacturing costs of assemblers would be primarily the cost of the raw materials and energy required in their construction. Eventually (after amortization of possibly quite high development costs), the price of assemblers (and of the objects they build) should be no higher than the price of other complex structures made by self-replicating systems.

Potatoes – which have a staggering design complexity involving tens of thousands of different genes and different proteins directed by many megabits of genetic information – cost well under a dollar per pound. The three paths of protein design (biotechnology), biomimetic chemistry, and atomic positioning are parts of a broad bottom up strategy: working at the molecular level to increase our ability to control matter. Traditional miniaturization efforts based on microelectronics technology have reached the submicron scale; these can be characterized as the top down strategy.

Should any limits be placed on scientific developments

Man, powered by his imagination and inquisitive character, has wondered about the mechanisms of Nature since time immemorial. This quest for the truth, for the ways in which his surroundings work, has led to many a scientific discovery and innovation. Since the art of making fire and creating handcrafted tools, our civilization has come a long way. Science and Technology are making advances at an amazing rate. From telephones to the Internet, calculators to computers, cars to rockets and satellites, we are submerged in a sea of discoveries and inventions made possible by Science.

Fields like Medicine and communications have made inroads into our cultures and thus our lifestyles. So vast is the impact of Science in our lives that people fear the unthinkable. It leads them to accusations such as that Science tries to play God, as in the case of cloning. Repeatedly, it is also heard that we are so dependent on Science and Technology that we who create it are nothing but slaves to it. However, I feel that it would not be wrong to term Science a friend of Humanity.

This faithful friend has come through many a time. We have reaped innumerable benefits out of this friendship. Therefore, in the question of whether any limits should be placed on scientific developments, we have to weigh these benefits against the costs. What better field, then, to ground our discussion than medicine and forensics, which has stirred much controversy? Medicine has helped humankind in uncountable ways. People have started taking charge of their own health and life.

Therefore, the life expectancy of a person living in the nineties is about twenty years longer on average than that which people enjoyed at the start of the last century. By virtue of medicine, not only does a person live longer but he also lives his life to the fullest in the best of health. Deadly diseases such as smallpox, plague and polio have caused a large number of epidemics resulting in major loss of life. The Plague Epidemic of London in the 1600s had wiped out nearly a fifth of its population.

Research and scientific effort led many scientists to find cures or preventive vaccinations for these life-threatening diseases. Today these diseases have been largely eradicated from the face of the earth. Thanks to our Science, millions of lives have been saved from the clutches of these evils. The field of medicine today is well equipped to cope with the health problems faced by man. The Science behind Medicine has led to awareness and preventive education among the public.

Antibiotics and other medicines sometimes help us fight life-threatening conditions. In short, patients are often handed a second chance to live. We are no longer at the complete mercy of nature. The right to choose and to take control of one’s life has been passed down to the individual. As pointed out by Willard Gaylin in his essay, Harvesting the Dead, science has essentially changed the definition of death. Now although a person could be declared dead, he could have willed his usefulness beyond his mortality.

Medical technology has reached a point where organs can be transplanted from one individual to another. However, many see red in such an act, viewing it as desecration of a human body. But by donating his or her organs, the person would not only have saved someone’s life, he would have also found a meaning not only in his lifetime but also in his death. Medicine has often been cited as a means to over-population. Sure, it helps us live a little longer, but it also provides us with birth control techniques such as contraceptives and sterility operations to help prevent it.

Speaking along this line, instead of blaming science for our troubles, would it not be right to blame those who do not heed the advice provided by Science and do not make use of the technology it has provided to curb over-population? Issues such as euthanasia and abortion have always been topics of debate in the field of medicine. Many equate these to murder and protest against their use. Even before the present-day techniques were developed, people already had in place procedures that essentially had similar goals to what is now termed euthanasia.

Science has just provided us with simpler ways that are not tough on the patients themselves. This, in my opinion, is not a justification but simply a fact. Further, man controls the use of any technology. It is a question of the ethics of the person resorting to such means. If there is enough reason and rationale behind it, then it can be judged as an act of mercy. On the other hand, an abuse of this technology is nothing but a murder. Even if there are a few cases of abuse of this science, we cannot possibly discount it as evil, because it is its use that is bad.

There has been a lot of discussion and hype surrounding the recently unveiled Human Genome Project. As one of the researchers puts it, “It has opened a library of life which might take at least a century to explore”. With such a huge database at our command, there is no telling where scientific developments might lead us next. The mission of the Genome Project is to identify the thousands of genes found in man, determine their sequences, and interpret the data to find solutions to some of the unsolved questions on human life.

Though finding facts about our bodies is its main emphasis, it would also look into the possible ethical and legal consequences of unveiling such data to humankind. The project has been speculated to stop and even reverse the aging process. In short, it might be possible to bridge the gap between life and death. This bold claim has caused an uproar amongst people, who say that by acting in such a manner, we humans are trying to play the role of God. However, according to Capra in The Tao of Physics, Science is trying to find the basic stuff that constitutes reality.

This research has shed an enormous amount of light on life. Though only a piece of the great jigsaw puzzle of life, it leads us one step closer to the whole picture. Understanding the data helps us find the meaning of life and who we really are. It helps us figure out why we act and behave in the manner that we commonly do. Thus, with a better understanding of our bodies, we progress towards the conditions our bodies perhaps want. If this is so, then it can only result in better living standards. What kind of God would not yearn for the prosperity of His people?

The Genome Project supplies us with valuable information, which tends to further the good done by Medicine. The knowledge obtained about our DNA could help us to diagnose, treat and someday prevent the thousands of disorders and illnesses that affect us. Learning our genetic codes could help us determine the modes of attack used by pathogens and viruses and thus wipe out deadly diseases such as malaria from humanity. Another possible use of this vast information can be marked out in Genetic Screening of pregnant mothers and their fetuses.

Some people see red in this, citing discrimination against less fortunate individuals who are genetically at risk for some disease of which they do not even show symptoms. However, every coin has two sides to it. If the prediction turns out to be right, is it right to put that unborn child through a lifelong agony of pain and suffering, always depending on others to fend for them? The democracy that we live in guarantees its citizens the freedom to choose.

After such genetic screening, all that we would be doing is handing the parents of the unborn child the right to choose life or abortion for their child. After all, they are going to care for him or her for the rest of their lives. The creation of Dolly, the sheep famed for being the first ever clone, has since led to the opening of a Pandora’s box. The possibility of cloning human beings is now very much a reality. Cloning involves many religious, moral and ethical issues that need to be addressed. Is it unnatural?

Are we playing God? Compelling arguments state that cloning of both human and non-human species results in harmful physical and psychological effects on both groups: cloning of human beings would result in severe psychological effects in the cloned child, and the cloning of non-human species subjects them to unethical or immoral treatment for human needs. However, cloning is good news for infertile couples hoping for a child of their own, and it can be used to clone animals on the brink of extinction to ensure their survival.

If cloning were allowed to be further developed, scientists would be able to clone body organs which are an exact replica of an individual’s body organs. This would prove to be very beneficial to a person who may have lost a body organ. For a long time, especially around the period of the First World War, there had been talk of eugenics (“happy genetics”). It is simply a breeding program for humans with certain desirable characteristics, for the benefit of humans. In recent times there has been much talk about designer babies, whereby the genetic make-up of the future child is carefully selected and planned.

Thus these babies could possess everything a potential parent might want in a child: good looks, intelligence, perfect eyesight, athletic abilities, lower risk of illnesses and so on, right down to the little details like blond hair and blue eyes. While this pursuit of perfection could benefit society with smarter people or fewer health problems, the future might be bleak for the children who were conceived naturally; they might even harbour feelings of resentment against parents who could, but didn’t, give them the best.

In my opinion, science, like most things, has a good and a bad side to it. Douglas Shrader tries to explain through the Utilitarianism Principle that if an act produces more good than harm for a society, it can be reasoned out as the right thing to do for the society as a whole. Similarly, if we take a balance and weigh the benefits and costs of scientific developments, we would find that the case is not even close. The benefits such scientific discoveries could and have endowed on humanity far outweigh the costs.

The ethical and moral implications associated with it make it difficult to draw the line of limit. However, looking more closely at our world, we find ethics in most of the disciplines, including religion. People can take advantage of any field if they wish to, but our social and political ties prevent most of us from acting in ways considered taboo. The few who still work in ways that disrupt the social structure are often rejected. Further, there are laws in place to guarantee that no one’s right to freedom of choice is infringed upon.

The Evolution Of The Microprocessor

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such a device that changes the way we work, live, and play is a special one, indeed. The microprocessor has been around since 1971, but in the last few years it has changed American life in everything from calculators to video games and computers (Givone 1). Many microprocessors have been manufactured for all sorts of products; some have succeeded and some have not. This paper will discuss the evolution and history of the most prominent 16 and 32 bit microprocessors in the microcomputer and how they are similar to and different from each other.

Because microprocessors are a subject that most people cannot relate to and do not know much about, this paragraph will introduce some of the terms that will be involved in the subsequent paragraphs. Throughout the paper the 16-bit and 32-bit microprocessors are compared and contrasted. The number 16 in the 16-bit microprocessor refers to how many bits wide its registers are, or how much storage is available to the microprocessor (Aumiaux, 3). The microprocessor has a memory address such as A16, and at this address the specific commands to the microprocessor are stored in the memory of the computer (Aumiaux, 3).

So with the 16-bit microprocessor there are 2^16, or 65,536, places to store data. With the 32-bit microprocessor there are vastly more places to store data, making the microprocessor far more capable. Another common term which is mentioned frequently in the paper is the oscillator, or the rate at which the processor’s clock ticks. The oscillator is the pacemaker for the microprocessor, which sets the frequency at which the microprocessor can process information; this value is measured in megahertz, or MHz.
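
The address-space arithmetic behind these figures is just powers of two; the short Python sketch below works out the counts for the address widths that come up later in the paper (the per-chip widths are those quoted from the sources, not new measurements):

```python
# Number of distinct addresses reachable with an n-bit address.
def address_space(bits):
    return 2 ** bits

for bits in (16, 20, 24, 32):
    print(f"{bits}-bit address: {address_space(bits):,} locations")

# 16 bits -> 65,536 locations (the 64K memory of the 9900 series)
# 20 bits -> 1,048,576 locations (the 1 MB the 8086 can address)
# 24 bits -> 16,777,216 locations (the 16 Mbyte space of the 32000 family)
# 32 bits -> 4,294,967,296 locations (4 gigabytes)
```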

A nanosecond is a measurement of time in a processor, a billionth of a second. This is used to measure the time it takes for the computer to execute an instruction, otherwise known as a cycle. There are many different companies, each of which has its own family of processors. Since the individual processors in the families were developed over a fairly long period of time, it is hard to distinguish the order in which the processors were introduced. This paper will mention the families of processors in no particular order.
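
Clock frequency and cycle time are reciprocals of each other; the small sketch below converts between them. The example frequencies are the 9900-series values quoted a little later in the paper; the conversion itself is standard arithmetic rather than anything taken from the sources:

```python
# Convert a clock frequency in MHz to a cycle time in nanoseconds.
def cycle_time_ns(freq_mhz):
    # 1 MHz = 10^6 cycles/second; 1 second = 10^9 ns, so T = 1000 / f.
    return 1000.0 / freq_mhz

for mhz in (2.0, 3.3, 4.0):
    print(f"{mhz} MHz -> {cycle_time_ns(mhz):.0f} ns per clock cycle")

# 2.0 MHz -> 500 ns, 3.3 MHz -> ~300 ns, 4.0 MHz -> 250 ns,
# which matches the 300-500 ns range quoted for the 9900 series.
```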

The first microprocessor that will be discussed is the family of microprocessors called the 9900 series, manufactured by Texas Instruments during the mid-70s and developed from the architecture of the 990 minicomputer series (Titus, 178). There were five different actual microprocessors that were designed in this family; they were the TMS9900, TMS9980A, TMS9981, TMS9985, and the TMS9940. The TMS9900 was the first of these microprocessors, so the next four of the microprocessors were simply variations of the TMS9900 (Titus, 178).

The 9900 series microprocessors run with 64K of memory, and although the 9900 is a 16-bit microprocessor, only 15 of the address memory circuits are in use (Titus, 179). The 16th address is used for the computer to distinguish between word and data functions (Titus, 179). The 9900 series microprocessors run at cycle times from 300 nanoseconds to 500 ns at clock rates from 2 MHz to 3.3 MHz, and some variations of the original microprocessor were even made to go up to 4 MHz (Avtar, 115). The next microprocessor that will be discussed is the LSI-11, which was produced from the structural plans of the PDP-11 minicomputer family.

There are three microprocessors in the LSI-11 family: the LSI-11, the LSI-11/2, and the much improved LSI-11/32 (Titus, 131). The big difference between the LSI-11 family of microprocessors and other similar microprocessors of its kind is that they have the instruction codes of a microcomputer, but since the LSI-11 microprocessor originated from the PDP-11 family it is a multi-microprocessor (Avtar, 207). The fact that the LSI-11 microprocessor is a multi-microprocessor means that many other microprocessors are used in conjunction with the LSI-11 to function properly (Avtar, 207).

The LSI-11 microprocessor directly processes 16-bit words and 7-bit data; however, the improved LSI-11/22 can directly process 64-bit data (Titus, 131). The average time at which the LSI-11 and LSI-11/2 process is 380 nanoseconds, while the LSI-11/23 is clocked at 300 nanoseconds (Titus, 132). There are some great strengths that lie in the LSI-11 family, among them the efficient way in which the microprocessor processes and the ability to run minicomputer software, which leads to great hardware support (Avtar, 179).

Although there are many strengths to the LSI-11 family there are a couple of weaknesses: limited memory and the slow speed at which the LSI-11 processes (Avtar, 179). The next major microprocessors in the microcomputing industry were the Z8001 and Z8002; however, when they entered the market the term Z8000 was used to mean either or both of the microprocessors (Titus, 73). So when describing the features of both the Z8001 and the Z8002, they will be referred to as the Z8000. The microprocessor was designed by the Zilog Corporation and put out on the market in 1979 (Titus, 73).

The Z8000 is a lot like the many other previous microprocessors, except for the obvious fact that it is faster and better, but it is similar because it depends on its registers to function properly (Titus, 73). The Z8000 was improved by using 21 16-bit registers, 14 of them used for general purpose operations (Titus, 73). The difference between the Z8001 and the Z8002 is that the Z8002 can only address 65K bytes of memory, which is fascinating compared to the microprocessors earlier in time but is greatly inferior to the Z8001, which can address 8M bytes (8000K) of memory (Titus, 73).

The addressable memory of the two otherwise very similar microprocessors is drastically different, whereas other functions of the microprocessors seem to be quite the same. An example of this is the cycle time. The cycle time is 250 nanoseconds and the average number of cycles that occur per instruction is between 10 and 14 for both microprocessors (Avtar, 25). The next microprocessor that will be discussed is the 8086. This microprocessor is, in my opinion, the best out of all the 16-bit microprocessors.

Not only because the speeds of processing are tremendous, but because it simply paved the way to the 32-bit microprocessors using various techniques that will be discussed later. The 8086 was the second Intel microprocessor (being preceded by the 8080) (Avtar, 19). The 8086 was introduced in early 1978 by Intel (Avtar, 19). Like so many of the other processors the 8086 is register oriented, with fourteen 16-bit registers, eight of which are used for general processing purposes (Avtar, 19). The 8086 can directly address 1 MB (1,048,576 bytes), which is used only in accessing Read Only Memory.

The basic clock frequency for the 8086 is between 4 MHz and 8 MHz, depending on the type of 8086 microprocessor that is used (Avtar, 20). Up until this point in the paper there have been common recurring phrases such as a microprocessor containing 14 16-bit registers. At this time in the evolution of microprocessors comes the 32-bit register, which obviously has double the capacity to hold information for the microprocessor. Because of this simple increase of the register capacity we have a whole different type of microprocessor.

Although the 16-bit and 32-bit microprocessors are quite different (meaning the latter have more components and such), the 32-bit microprocessors will be described in the same terms as the 16-bit microprocessors were. The remainder of the paper will discuss the 32-bit microprocessor series. The external data bus is another term that will be referred to in the remainder of the paper. The data bus is basically what brings data from the memory to the processor and from the processor to the memory (Givone, 123).

The data bus is similar to the registers located on the microprocessor but is a little bit slower to access (Givone, 123). The first 32-bit microprocessor in the microprocessor industry that will be discussed is the series 32000 family, which was originally built for main-frame computers. In the 32000 family all of the different microprocessors have the same 32-bit internal structure, but may have external bus widths such as 8, 16, or 32 bits (Mitchell, 225). In the 32000 family the microprocessors use only 24 of the potential 32 address bits, giving the microprocessor a 16 Mbyte address space (Mitchell, 225).

The 32-bit registers are set up so there are six 32-bit dedicated registers and then, in combination, there are two 16-bit dedicated registers (Mitchell, 231). Each dedicated register has its own type of specific information that it holds for processing (Mitchell, 232). The microprocessor’s oscillator (which now comes from an external source) runs at 2.5 MHz, but due to a divide-by-four prescaler the clock frequency runs at 10 MHz. There have been many new ideas put into practice to improve the 32000 series microprocessor generally, thus making it run faster and more efficiently.

The next family of microprocessors which was fabricated for the microcomputer is the MC68020 32-bit microprocessor, which is based on the MC68000 family. The other microprocessors that are included in this family are the MC68000, MC68008, MC68010 and the MC68012 (Avtar, 302). Before going into the types of components that this microprocessor contains, it should first be known that the making of the MC68020 has been the product of 60 man-years of designing, including the manufacturing of the high-density Complementary Metal Oxide Semiconductor process, giving the microprocessor high speed and low resistance and heat loss (Avtar, 302).

Because of all the work that was put into the MC68020 and its other related microprocessors, it is an extremely complex microprocessor. The MC68020 operates in two modes: the user mode (for application programs) and the supervisor mode (for the operating system and other special functions) (Mitchell, 155). The user and supervisor modes each have their own specific registers to operate their functions. The user programming model has 17 32-bit address registers and an 8-bit register (Mitchell, 155). The supervisor programming model has three 32-bit, an 8-bit and two 3-bit registers for small miscellaneous functions (Mitchell, 155).

All of these registers within the two modes are split up into different groups which hold different information as usual, but this set-up of registers gives the microprocessor roughly twenty 32-bit registers of information storing capacity. The next family of microprocessors is Intel’s 80386 and 80486 families. The 80386 and 80486 were mostly, overall, better than the other microprocessors being made by the different companies in the industry at this time, simply because Intel is now the leading microprocessor producer in today’s market.

The 80386 was a product that evolved from Intel’s very first microprocessor, the 8-bit 8080 (Mitchell, 85). Next came the earlier-mentioned 16-bit 8086. The reason why Intel did so well in the market for microprocessors was that every microprocessor they made was compatible with the previous and future ones (Mitchell, 85). This means that if a piece of software worked on the 8080 then it worked on the future microprocessors and vice versa. Not only did Intel look forward, but they looked back.

The main difference between the 80386 and the other 32-bit microprocessors is the added feature of a barrel shifter (Mitchell, 88). The barrel shifter allowed information to be shifted by multiple bit positions within a register in a single cycle (Mitchell, 88). The microprocessor contains 8 general purpose 32-bit registers, but with the barrel shifter that is increased to the equivalent of a 64-bit microprocessor. For the most common 20 MHz 80386 microprocessor the run time for each cycle is 59 nanoseconds, but for a 33 MHz microprocessor the cycle time is reduced to 49 nanoseconds.
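
To illustrate what a barrel shifter does, here is a minimal Python sketch of a 32-bit rotate: the point is that the shift amount can be any number of bit positions and the result is still produced in one operation (in hardware, one clock cycle). The function name and example values are mine, for illustration only:

```python
# Minimal model of a 32-bit barrel rotate (rotate-left by n bits).
MASK32 = 0xFFFFFFFF

def rotate_left32(value, n):
    n %= 32
    # A barrel shifter produces this result in a single cycle,
    # regardless of how large n is.
    return ((value << n) | (value >> (32 - n))) & MASK32

x = 0x80000001
print(hex(rotate_left32(x, 1)))   # 0x3
print(hex(rotate_left32(x, 4)))   # 0x18
```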

The next 32-bit microprocessors on the market are AT&T’s WE32100 and WE32200 (Mitchell, 5). These microprocessors also needed six peripheral chips in order to run, termed: Memory Management Units, floating point arithmetic, Math Acceleration Units, Direct Memory Access Control, and Dynamic Random Access Memory Control (Mitchell, 5). These peripheral chips, apart from the microprocessor itself, all play an important part in processing the data that comes through the microprocessor.

The difference between this microprocessor and the others is that the WE32200 can address information beyond the 32-bit range with the help of a disk that works as a slow form of memory (Mitchell, 9). The WE32200 microprocessor runs at a frequency of 24 MHz (Mitchell, 9). The 16-bit and 32-bit microprocessors are a mere page in the great book of processor history. There will be many new and extremely different processors in the near future. A tremendous amount of time and money has been put into the making and improving of the microprocessor.

Improvements and investments of billions of dollars continually go toward the cause of elaborating the microprocessor. The microprocessor will continue to evolve for the better until the time when a much faster and more efficient electronic device is invented. This in turn will create a whole new and powerful generation of computers. Hopefully this paper has given the reader some insight into the world of microprocessors and how much work has been put into the manufacturing of the microprocessor over the years.

Evolution Of Technology

As every day passes we become more and more a globalized society. With this ongoing cycle we come across a vast multitude of impasses. One of the main ideas leading toward this “global paradox” is the concept of the global mindset. In this paper we will discuss all of the aspects of the global mindset: what it is, how it helps people live productively and successfully in the globalizing society, and how to develop an effective global mindset. Having a global mindset is a crucial competence for most businesses’ futures. What crucial competence means is the most sought-after characteristic.

Any level of manager that does not act with a global strategy will be left in the dust in today’s globalizing markets. So what is a global mindset? Before we discuss what a global mindset is we must look at the reasons why we need a global mindset, so we can get a clearer picture of what we actually need. The world is becoming more interconnected and there have been recent changes in the world political systems, such as the fall of the Berlin Wall and the collapse of the Soviet Union, as well as revolutionary advances in communication technology.

The implications for higher education in this changing world scene are significant, as the new global workplace, driven by the up-and-coming information technology (IT) area, has made communication in daily life increasingly multinational and multicultural (Kim 617). Informal education is also a way to start. By this we mean that you don’t have to go to formal classes to learn. Just by paying attention to people from other cultures in everyday life we can enlarge our global mindset.

In a class offered at the University of Rhode Island, BUS/COM 354, International Business Communication Exchange, students work in teams and as individuals with students overseas. In an article written by Professor Chai Kim, who teaches this class, it is stated, “More than ever, students must be trained to work with partners across cultural and natural borders. To adequately prepare each student for the next century, educators must develop strategies to assure not only the mastery of abilities in functional areas of business and technology but also the command of intercultural communication skills.

Accomplishment of this goal is one of the biggest challenges facing institutions of higher education today” (Kim 617). This quote exemplifies the need for the global mindset and gives a concise outline of what it is. This semester in Professor Kim’s BUS/COM 354 class, students engaged in an e-mail debate with students from Braunschweig University in Germany and also in an e-mail discussion with students from Bilkent University in Ankara, Turkey. We found a lot of information on the global mindset; however, we did not find a concrete definition.

But we did find a definition of mindset: “Mindset is the perception filter through which we see the world” (Chen and Starosta). So what we did was pool all of our individual information and try to come up with a definition in our own terms. What we came up with is: “Global mindset is the ideology that one must take with him or her into today’s society, not necessarily business, but life in general. It incorporates intercultural sensitivity, intercultural awareness, and cultural diversity knowledge.

It reduces ethnocentrism and eliminates parochialism, using a broad range of vision so you can view yourself not as part of a singular nation among many nations, but as a member of one global nation.” Now that we have a definition, we can go back to the reasons that we need a global mindset. The global mindset is possibly most widely seen in an institution like the World Trade Organization. The WTO, in short, is responsible for reducing taxes and tariffs, which in turn opens up global business markets.

Here is a brief look at some statistical information that shows how the world is financially diverse and how money distribution is very unequal. The gap between the rich and the poor is ever-widening. In 1960, the 20 per cent of the world’s population living in the richest countries had 30 times the income of the poorest 20 per cent. By 1997, the richest were 74 times richer (Balls and Peel 1). The World Trade Organization attempts to shorten this gap by opening up trade barriers. However, there are many people out there with closed mindsets who do not want these trade barriers opened.

Surprisingly, many of these people are in the United States. It is probably true that as the World Trade Organization’s goals become more attainable there will be some U.S. jobs and money lost. One must realize, though, that the amount of worldwide jobs and income earned by the lifting of these barriers will exceed the U.S. losses tenfold. There were demonstrators at the WTO conference last week in Seattle, WA. People dressed up as sea turtles in protest of the fact that if things go as planned for the WTO, one of the trade results would be the U.S. opening seafood trade with Malaysia.

Malaysian shrimp fishermen have nets that kill sea turtles. The important question is: in the grand scheme of things, what is more important, the life of a sea turtle or the Malaysian fisherman being able to put food on his family’s table? These demonstrators, although their cause is very noble, are not looking at this issue through globalized eyes. The example just described shows us a possible future look at a globalized society, but there are many more intricacies needed to give us a global mindset.

A decline of communism must occur for a real step to be taken. China is one of the few communist nations left on earth, yet its population is tremendous: there are well over one billion people in China. As of now, or shall we say three weeks ago, the U.S. did not trade with China, and vice versa. The Chinese market is one with such tremendous potential and capability; it just needs to be tapped. The U.S. and China talked trade and came to an agreement, one that had not been reached in thirteen years of on-and-off negotiations (Eckholm). So what does this have to do with global mindset?

One might say, what the heck does Malaysian shrimp fishing have to do with the global mindset? These examples are of real-life events taking place now that show very clearly the path and direction our society is traveling in. You need concrete examples to give you some idea of how to get on this path. This leads us into the next part, about what it takes from you, me, or a top-level corporate manager to have a global mindset. The first and definitely the most important thing to have when trying to begin or expand your global mindset is “open-mindedness.”

It seems too simple to be the most important detail in such a complex topic; however, you can’t get anywhere without being open-minded. Once you’ve become open-minded you must put all of your ethnocentric beliefs aside and totally eliminate parochialism. By this we mean that everyone has ethnocentric beliefs. For example, if you have a valid driver’s license in the United States, you drive on the right side of the road. However, in the United Kingdom you drive on the left side of the road.

You may think that your way is the best way, but in order to function in the United Kingdom, you must drive on the left side of the road. This is a mild example of ethnocentrism, the belief that my way is the best way. Unfortunately, parochialism also exists. This is the belief that my way is the only way. A person with this attitude would refuse to drive a car in the United Kingdom. Secondly, we need experience. This is key to developing a global mindset. You don’t develop a global mindset by sitting on your couch in Nowheresville, USA.

Get out there and interact with other people. A good example of this experience is through education, like we discussed earlier pertaining to the cross-cultural communication experienced in BUS/COM 354 at URI. Another very important aspect of defining your global mindset is job experience. Almost all of us who are working Americans work with someone from a different culture. Not only can you work with them, you can learn from them. If you are in an employment situation where you do work with someone from a different culture, do you work well together?

If they don’t speak English, can you still communicate with them? These are things that we need to think about in all working situations. If you get a job and realize that many of your co-workers are from different cultures, it is important that you act in a culturally synergistic fashion, or at least make an attempt to interact with them on a daily basis by using part of their culture. If you are the first one to take this step, often they will follow you and a culturally synergistic level can be reached.

If they make the first step, then it is up to you as a well-rounded person with a global mindset to follow in their footsteps. It is important for us now to take a look at why it is important to have a global mindset in today’s business world, and then examine some managerial tactics to define this mindset. The number one thing is that you don’t want to be left behind while other, more “globalized” individuals take the jobs and money that you could have. Michael Hick is a speaker on globalization and the importance of a global mindset.

We are going to take a brief look at some of the ideas he poses, which can be found on his website at www.michaelhick.com. He first states the importance of a business global mindset: “Having Global Mindset is the crucial competence of your business future. Any level of Manager who does not act with a Global strategy will be left in the slipstream as business hurtles across national frontiers in the decades ahead.” Nothing could be closer to the truth. It is very obvious to see how, without the global mindset, your business will not function in today’s market.

Michael Hick also has this to say about the global economy: “Your people need to have awareness of Global issues to understand the events which will affect them and their families in the future. The Global Age is here. We are all linked now. Business is going global at break-neck speed and suddenly our lives, attitudes, belief systems and jobs depend on our having ‘Global Mindset’.” We definitely agree with Michael Hick on this topic and see the world market growing at an exponential rate. Expanding your global mindset will at the same time give you effective cross-cultural communication skills.

Not all cultures have the same meanings for all the words that are in the English language. Moreover, the United States is a low-context culture, and communication tactics can vary tremendously from a high-context culture such as Japan (Deresky). The difference between high-context and low-context cultures is that in a low-context culture words are used to explain, whereas in a high-context culture the explanation of many things is left up to body language and what the sender thinks the receiver will think. This concept goes hand in hand with uncertainty avoidance.

In the United States we have high uncertainty avoidance (Deresky). This means that we like to know all there is to know about certain things, such as a business contract. Germany is like this also; however, many Middle Eastern countries have low uncertainty avoidance (Adler 56). This means that they don’t need everything written on paper, and many agreements may be verbal. Just having a minor grasp on these concepts, such as you would pick up from this class, gives you a much bigger global mindset. “Employee communication is becoming increasingly important to global corporations in their quest for efficiency and effectiveness.

But it is proving ever more difficult as they grow and change shape. Companies face a number of deeper issues in deciding how best to communicate with staff across borders and at different levels” (Kessler 1). The fact that you may possess some of this global knowledge can make you a more marketable person. After all, that should be one of your main goals: to make yourself as marketable as possible. By this we mean, as stated above, that if you were going to a business negotiation in Saudi Arabia and you showed up with ten pages of your policy, you might look foolish.

That could be the worst possible thing to have happen. However, if you know that in Middle Eastern culture many policies are unwritten, you could show up to the negotiation with all of the information in your head and greatly impress your soon-to-be business partners. You need this global mindset to keep yourself “one up” on everyone else. Although it may be true that the United States dominates the global business society, it is a very poor practice to have the attitude, “I’m an American and they’ll do business my way and in my language.”

To think like this is to use a closed-minded, ethnocentric approach to business. Although your partners might very well be prepared to work under American terms, the globally minded person would say to himself, “I know that they will probably be prepared to do business American style; however, I am going to present myself with the best combination of both styles, in a culturally synergistic fashion, and that might impress them.” There are very few standardized methods of using the global mindset, because it is a relatively new field.

Moreover, it is growing tremendously and tactics change ever so quickly. It is good, though, that some people are coming up with new ideas. According to a student at DePaul University who studies international business, a global mindset has the following levers (www.ibs.depaul.edu):

- Boards must be a mix of nationalities
- Members must have two or more language skills
- There must be cross-border business teams
- Reward international experience
- Optimize local and foreign performance
- Develop global marketing managers

The items stated here are very precise; however, items like these are almost never expected from entry-level business employees. The list is worth a look because it can show what a corporate international management team might have on its skills list. Companies like IBM, Microsoft, and Xerox have international management and marketing teams with skills like those listed above or even more advanced. We have discussed in detail the reasons why it is important to have a global mindset, but there are those out there who fear the globalization of the United States.

There are also those who think many will get hurt in this globalization process. In the book Global Village or Global Pillage, it is discussed in depth that the globalization of the world will leave those who are poor and suffering right now even poorer and more suffering as globalization grows (Bretcher and Costello, 142). This goes along with Darwin’s theory of the survival of the fittest. It seems easy to talk about this when you are discussing animals, but it is a lot tougher to rationalize when human lives come into play.

In another book, American Patriotism in a Global Society, we see examples of how many Americans want just America to be on the top pillar of the global network. This book argues that the transformation of our world into a global society is causing a resurgence of tribalism at the same time that it is inspiring the ideology of political holism, the understanding of human society as an evolving global system of interdependent individuals, cultures, and nations (Craige 5).

To simplify, this book’s main idea is that there is an underlying battle going on, not necessarily between nations to be the best, but inside individuals, most of whom have tribalistic instincts. It is hard to do things and participate in things that you don’t feel accustomed to, which drives us into the conclusion of this paper. Those who want to succeed in today’s global society do have to have a global mindset, but it is much bigger than that. One must excel in what he or she does and give one hundred and ten percent effort all of the time.

Of course, some people get lucky and get jobs handed to them, but for the vast majority of us it is a race, a race to the end of the path that we decide to take. Not everyone wins that race, and maybe that is not important. In today’s globalizing society it is hard to feel like you are even part of the race. You may feel like you are doing well and then someone runs right past you and you don’t even know what happened. We can’t control the rate at which the global society is growing, and we wish we could say that everything and everyone will turn out a winner or at least happy. Yet this is untrue.

Electronic Voting and What Should be Done

There has been a lot of talk about the new computer systems that cast election votes. Ideally, using electronic equipment has many advantages, but there are disadvantages hiding in the cave, ready to attack. We have all seen electronic equipment often work as expected, but more importantly, it is not uncommon for electronic equipment to fail, and when that kind of failure reaches voting, miscounting is simply unacceptable.

I think the best way to solve this type of problem is to try to make the voting machines work without fail but never to assume they won’t fail. As we’ve seen from the arrogance of the engineers of the Titanic or from the 2004 New York Yankees, just because something looks and sounds workable, we should never assume these machines will do what they should. By this, I don’t mean the system should fail completely, but we should design the system to constantly check itself to ensure no errors have been made.

In addition, the system should be user-friendly so that, at least from the user’s point of view, there are no problems with confusion or misinterpretation. Overall, making an e-voting system work requires the engineer to consider the logical, defensive (security against hacking), and personal standpoints of design, and to do so in a sensitive, introspective manner. First and foremost, the system should be ethical. What this means is that the system should be created to an acceptable and mainstream protocol.

Ethics means different things to different people, but we can’t satisfy all of these morals on one system, since some might contradict one another, so we need to decide on what the majority would find acceptable. Right off the bat, it’s important to prevent hacker attacks, because people want a fair election and not a tailored one. We go to vote to voice our own opinion and not that of someone else.

Secondly, it’s important to let the public know what these voting machines do and how they’re secured, letting the public know that the e-voting companies care about their security and that these voting machines are engineered with exhaustive research on how to keep them secure. Lastly, the user interface should be unbiased (it shouldn’t look like one candidate is better than the other). It should also be easy to understand, so as not to intimidate voters.

I think there should also be the option to choose between the electronic voting system and traditional paper ballots, having both systems operating in one polling place. This may allow voters who don’t believe in electronic equipment, or aren’t used to using it, to take an alternative option. We can discuss how to get an e-voting system to appeal to people all we want, and “evangelize” until we’re exhausted, but I doubt that most of this would work on stubborn, one-sided people, and more importantly, we shouldn’t force people to use something they feel uncomfortable with.

Using the banking system as an example, you can withdraw money from an ATM or by going to a teller. My grandmother, for example, doesn’t use ATMs at all because she doesn’t feel comfortable interacting with electronic equipment. On the voting side, this dual option may or may not be needed, because some areas may overwhelmingly prefer paper ballots over electronic voting or vice versa; in that case the polls would have to accommodate. If electronic voting systems are actually used, it is important that the programming is acceptable and safe.

This is why I agree that private e-voting organizations should either share their source code with top security departments in the government or have the government regulate how the security department in the organization does business. Ideally, it doesn’t have to be checked by the government directly, as long as the private e-voting organization is checked by security professionals of some kind working outside of the company. This allows some sort of checks and balances so that these companies don’t manufacture poorly secured equipment.

As soon as a machine is certified, it can be manufactured. On top of this, there should be an individual who takes charge and watches out for any employees tampering with the software while the equipment is being manufactured. This is important, since in the past there has been tampering with software on lottery machines, and this can’t happen with e-voting machines. As far as the internal operation of the e-voting machine goes, I think three words say it all: checksums, tickets, and encryption.

Encryption is mainly important for voter privacy, because we don’t want hackers interpreting the messages sent from the machine to the database server. The choice of encryption should be an algorithm with the best reputation among secured connections, such as RSA. In addition, checksums are important because we need to validate whether hackers have changed or added code in the machine; we should never assume that the software won’t be tampered with. The checksum would only validate correctly on the original copy of the software.

The checksum algorithm, like the encryption, should be reputable, such as MD5, and both the checksum and the e-voting software should be burned onto static ROM chips, which shouldn’t be changeable. If suspicion arises, there should be a way to plug a device that stores a backup copy of the checksum into the e-voting machine to check the software. Lastly, e-voting systems should use tickets, or signatures that identify each voter uniquely, and each vote should be logged with the user’s token in memory so that the voter can’t vote twice.
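As a rough illustration of the checksum and ticket ideas just described, here is a minimal sketch. It is not an actual e-voting implementation; the firmware bytes, the checksum “stored in ROM”, and the cast_vote function are hypothetical, and MD5 appears only because it is the algorithm named above.

```python
import hashlib

# Hypothetical values: the machine's software image and the MD5 checksum
# that would have been burned onto ROM at certification time.
SOFTWARE_IMAGE = b"...voting machine firmware bytes..."
ROM_CHECKSUM = hashlib.md5(SOFTWARE_IMAGE).hexdigest()

vote_log = {}   # ticket -> candidate; the in-memory record of cast votes


def software_untampered(image: bytes) -> bool:
    """Recompute the checksum and compare it to the copy stored in ROM."""
    return hashlib.md5(image).hexdigest() == ROM_CHECKSUM


def cast_vote(ticket: str, candidate: str) -> str:
    """Record at most one vote per unique ticket, after checking the software."""
    if not software_untampered(SOFTWARE_IMAGE):
        return "refused: software failed checksum validation"
    if ticket in vote_log:
        return "refused: this ticket has already voted"
    vote_log[ticket] = candidate
    return f"recorded vote for {candidate} (receipt: {ticket})"


print(cast_vote("ticket-001", "Candidate A"))   # recorded
print(cast_vote("ticket-001", "Candidate B"))   # refused: already voted
```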

Although no security measure is entirely secure from clever hackers, keeping security very strict would prevent many attempts. Once you’ve voted from an e-voting machine and somehow passed all the security involved, I think it would be appropriate to give the user a receipt showing exactly what was stored in memory, because, as I said earlier, we should never assume everything will work, and if the voter can verify by eye what was stored, this would allow corrections if needed.

In addition, this type of system could be used for hand recounts or to check user errors, which would probably account for 90% of all complaints, given that the programming was planned properly. Yes, it may be possible for deceiving information to be printed out, making the vote look correct, but if all the aforementioned methods of checking, security, and logging are in place, and provided that everything is executed well, an electronic problem at this point should be very rare. Comparatively, this should be no less secure than paper ballots.

Mind and Machine

Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise up to meet the free-market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged. Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness.

Exhaustive debate on consciousness and the possibilities of consciousness in machines has adequately, in my opinion, revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness. In John Searle’s collection of lectures, Minds, Brains and Science, the argument centering on the mind-body problem alone is sufficient to convince a reasonable person that there is no way science will ever unravel the mysteries of consciousness. Key to Searle’s analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses.

Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, present technology constrains this achievement. The Weak AI Theorists (WATs), almost conversely to the SATs, believe that if a machine performs functions that resemble a human’s, then there must be a correlation between it and consciousness. To them, there is no technological impediment to thinking machines, because our most advanced machines already think.

It is important to review Searle’s refutations of these respective theorists’ propositions to establish a foundation (for the purpose of this essay) for discussing the applications of Artificial Intelligence, both now and in the future. The Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. Given enough computational power, memory, inputs, and so on, machines will be able to think, if you believe this proposition.

Proposition two, in essence, relegates the human mind to the software bin. Proponents of this proposition believe that humans just happen to have biological computers that run “wetware” as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced, through context-input manipulation, that a machine is intelligent, then it is. Proposition four is where the ends meet the means. It purports that when we are able to finally understand the brain, we will be able to duplicate its functions.

Thus, if we replicate the computational power of the mind, we will then understand it. Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to “understand” syntax, but not the semantics, or meaning, communicated thereby. Essentially, he makes his point by citing the famous “Chinese Room Thought Experiment.” It is here that he demonstrates that a “computer” (a non-Chinese speaker, a book of rules, and the Chinese symbols) can fool a native speaker, but have no idea what it is saying.

By proving that entities don’t have to understand what they are processing in order to appear to understand, Searle refutes proposition one. Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices; it is thus a matter of science fiction rather than a plausible theory. A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation.

The Chessmaster 4000 example is also an adequate refutation of Professor Simon’s fourth proposition: “you can understand a process if you can reproduce it.” Just because the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn’t mean that the computer is indeed intelligent. There are five basic propositions that fall into the Weak AI Thesis (WAT) camp. The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions.

The second WAT proposition states that if a machine’s output, when compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be intelligent. Proposition three concerns itself with the similarity between how humans solve problems and how computers do so: by solving problems based on information gathered from their respective surroundings and memory, and by obeying rules of logic, it is claimed that machines can indeed think. The fourth WAT proposition deals with the fact that brains are known to have computational abilities and that a program therein can be inferred.

Therefore, the mind is just a big program (“wetware”). The fifth and final WAT proposition states that, since the mind appears to be “wetware”, dualism is valid. Proposition one of the Weak AI Thesis is refuted by gazing into the past. People have historically believed the state-of-the-art technology of their time to have elements of intelligence and consciousness. An example of this is shown in the telegraph system of the latter part of the last century. People at the time saw correlations between the brain and the telegraph network itself.

Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time doesn’t mean that it has any concept of counting or the meaning of time. Defining the nature of rule-following is where the weakness lies with the fourth proposition, which again fails to account for the semantic nature of symbol manipulation; referring to the Chinese Room Thought Experiment best refutes this argument. By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy.

Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactical information with seemingly semantic output, but again, is it really cognizant? We have, through Searle’s arguments, amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks. Technologically, there is hope for some of the proponents of the Strong AI Thesis. This hope lies in the advent of neural networks and the application of fuzzy logic engines.

Fuzzy logic was created as an extension of Boolean logic designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1964, fuzzy logic enabled the modelling of the uncertainties of natural language. Dr. Zadeh regards fuzzy theory not as a single theory, but as “fuzzification”, or the generalization of specific theories from discrete forms to continuous (fuzzy) forms. The meat and potatoes of fuzzy logic is in the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp.

Conventional Boolean logical processes deal well with the binary nature of lights: they are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in. Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data. Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. These systems take collections of data in fuzzy rule format.

According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow this simple form: “if x is low and y is high, then z is medium”. Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on); z is the output of the inference based upon the degree of fuzzy logic application desired. It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The set of rules in a fuzzy logic expert system is described as the rulebase.
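To make that rule format concrete, here is a minimal sketch of how such a rule might be evaluated. The triangular membership functions, the 0-to-1 input scale, and the crude defuzzification step are assumptions for illustration only, not part of any particular fuzzy expert system.

```python
# Minimal fuzzy-inference sketch for the rule "if x is low and y is high,
# then z is medium", on inputs scaled 0.0 (off) to 1.0 (fully on).
# The membership functions below are assumed for illustration.

def low(v: float) -> float:
    """Degree to which v is 'low' (1.0 at 0, falling to 0.0 at 1)."""
    return max(0.0, 1.0 - v)

def high(v: float) -> float:
    """Degree to which v is 'high' (0.0 at 0, rising to 1.0 at 1)."""
    return max(0.0, min(1.0, v))

def rule_z_is_medium(x: float, y: float) -> float:
    """Fire the rule: AND is taken as the minimum of the two memberships."""
    firing_strength = min(low(x), high(y))
    z_medium = 0.5                      # assumed centre of the 'medium' set
    # A crude defuzzification: scale the 'medium' output level by how
    # strongly the rule fired.
    return firing_strength * z_medium

# Example: the lamp input x is mostly off, the reference input y is mostly on.
print(rule_z_is_medium(x=0.2, y=0.9))   # ~0.4, a moderate output level
```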

Technology and the Future of Work

Every society creates an idealised image of the future – a vision that serves as a beacon to direct the imagination and energy of its people. The Ancient Jewish nation prayed for deliverance to a promised land of milk and honey. Later, Christian clerics held out the promise of eternal salvation in the heavenly kingdom. In the modern age, the idea of a future technological utopia has served as the guiding light of industrial society.

For more than a century, utopian dreamers and men and women of science and letters have looked for a future world where machines would replace human labour, creating a near-workerless society of abundance and leisure (J. Rifkin 1995, p. 42). This paper will consider developments in technology, robotics, electronic miniaturisation, digitisation and information technology, with their social implications for human values and the future of work. It will argue that we have entered post-modernity, or post-Fordism, a new-age technological revolution which profoundly affects social structure and values.

Some issues that will be addressed are: the elimination of work in the traditional sense, longevity, early retirement, the elimination of cash, the restructuring of education and industry, and a movement to global politics, economics and world government. In particular this paper will suggest that the Judaeo-Christian work ethic, with society’s goal of full employment in the traditional sense, is no longer appropriate, necessary or even possible in the near future, and that the definition of work needs to be far more liberal.

It argues that as a post-market era approaches, both government and society will need to recognise the effects of new technology on social structure and re-distribute resources; there will need to be rapid development of policies to assist appropriate social adjustments if extreme social unrest, inequity, trauma and possible civil disruption are to be avoided. Yonedji Masuda (1983) suggests we are moving from an industrial society to an information society and maintains that a social revolution is taking place.

He suggests that we have two choices: ‘Computopia’ or an ‘Automated State’, a controlled society. He believes that if we choose the former, the door to a society filled with boundless possibilities will open; but if the latter, our future society will become a forbidding and horrible age. He optimistically predicts our new future society will be ‘computopia’, which he describes as exhibiting information values where individuals will develop their cognitive creative abilities and citizens and communities will participate voluntarily in shared goals and ideas.

Barry Jones (1990) says we are passing through a post-service revolution into a post-service society, which could be a golden age of leisure and personal development based on the cooperative use of resources. Jeremy Rifkin (1995) uses the term ‘The Third Industrial Revolution’, which he believes is now beginning to have a significant impact on the way society organises its economic activity.

He describes it as the third and final stage of a great shift in economic paradigm, and a transition to a near-workless information society, marked by the transition from renewable to non-renewable sources of energy and from biological to mechanical sources of power. In contrast to Masuda, Jones and Rifkin, Rosenbrock et al. (1981) delved into the history of the British Industrial Revolution, and they concluded firmly that we are not witnessing a social revolution of equivalent magnitude, because the new information technology is not bringing about new ways of living.

They predicted that we are not entering an era when work becomes largely unnecessary and that there will be no break with the past; rather, we will see the effect of new technology over the next 20 years as an intensification of existing tendencies and their extension to new areas. I suggest that Rosenbrock may come to a different conclusion with the benefit of the hindsight of changing lifestyles 15 years later, such as the persistent rise in unemployment and an aging society.

The population is aging, especially in developed countries, and this will add significantly to a possible future lifestyle of leisure. Most nations will experience a further rapid increase in the proportion of their population 65 years and older by 2025. This is due to a combination of the post-war baby boom and advances in medicine, health and hygiene technology, together with the availability and spread of this information. Governments are encouraging delayed retirement, whereas businesses are seeking to reduce the size of their older workforce.

The participation rates of older men have declined rapidly over the past forty years with the development of national retirement programmes. In many developed countries the number of men 65 and older who remain in the workforce has fallen below ten per cent. Due in part to technological advances, there are more older people and they are leaving the workforce earlier. Thus this body of people will contribute to the growing numbers of people with more leisure time.

Professor Nicholas Negroponte (1996) of the MIT Media Lab points out that, in percentage per capita terms, it is people under seventeen years of age and over fifty-five who are the greatest users of the Internet, and that the Internet and other information technologies encourage democracy and global egalitarianism. Furthermore, he envisions a new generation of computers so human and intelligent that they are thought of more as companions and colleagues than as mechanical aids.

Jones (1990) points out a number of elements relating to the adoption of new technology that have no precedent in economic history and suggests that there is a compelling case for the rapid development of policies to assist appropriate social adjustments. He points out that manufacturing has declined as the dominant employer and that there has been a transition to a ‘service’ or post-industrial economy in which far more workers are employed in producing tangible and intangible services than in manufacturing goods.

The cost of technology has fallen dramatically relative to the cost of human labour. Miniaturisation has destroyed the historic relationship between the cost of labour and the cost of technology, allowing exponential growth with insignificant labour input, which is leading to the reduction of labour in all high-volume process work. Sargent (1994) points out that in Australia during the last decade the rich have become richer and the poor poorer: the top 20 per cent of households received 44 per cent of national income in 1982, and by 1990 this had risen to 47 per cent.

The top 1 per cent received 11 per cent of income in 1982, and this rose to 21 per cent in 1990. Meanwhile, unemployment continued to increase. Jones (1990) further points out that the new technology has far greater reliability, capacity and range than any which preceded it. Microprocessors can be directed to do almost anything, from planning a school syllabus and conducting psychotherapy to stamping out metal and cutting cloth.

It is cheaper to replace electronic modules than to repair them, and the new technology performs many functions at once, generates little heat or waste, and will work twenty-four hours a day. The making and servicing of much precision equipment, which once required a large skilled labour force, has been replaced by electronic systems that require fewer workers. The relationship between telecommunications and computers multiplies the power of both; the capacity for instant, universal communication is unprecedented, and consequently the ability of any individual economy to control its own destiny is reduced.

All advanced capitalist nations and many third-world and communist blocs are now largely interdependent; this has led to an international division of labour and the growth of multinational corporations. The global economy is rapidly taking over from individual nations. The adoption of each new generation of technology is accelerating, and each generation is rapidly becoming cheaper than its predecessor. Technologies developed in the 1960s have seen rapid rates of development, adoption and dissemination.

Less developed countries can now acquire the new technologies due to the rapid decrease in cost, and the combination of their low wages and the latest technology makes them formidable competitors in the global market. Almost every area of information-based employment, tangible services and manufacturing is being profoundly influenced by new technology. Jones (1990) notes that few economists have addressed the many social implications that stem from the development of science and technology.

Most economists’ thinking is shaped by the Industrial Revolution, and they are unable to consider the possibility of a radical change from the past; they give no hint that Australia has passed through a massive transition from a goods-based economy to a service base. Attempts to apply old remedies to new situations are simply futile. Jenkins (1985) disagrees with Jones and argues on behalf of the traditional economic model, suggesting that it will continue to work well in the new era and that the facts do not support any causal relationship between automation, higher productivity, and unemployment.

He claims that it cannot be emphasised too strongly that unemployment does not stem from the installation of new technology. He says it is the failure to automate that risks jobs, and that the introduction of new technology will increase the total number of jobs. Further, he suggests that the primary reason for introducing new technology such as computer-controlled robots is to reduce costs and to improve product quality, and that lower costs mean lower prices.

This results in increased demand for goods and services, which in turn generates higher output, employment and profits. He suggests that higher profits induce higher investment and research and development expenditure, while the domestic producers of robotics and microelectronic-based equipment increase output and employment. He sees the greatest problem simply in the need for occupational restructuring of employment, as the need for software experts, computer programmers, technicians and engineers is likely to rise sharply.

Rifkin (1995), like Jones, believes that the old economic models are inappropriate in the ‘Third Industrial Revolution’ and describes views similar to Jenkins’s as “century-old conventional economic wisdom” and “a logic leading to unprecedented levels of technical unemployment, a precipitous decline in purchasing power, and the spectre of a worldwide depression.” It is questionable whether Jenkins’s solution of re-training will be able to replace all displaced workers.

Educator Jonathon Kazol (1985) points out that education for all but a few domestic jobs starts at the ninth-grade level, and for those workers the hope of being retrained or schooled for a new job in the elite knowledge sector is without doubt out of reach. Even if re-training and re-education on a mass scale were undertaken, the vast numbers of dislocated workers could not be absorbed, as there will not be enough high-tech jobs available in the automated economy of the twenty-first century. A British government-backed study by Brady and Liff (1983) clearly supported this view.

They concluded that jobs may be created through new technology, but it will be a very long time before the gains can offset the losses from traditional industries. Even the neo-classical economists continue to subscribe to traditional economic solutions, yet these have been met with stiff opposition over the years. In Das Kapital, Marx (McLelland 1977) predicted in 1867 that increasing the automation of production would eliminate the worker altogether, and believed the capitalists were digging their own graves, as there would be fewer and fewer consumers with the purchasing power to buy the products.

Many orthodox economists agreed with Marx’s view in many respects but, unlike Marx, supported the notion of ‘trickle-down economics’: they said that by ‘releasing’ workers, the capitalists were providing a cheap labour pool that could be taken up by new industries, which in turn would use the surplus labour to increase their profits, which would in turn be invested in new labour-saving technology that would once again displace labour, creating an upward cycle of prosperity and economic growth.

Such a viewpoint may have some validity in the short term, but one must consider the longer-term effects of such a cycle; it is questionable whether it could be sustained. Another important question is whether consumerism will continue unabated, and whether it is a normal human condition to see happiness and salvation in the acquisition of goods and services. The word “consumption” until the present century was steeped in violence. In its original form the term, which has both French and English roots, meant to subdue, to destroy, to pillage. Compared with the mid-1940s, the average American is consuming twice as much now. The mass consumption phenomenon was not the inevitable result of an insatiable human nature or a phenomenon that occurred spontaneously, quite the contrary. Business leaders realised quite early that they needed to create the ‘dissatisfied customer’ and to make people ‘want’ things that they had not previously desired (Rifkin 1996).

Nations throughout the world are starting to understand the ill effects that production has on the ‘natural’ environment, and that the acquisition of goods and services has on the psyche. With more people having less money, and a trend towards a lifestyle that emphasises quality rather than quantity, it is questionable whether consumerism will continue, or whether it is desirable that it should.

Science and technology’s profile grew to such an extent in the early part of this century in the United States that the supporters and proponents of technocracy were prepared to abandon democracy, favouring ‘rule by science’ rather than ‘rule by humans’, and advocated the establishment of a national body, a technate, that would be given the power to assemble the nation’s resources and make decisions governing the production and distribution of goods and services.

The image of technology as the complete and invincible answer has become somewhat tarnished in recent years, with technological accidents such as those which occurred at the nuclear power stations at Chernobyl and Three Mile Island, and with threats of nuclear war and environmental degradation increasing and coming to the fore. Yet the dream that science and technology will free humanity from a life of drudgery continues to remain alive and vibrant, especially among the younger generation.

During the 1930s, government officials, trade unionists, economists and business leaders were concerned that labour-saving devices and rising productivity and efficiency were worsening the economic plight of every industrial nation. Organised labour wished to share in the gains made by business, such as increased profits with fewer workers required. Workers joined together to combat unemployment by fighting to reduce the working week and improve wages, thus sharing the work and profits among the workers and providing more leisure time.

By employing more people at fewer hours, labour leaders hoped to reduce unemployment brought on by labour-saving technology, stimulate purchasing power and revive the economy. Clearly unions saw part of the answer to the problems resulting from technological change as lying in increased leisure time (Rifkin 1996). Unemployment is steadily rising; global unemployment has now reached its highest level since the great depression of the 1930s. More than 800 million people are now underemployed or unemployed in the world, while the rich become richer and the poor poorer.

Unemployment rates among school leavers in South Australia are as high as twenty-five per cent, and nine per cent for the rest of the community, which leads one to question whether the traditional economic model is working. Trade unions have pursued their response to unemployment throughout the years, with wages and salaries growing and the working week reduced; for example, in the UK the working week fell from eighty-four hours in 1820 to thirty-eight hours in 1996 (Jones 1990).

The typical government response to unemployment has been to instigate public works programmes and to manipulate purchasing power through tax policies that stimulate the economy and lower tax on consumption. It can be seen in Australia that governments no longer see this as the answer; in fact there is an opposite approach, with a strong movement for a goods and services tax to redistribute wealth, as proposed by the defeated Liberal Party of Andrew Peacock in 1992 and now being re-introduced. Many job creation schemes and retraining programmes are being abandoned by the new Australian Liberal Government of John Howard.

However, the power of the workers and unions in 1996 is severely restricted. The unions have lost the support of workers, as reflected in their falling membership, and can no longer use the threat of direct action with jobs disappearing fast. The Liberal Government passed legislation to limit collective bargaining, and unions’ power of direct action has become even more eroded and ineffective because global competition, the division of labour and automation have given companies many alternatives. Unions have been left with no option but to support re-training, whether they believe it is the answer to unemployment or not.

Today, it seems far less likely that the public sector, the unions or the marketplace will once again be able to rescue the economy from increasing technological unemployment. The technological optimists continue to suggest that new services and products resulting from the technological revolution will generate additional employment. While this is true, the new products and services require fewer workers to produce and operate, and certainly will not counteract the numbers made redundant through obsolete trades and professions.

Direct global marketing by way of the ‘Superhighway’, the ‘Internet’ and other forms of instant telecommunications is making thousands of middle-marketing employees obsolete. For example, the SA bank introduced phone banking some while ago; it is now the first bank in South Australia to trade on the Internet (http://www.banksa.com.au), and many rural banks are closing. Also, it has just been announced by the electoral commission that voting by telephone will be trialled next year, with enormous potential job loss.

The widely publicised information superhighway brings a range of products, information and services direct to the consumer, bypassing traditional channels of distribution and transportation. The number of new technical jobs created will not compare with the millions whose jobs will become irrelevant and redundant in the retail sectors. Jones (1990) notes that there is a coy reticence among those who believe that social structure and economics will continue as in the past to identify the mysterious new labour-absorbing industry that will arise in the future to prevent massive unemployment.

Jones believes that ‘industry X’, if it does appear, will not be based on conventional economic wisdom but is likely to be in areas where technology will have little application; he suggests it may be in service-based areas such as education, home-based industry, leisure and tourism. Despite Barry Jones’s predictions, most service industries are very much affected by new technology. Education is fast becoming resource-based, with students at primary, secondary, technical and tertiary levels expected to do their own research and projects independently of class teachers, with schools being networked and teaching delivered through video conferencing.

The conventional teacher is fast becoming obsolete, with the number of permanent teachers falling. There are numerous examples of workers in service industries being displaced by technology. Shop fronts such as banking, real estate, travel and many more are disappearing. Small retail food outlets continue to collapse with the growth of supermarkets and food chains organised around computer technology, and with on-line shopping from home. Designers of all types are being superseded by CAD computer design software. Even completely automated home computerised services, such as a hardware and software package called “Jeeves”, are now available.

Business managers and company directors are finding voice-activated laptop computer secretaries far more reliable and efficient than the human form. The New Zealand Minister for Information and Technology, Hon. Maurice Williamson MP, wrote the foreword for the paper ‘How Information Technology Will Change New Zealand’: On the threshold of the twenty-first century we are entering a period of change as far-reaching as any we have ever seen. Since the industrial revolution people have had to locate themselves in large centres where they could work with others, but now new technologies are rendering distance unimportant.

The skills that are needed in tomorrow’s society will be those associated with information and knowledge rather than the industrial skills of the nineteenth and twentieth centuries. Changing technology will affect almost every aspect of our lives: how we do our jobs, how we educate our children, how we communicate with each other and how we are entertained. As Williamson points out, with the explosion of technologies it is easy to lose sight of the larger patterns that underlie them.

If we look at the fundamental ways people live, learn and work, we may gain insights about everyday life. These insights are the basis for new technologies and new products that are making an enormous difference in people’s lives. Stepping back from the day-to-day research for new electronic devices, life can be seen as being fundamentally transformed. There is the development of a networked society: a pattern of digital connections that is global, unprecedented, vital and exciting in the way that it propels the opportunities for entirely new markets and leisure.

As people make digital technology an integral part of the way they live, learn, work and play, they are joining a global electronic network that has the potential to reshape many of our lives in the coming decade. In the future, technologies will play an even greater role in changing the way people live, learn, work and play, creating a global society where we live more comfortably, with cellular phones and other appliances that obey voice commands and with energy-efficient, economical and safe home environments monitored by digital sensors.

There will be “smart” appliances and vehicles that anticipate our needs and deliver service instantly. We are seeing portable communications devices that work without wires; software intelligent agents that sort and synthesise information in a personally tailored format; and new technologies that provide increased safety and protect our freedom, ranging from infra-red devices that illuminate the night to microwave devices that improve radar and communications.

People are also learning more efficiently, with interactive video classrooms that enable one-on-one attention and learning systems that remember each student’s strengths and tailor lesson plans accordingly. There are lap-top computers and desktop video clips that bring in-depth background on current events with instant access to worldwide libraries and reference books with full motion pictures.

People are working more productively, with “virtual offices” made possible by portable communications technologies and software that allows enterprise-wide business solutions at a fraction of the usual cost, in a shorter length of time, and with massive memory available at the desktop and laptop levels. There are “intelligent” photocopiers that duplicate a document and route it to a file, and simultaneous desktop video-conferencing from multiple locations, sending voice and data simultaneously over the same communications channel.

With the explosion of leisure activities available, people play more expansively. There are hundreds of movies available on demand at home, virtual-reality games, a growth in the number of channels delivered by direct satellite television, videophones that link faces with voices, interactive television for audience participation, instant access to worldwide entertainment and travel information and interactive telegaming with international partners (Texas Instruments 1996).

This paper has considered developments in electronic miniaturisation, robotics, digitisation and information technology, with their social implications for human values and the future of work. It has argued that we have entered a post-modern period and are entering a post-market era in which life will no longer be structured around work in the traditional sense; there will be greater freedom and independent living, paid employment will be de-emphasised and our lifestyle will be leisure-orientated.

I have argued that the social goal of full employment in the traditional sense is no longer appropriate, necessary or even possible, and that both government and society will need to recognise the effects of technology on social structure and re-organise resources to be distributed more equally if extreme social unrest, inequity, trauma and possible civil disruption are to be avoided. I foresee a scenario of a sustainable, integrated global community in which there will be some form of barter but cash will be largely eliminated; money will be ‘virtual’.

A minimal number of people will be involved in and enjoy some forms of high-tech activity, while the vast majority will have a vocation that is essentially creative and enjoyable, perhaps involving the arts and music, with a spirituality that involves deep respect and care for the natural world and new forms of individual and group interaction. There will be minimal forms of world central democratic government. Vast forms of infrastructure will no longer be required, as citizens will largely be technologically independent.

Most communication and interaction will be instant and conducted from home, office or public terminal. There will be new forms and ways of living, with new family structures that may consist of larger and smaller groups, and a comfortable, pleasurable and leisure-based lifestyle in which all the essentials and wants will be automatically provided through the processes of a largely self-sustaining and self-evolving technology.

Rifkin (1995) has a similar view, concluding that the road to a near-workerless economy is within sight and that it could head for a safe haven or a terrible abyss; it all depends on how well civilisation prepares for the post-market era. He too is optimistic and suggests that the end of work could signal the beginning of a great social transformation, a rebirth of the human spirit.

Molecular Switches Essay

We live in the technology age. Nearly everyone in America has a computer or at least access to one. How big are the computers you are used to? Most are about 7″ by 17″ by 17″. That’s a lot of space. These cumbersome units will soon be replaced by something smaller. Much smaller: we’re talking about computers based on lone molecules. As far off as this sounds, scientists are already making significant inroads into researching the feasibility of this. Our present technology is composed of solid-state microelectronics based upon semiconductors. In the past few years, scientists have made momentous discoveries.

These advances were in molecular-scale electronics, which is based on the idea that molecules can be made into transistors, diodes, conductors, and other components of microcircuits. (Scientific American) Last July, researchers from Hewlett-Packard and the University of California at Los Angeles announced that they had made an electronic switch out of a layer of several million molecules of rotaxane. “Rotaxane is a pseudorotaxane. A pseudorotaxane is a compound consisting of cyclic molecules threaded by a linear molecule. It also has no covalent interaction.

In rotaxane, there are bulky blocking groups at each end of the threaded molecule.” (Scientific American) The researchers linked many of these switches and came up with a rudimentary AND gate. An AND gate is a device which performs a basic logic function. As much of an achievement as this was, it was only a baby step. This million-molecule switch was too large to be useful and could only be used once. In 1999, researchers at Yale University created molecular memory out of just one molecule. This is thought to be the “last step down in size” of technology because smaller units are not economical.
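
The AND gate mentioned above implements the simplest of logic functions: its output is 1 only when both inputs are 1. Purely for illustration, its truth table can be sketched in a few lines:

    # Truth table of a two-input AND gate: output is 1 only when both inputs are 1.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, a & b)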

The memory was created through a process called “self-assembly,” in which computer engineers “grow” parts and interconnections with chemicals. (Physics News Update, 1999) This single-molecule memory is better than conventional silicon memory (DRAM) because it lives around one million times longer. “With the single molecule memory, all a general-purpose ultimate molecular computer needs now is a reversible single molecule switch,” says Reed (the head researcher of the team). “I anticipate we will see a demonstration of one very soon.” (Yale, 1999)

Reed was correct. Within a year, Cees Dekker and his colleagues at Delft University of Technology in the Netherlands had produced the first single-molecule transistor. Dekker won an innovation award from Discover magazine for the switch, which was also built from a lone molecule. The molecule they used was the carbon nanotube, a lattice of carbon atoms rolled up into a long, narrow tube about one billionth of a meter wide. These tubes can conduct electricity or, depending on how the tube is twisted, they can be semiconductors.

The semiconducting nanotube is the only active element in the transistor. The transistor works like its silicon relatives, but in much less space. Dekker did, however, emphasize that they had made only a prototype. “Although it is ‘a technologically usable device,’ he says, there’s still a long way to go. The next steps include finding ways to place the nanotubes at the right locations in an electronic circuit, probably by attaching chemical guides that bind only to certain metals.” (Discover) From there, we go back to Yale, where efforts were being put forth to make a better switch.

Mark Reed and his colleagues were at work on a different class of molecules. To make a switch, they inserted regions into the molecules that, when subjected to certain voltages, trapped electrons. If the voltage was varied, they could continuously change the state of the molecules from nonconducting to conducting, the requirements of a basic switch. Their device was composed of 1,000 nitroamine benzenethiol molecules in between metal contacts. One interesting development was the finding that these microswitches indeed followed Moore’s Law.

Moore’s Law says that each new transistor chip contains approximately twice as many transistors as its predecessor, and that chips come out 18 to 24 months apart. This describes a rising exponential curve in the development of transistors. Engineers can now put millions of transistors on a sliver of silicon just a few square centimeters in area. Moore’s Law also shows that even technology has its limits, as chips can get only so small and stay economically viable. (Physics News Update)
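
To see what that doubling rate implies, here is a small illustrative projection; the starting transistor count and the 24-month doubling period are assumptions chosen for the example, not figures from the sources cited above.

    # Rough Moore's Law projection: the transistor count doubles about every 18-24 months.
    count = 1_000_000          # hypothetical starting transistor count
    months_per_doubling = 24   # the slower end of the 18-24 month range
    for year in range(0, 11, 2):
        doublings = (year * 12) / months_per_doubling
        print(f"year {year}: about {int(count * 2 ** doublings):,} transistors")

After ten years at that pace the count has grown roughly 32-fold, which is the rising exponential curve the paragraph describes.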

Free electrons can take on energy levels from a continuous range of possibilities. But in atoms or molecules, electrons have energy levels that are quantized: they can take only one of a number of discrete values, like rungs on a ladder. This series of discrete energy values is a consequence of quantum theory and holds for any system in which the electrons are confined to a very small space. In molecules, electrons arrange themselves as bonds among atoms that resemble dispersed “clouds,” called orbitals. The shape of an orbital is determined by the type and geometry of the constituent atoms, and each orbital is a single, discrete energy level for the electrons.
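
The textbook “particle in a box” result, which is not given in the sources quoted here but is the standard illustration of this point, shows how confinement to a small region of width L forces the allowed energies onto discrete rungs:

    E_n = \frac{n^2 h^2}{8 m L^2}, \qquad n = 1, 2, 3, \ldots

Here h is Planck’s constant and m is the electron mass; because L appears squared in the denominator, shrinking the confining region spreads the rungs further apart, which is why quantization only becomes apparent at molecular scales.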

Even the smallest conventional microtransistors in an integrated circuit are still far too large to quantize the electrons within them. In these devices the movement of electrons is governed by physical characteristics–known as band structures–of their constituent silicon atoms. What that means is that the electrons are moving in the material within a band of allowable energy levels that is quite large relative to the energy levels permitted in a single atom or molecule. This large range of allowable energy levels permits electrons to gain enough energy to leak from one device to the next.

And when these conventional devices approach the scale of a few hundred nanometers, it becomes extremely difficult to prevent the minute electric currents that represent information from leaking from one device to an adjacent one. In effect, the transistors leak the electrons that represent information, making it difficult for them to stay in the “off” state. The standard methods of chemical synthesis allow researchers to design and produce molecules with specific atoms, geometries and orbital arrangements.

Moreover, enormous quantities of these molecules are created at the same time, all of them absolutely identical and flawless. Such uniformity is extremely difficult and expensive to achieve in other batch-fabrication processes, such as the lithography-based process used to produce the millions of transistors on an integrated circuit. The methods used to produce molecular devices are the same as those of the pharmaceutical industry. Chemists start with a compound and then gradually transform it by adding prescribed reagents whose molecules are known to bond to others at specific sites.

The procedure may take many steps, but gradually the pieces come together to form a new potential molecular device with a desired orbital structure. After the molecules are made, we use analytical technologies such as infrared spectroscopy, nuclear magnetic resonance and mass spectrometry to determine or confirm the structure of the molecules. The various technologies contribute different pieces of information about the molecule, including its molecular weight and the connection point or angle of a certain fragment. (Physics News Update)

By combining the information, we determine the structure after each step as the new molecule is synthesized. Once the assembly process has been set in motion, it proceeds on its own to some desired end [see “Self-Assembling Materials,” by George M. Whitesides; Scientific American, September 1995]. In our research [Reed’s group], we use self-assembly to attach extremely large numbers of molecules to a surface, typically a metal one [see illustration on self-assembly].

When attached, the molecules, which are often elongated in shape, protrude up from the surface, like a vast forest with identical trees spaced out in a perfect array. (Scientific American) Handy though it is, self-assembly alone will not suffice to produce useful molecular-computing systems, at least not initially. For some time, researchers will have to combine self-assembly with fabrication methods, such as photolithography, borrowed from conventional semiconductor manufacturing. In photolithography, light or some other form of electromagnetic radiation is projected through a stencil-like mask to create patterns of metal and semiconductor on the surface of a semiconducting wafer.

In their research they use photolithography to generate layers of metal interconnections and also holes in deposited insulating material. In the holes, they create the electrical contacts and selected spots where molecules are constrained to self-assemble. The final system consists of regions of self-assembled molecules attached by a mazelike network of metal interconnections. The molecular equivalent of a transistor that can both switch and amplify current is yet to be found. But researchers have taken the first steps by constructing switches, such as the twisting switch described earlier.

In fact, Jia Chen, a graduate student in Reed’s Yale group, observed impressive switching characteristics, such as an on/off ratio greater than 1,000, as measured by the current flow in the two different states. For comparison, the device in the solid-state world, called a resonant tunneling diode, has an on/off ratio of around 100. (Yale Bulletin) “Foremost among them is the challenge of making a molecular device that operates analogously to a transistor. A transistor has three terminals, one of which controls the current flow between the other two.

Effective though it was, our twisting switch had only two terminals, with the current flow controlled by an electrical field. In a field-effect transistor, the type in an integrated circuit, the current is also controlled by an electrical field. But the field is set up when a voltage is applied to the third terminal.” (Scientific American) Another problem with molecular switches is thermodynamics. A microprocessor with 10 million transistors and a clock cycle of half a gigahertz gives off 100 watts, which is much hotter than a stovetop.

Finding the minimum amount of heat that a single molecular device emits helps set a limit on the number of devices we could put on a chip or substrate of some kind. Operating at room temperature and at today’s speeds, this fundamental limit for a molecule is about 50 picowatts (50 millionths of a millionth of a watt). That suggests an upper limit to the number of molecular devices we can pack closely together: it is about 100,000 times more than what is possible now with silicon microtransistors on a chip. That may seem like a vast improvement, but it is far below the density that would be possible if we did not have to worry about heat.
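
A back-of-the-envelope check of those figures, using only the numbers quoted in the two paragraphs above (the 100-watt heat budget, the 50-picowatt-per-device limit and the 10-million-transistor chip):

    # Rough check of the heat-limited device count described above.
    chip_power_watts = 100.0           # heat budget of the 10-million-transistor example chip
    device_power_watts = 50e-12        # ~50 picowatts per molecular device at room temperature
    silicon_transistors = 10_000_000   # transistors on the example silicon chip

    molecular_devices = chip_power_watts / device_power_watts
    print(f"molecular devices within the same heat budget: {molecular_devices:.0e}")
    print(f"ratio to the silicon chip: {molecular_devices / silicon_transistors:.0e}")

The result is on the order of 2 x 10^12 molecular devices, roughly one to two hundred thousand times the silicon count, which is consistent with the “about 100,000 times more” figure above.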

The smaller you get, the more problems you come across. We are already encountering many problems in the fabrication of silicon-based chips. These problems will become worse with each step down in size until the chips are no longer useful (or until they no longer function). The bigger problem is that this is bound to occur before computer science is able to achieve its primary goal of creating a viable working “brain.” This means that the possibility of creating artificial life forms, or “androids,” is slim at this point due to the expected impasse in technology.

Industrial Robots Essay

Two years ago, the Chrysler corporation completely gutted its Windsor, Ontario, car assembly plant and within six weeks had installed an entirely new factory inside the building. It was a marvel of engineering. When it came time to go to work, a whole new work force marched onto the assembly line. There on opening day was a crew of 150 industrial robots. Industrial robots don’t look anything like the androids from sci-fi books and movies. They don’t act like the evil Daleks or a fusspot C-3PO.

If anything, the industrial robots toiling on the Chrysler line resemble elegant swans or baby brontosauruses with their fat, squat bodies, long arched necks and small heads. An industrial robot is essentially a long manipulator arm that holds tools such as welding guns, motorized screwdrivers or grippers for picking up objects. The robots working at Chrysler and in numerous other modern factories are extremely adept at performing highly specialized tasks – one robot may spray paint car parts while another does spot welds and another pours radioactive chemicals.

Robots are ideal workers: they never get bored and they work around the clock. What’s even more important, they’re flexible. By altering its programming you can instruct a robot to take on different tasks. This is largely what sets robots apart from other machines; try as you might you can’t make your washing machine do the dishes. Although some critics complain that robots are stealing much-needed jobs away from people, so far they’ve been given only the dreariest, dirtiest, most soul-destroying work.

The word robot is Slavic in origin and is related to the words for work and worker. Robots first appeared in a play, Rossum’s Universal Robots, written in 1920 by the Czech playwright Karel Capek. The play tells of an engineer who designs man-like machines that have no human weakness and become immensely popular. However, when the robots are used for war they rebel against their human masters. Though industrial robots do dull, dehumanizing work, they are nevertheless a delight to watch as they crane their long necks, swivel their heads and poke about the area where they work.

They satisfy “that vague longing to see the human body reflected in a machine, to see a living function translated into mechanical parts”, as one writer has said. Just as much fun are the numerous “personal” robots now on the market, the most popular of which is HERO, manufactured by Heathkit. Looking like a plastic step-stool on wheels, HERO can lift objects with its one clawed arm and utter computer-synthesized speech. There’s Hubot, too, which comes with a television screen face, flashing lights and a computer keyboard that pulls out from its stomach.

Hubot moves at a pace of 30 cm per second and can function as a burglar alarm and a wake-up service. Several years ago, the swank department store Neiman-Marcus sold a robot pet named Wires. When you boil all the feathers out of the hype, HERO, Hubot, Wires et al. are really just super toys. You may dream of living like a slothful sultan surrounded by a coterie of metal maids, but any further automation in your home will instead include things like lights that switch on automatically when the natural light dims or carpets with permanent suction systems built into them.

One of the earliest attempts at a robot design was a machine nicknamed Shakey by its inventor because it was so wobbly on its feet. Today, poor Shakey is a rusting pile of metal sitting in the corner of a California laboratory. Robot engineers have since realized that the greater challenge is not in putting together the nuts and bolts, but rather in devising the lists of instructions – the “software” – that tell robots what to do. Software has indeed become increasingly sophisticated year by year.

The Canadian weather service now employs a program called METEO which translates weather reports from English to French. There are computer programs that diagnose medical ailments and locate valuable ore deposits. Still other computer programs play and win at chess, checkers and go. As a result, robots are undoubtedly getting “smarter”. The Diffracto company in Windsor is one of the world’s leading designers and makers of machine vision. A robot outfitted with Diffracto “eyes” can find a part, distinguish it from another part and even examine it for flaws.

Diffracto is now working on a tomato sorter which examines colour, looking for non-red – i.e. unripe – tomatoes as they roll past its TV camera eye. When an unripe tomato is spotted, a computer directs a robot arm to pick out the pale fruit. Another Diffracto system helps the space shuttle’s Canadarm pick up satellites from space. This sensor looks for reflections on a satellite’s gleaming surface and can determine the position and speed of the satellite as it whirls through the sky.

It tells the astronaut when the satellite is in the right position to be snatched up by the space arm. The biggest challenge in robotics today is making software that can help robots find their way around a complex and chaotic world. Seemingly sophisticated tasks such as robots do in the factories can often be relatively easy to program, while the ordinary, everyday things people do – walking, reading a letter, planning a trip to the grocery store – turn out to be incredibly difficult.

The day has still to come when a computer program can do anything more than a highly specialized and very orderly task. The trouble with having a robot in the house for example, is that life there is so unpredictable, as it is everywhere else outside the assembly line. In a house, chairs get moved around, there is invariably some clutter on the floor, kids and pets are always running around. Robots work efficiently on the assembly line where there is no variation, but they are not good at improvisation. Robots are disco, not jazz.

The irony in having a robot housekeeper is that you would have to keep your house perfectly tidy with every item in the same place all the time so that your metal maid could get around. Many of the computer scientists who are attempting to make robots brighter are said to be working in the field of Artificial Intelligence, or AI. These researchers face a huge dilemma because there is no real consensus as to what intelligence is. Many in AI hold the view that the human mind works according to a set of formal rules.

They believe that the mind is a clockwork mechanism and that human judgement is simply calculation. Once these formal rules of thought can be discovered, they will simply be applied to machines. On the other hand, there are those critics of AI who contend that thought is intuition, insight, inspiration. Human consciousness is a stream in which ideas bubble up from the bottom or jump into the air like fish. This debate over intelligence and mind is, of course, one that has gone on for thousands of years. Perhaps the outcome of the “robolution” will be to make us that much wiser.

My Vision of Tomorrow

Tomorrow’s world will be much different and, in many ways, much better. We will have developed much better technology. We will have made huge medical advancements. The general quality of life will be much better, and living will also have become much easier. Still, nothing can ever be perfect, and in a world of the future, we will experience many complex and unavoidable problems such as depletion of resources, overpopulation, and the threat of nuclear and biological warfare.

The solutions to these dilemmas will not be immediately apparent, but we will have to overcome them. The future could hold great opportunities for many people, but we will need to work at it. In the future, technology will have advanced so much and so fast that many new possibilities will arise. Most likely, we will enjoy interplanetary space travel frequently in the future, and we may even develop communities on other planets, such as Mars, or perhaps on the moon. Numerous scientists and writers have already predicted this.

Life will also be made much, much easier in the future for humans by robots, computers, and other automatons. Many simple tasks done today by humans, such as cooking, cleaning, and repairing household items, will be done by these machines much more quickly and efficiently and with less pollution. Almost all of the current manual labor jobs, especially in the United States, will become obsolete and robots will do all of the work for us. The advantages of using robots and computers include no pay, no time off, and no complaints or questions asked.

Also, nearly every job in the future will require extensive knowledge and skills of computers, and anyone without them will be completely lost. At the pace that doctors and medical researchers have been moving, in the next few generations we will have developed treatments and/or cures for all of the diseases that plague the world’s people today. These diseases include AIDS, cancer, the common cold, Alzheimer’s disease, and even the most exotic and deadly diseases like Ebola.

But, the practice of medicine will not be eliminated because these diseases will be continually mutating. Things that doctors cannot even comprehend today will become clear to us in the not so distant future. Everyone will also be living longer due to the knowledge of more remedies and of enhanced wellness. People will also be much different in the future. They will become more separate from each other (linked only by computer and telephone). They will become even more materialistic and our society will move closer and closer to complete capitalism.

Rules and laws will also be much stricter, and the kind of crime that is commonly seen today will become rare in the future. The days to come will not be without problems and stress, though. To overcome problems like waste disposal, depleted natural resources, world nuclear and biological warfare, and global warming will be no easy task. Everyone around the world will have to join together and help each other to solve problems that will eventually affect all of us.

One of the biggest problems that we will have to deal with is the deterioration of the average family and its values. If the human race cannot get out of the hole that it has dug, everyone in it will be doomed to extinction. So, to sum it up, the future can and most likely will be great, but to achieve this greatness, humans will have to make some personal sacrifices and they will have to face many hardships. For now, we can look forward to the world of tomorrow; but, when it really comes, life as everyone knows it will have drastically changed.

Nanotechnology: Immortality Or Total Annihilation

Technology has evolved from ideas once seen as unbelievable into common everyday instruments. Computers that used to occupy an entire room are now the size of notebooks. The human race has always pushed for technological advances working at the most efficient level, perhaps, the molecular level. The developments and progress in artificial intelligence and molecular technology have spawned a new form of technology: nanotechnology. Nanotechnology could give the human race eternal life, or it could cause total annihilation.

The idea of nanotech was conceived by a man named K. Eric Drexler (Stix 94), who defines it as “Technology based on the manipulation of individual atoms and molecules to build structures to complex atomic specifications (Drexler, “Engines” 288).” The technology Drexler speaks of will be undoubtedly small; in fact, nanostructures will measure only about 100 nanometers, a nanometer being a billionth of a meter (Stix 94). Being as small as they are, nanostructures require fine particles that can only be seen with the STM, or Scanning Tunneling Microscope (Dowie 4). Moreover, the STM allows scientists not only to see things at the molecular level, but to pick up and move atoms as well (Port 128).

Unfortunately, the one device that is giving nanoscientists something to work with is also one of the many obstacles restricting the development of nanotech. The STM has been regarded as too big to ever produce nanotech structures (Port 128). Other scientists have stated that the manipulation of atoms, which nanotech relies on, ignores atomic reality: atoms simply don’t fit together in the ways nanotech intends to use them (Garfinkel 105). The problems plaguing the progress of nanotech have raised many questions among the scientific community concerning its validity.

The moving of atoms, the gathering of information, and the restrictions of the STM all restrict nanotech progress. And until these questions are answered, nanotech is regarded as silly (Stix 98). But the nanotech optimists are still out there. They contend that the progress made by a team at IBM, which was able to write letters and draw pictures atom by atom, actually marked the birth of nanotech (Darling 49). These same people answer the scientific questions by replying that a breakthrough is not needed; rather, the science already gained must be applied (DuCharme 33).

In fact, Drexler argues that the machines already exist and that the trend is simply toward building better ones (“Unbounding” 24). Drexler continues by stating that the machines he wrote about in “Engines of Creation,” published in 1986, should be developed early in the 21st century (“Unbounding” 116). However, many scientists still argue that because nanotech has produced absolutely nothing physical, it should be regarded as science fiction (Garfinkel 111). Secondly, nano-doubters rely on scientific fact to condemn nanotech.

For example, it is argued that we are very far away from ever seeing nanotech due to the fact that when atoms get warm they have a tendency to bounce around. As a result, the bouncing atoms collide with other materials and mess up the entire structure (Davidson A1). Taken together with the movement of electron charges, many regard nanotech as impossible (Garfinkel 106). But this is not the entirety of the obstacles confining nanotech development. One major setback is the fact that nanostructures are too small to reflect light in a visible way, making them practically invisible (Garfinkel 104).

Nevertheless, nanotech engineers remain hopeful and argue that “With adequate funding, researchers will soon be able to custom build simple molecules that can store and process information and manipulate or fabricate other molecules, including more of themselves. This may occur before the turn of the century.” (Roland 30) There are other developments, too, that are pushing nanotech in the right direction, for as Lipkin pointed out, recent developments have led to the possibility of computers thinking in 3-D (5), which is a big step towards the processing of information that nanotech requires.

Although there are still unanswered questions from some of the scientific community, researchers believe that they are moving forward and will one day be able to produce nanomachines. One such machine is regarded as a replicator. A replicator, as its name implies, will replicate, much like the way in which genes are able to replicate themselves (Drexler, “Engines” 23). It is also believed that once a replicator has made a copy of itself, it will also be able to arrange atoms to build entirely new materials and structures (Dowie 5).

Another perceived nanomachine is the assembler. The assembler is a small machine that will take in raw materials, follow a set of specific instructions, re-arrange the atoms, and produce an altogether new product (Darling 53). Hence, one could make diamonds simply by giving some assemblers a lump of coal. Drexler states that the assemblers will be the most beneficial nanites, for they will build structures atom by atom (“Engines” 12). Along with the assembler comes its opposite, the disassembler. The disassembler is very similar to the assembler, except it works backwards.

It is believed that these nanites will allow scientists to analyze materials by breaking them down, atom by atom (Drexler, “Engines” 19). As a result of the enhanced production effects of assemblers, Drexler believes that they will be able to shrink computers and improve their operation, giving us nanocomputers. These machines will be able to do all the things that current computers can do, but at a much more efficient level. Once these nanomachines are complete they will be able to grasp molecules, bond them together, and eventually result in a larger, new structure (Drexler, “Engines” 13).

Through this and similar processes the possibilities of nanotech are endless. It is believed that nanites could build robots, shrunken versions of mills, rocket ships, microscopic submarines that patrol the bloodstream, and more of themselves (Stix 94). Hence, there is no limit to what nanotech can do; it could arrange circuits and build super-computers, or give eternal life (Stix 97). Overall, Drexler contends: “Advances in the technologies of medicine, space, computation, and production – and warfare – all depend on our ability to arrange atoms.

With assemblers, we will be able to remake our world, or destroy it” (“Engines” 14). In a more specific spectrum are the impacts nanotechnology could have on the area of production. Nanotechnology could greatly increase our means of production. Nanites have the ability to convert bulks of raw materials into manufactured goods by arranging atoms (DuCharme 58). As a result of this increased efficiency, DuCharme believes that this will become the norm in producing goods, and that this whole field will be handled at the molecular level (34).

Thus, nanotech could eliminate the need for production conditions that are harmful or difficult to maintain (Roland 31). Moreover, the impact that nanotech will have on production could lead to a never before seen abundance of goods. Costs and labor will all be significantly cheaper. Everyone would be able to use nanotech as a tool for increased efficiency in the area of production (DuCharme 60). The overall effects of nanotech on producing materials were best summed up by Dowie, “This new revolution won’t require crushing, boiling, melting, etc. Goods would now be built from the atom up by nanomachines” (4).

Nanotech will also be able to benefit us in other ways. One great advantage of nanotech will be the improvements it will lend in the area of medicine. With the production of microscopic submarines, this branch of nanotech could be the most appealing. These nanites would be able to patrol the bloodstream, sensing friendly chemicals and converting bad ones into harmless waste (Darling 7). But nanites will be able to do more than this; this brand of nanites could also repair damaged DNA and hunt cancer (Port 128). Thus, nanites would be able to cure many illnesses and repair DNA.

Moreover, nanites could remove the need to keep animals for human use; they could simply produce the food inside your body (Darling 59). As a result of nanites floating through your body and attacking harmful substances such as cholesterol, people could live indefinitely – perhaps a millennium (Davidson A1). This idea opens up another door in the field of nanotech research, dealing with the potential for immortality. But aside from providing eternal life through fixing DNA and curing illnesses, nanotech could be used with cryogenics in providing never-ending life.

The current problem with cryogenics is that after a person is frozen, the cells in their body expand and burst. Nanotech could solve this problem, for nanites could find and repair the broken cells (DuCharme 152). Nanites, however, wouldn’t even require the entire frozen body; they could simply replicate the DNA in a frozen head and then produce a whole new person (DuCharme 155). However, this poses a potential problem, that being overpopulation and the environment. DuCharme contends that this should not be a concern, for a high standard of living will keep the population from growing (61).

However, if the population were to increase, nanotech will have produced the energy to allow us to live in currently uninhabitable areas of the earth (DuCharme 63). Nanites will allow people to live not only on earth, but on the sea, under the sea, underground, and in space due to increased flight capabilities (DuCharme 64). Hence, the human race will have near infinite space for living. Also, nanites could reduce the toxins emitted by cars by producing cheap electric cars, and disassemblers could be used to clean up waste dumps (DuCharme 68).

The benefits of nanotech are countless; it could be used to do anything from spying to mowing the lawn (Davidson A1). However, with the good comes the bad. Nanotech could also bring some distinct disadvantages. One scenario which illustrates the danger of nanotech is referred to as the gray goo problem. Gray goo refers to billions of nanites banding together and eating everything they come into contact with (Davidson A1). However, Davidson only touches the tip of the iceberg when it comes to the deadliness of gray goo.

Roland better illustrates this hazard’s threat: “Nanotechnology could spawn a new form of life that would overwhelm all other life on earth, replacing it with a swarm of nanomachines. This is sometimes called the ‘gray goo’ scenario. It could take the form of a new disease organism, which might wipe out whole species, including Homo Sapiens” (32). Simply put, the nanites would replicate too quickly and destroy everything, including the human race (Stix 95). Moreover, the rapid replication rate that nanotech is capable of could allow it to out-produce real organisms and turn the biosphere to dust (Drexler, “Engines” 172).

However, death is only one of the dangers of gray goo. If controlled by the wrong people, nanites could be used to alter or destroy those persons’ enemies (Roland 32). But gray goo is only one of the many potential harms of nanotech. If so desired, nanotech could be used as a deadly weapon. Although microscopic robots don’t sound like a very effective weapon, Drexler states that they are more potent than nuclear weapons, and much easier to obtain (“Engines” 174). But aside from being used as a weapon, nanites would be able to produce weapons at a quick and inexpensive rate.

In fact, with the ability to separate isotopes and atoms, one would be able to extract fissionable Uranium 235 or Plutonium 239. With these elements, a person has the key ingredients for a nuclear bomb (Roland 34). As a result of the lethality of nano-weapons, the first to develop nanotech could use it to destroy his rivals. New methods for domination will exist that are greater than nukes and more dangerous (Roland 33). This, along with simple errors such as receiving the wrong instructions, points toward nanotech doing more harm than good (Darling 56).

Moreover, the threats from nanotech could be a potential cause of extinction (Drexler, “Engines” 174). Drexler continues by saying that unless precautions are taken, nanotech could lead to complete annihilation (“Engines” 23). However, if nanotech does not lead to extinction, it could be used to increase the power of states and individuals. Bacon believes that only the most elite individuals will receive benefits from nanotech. Beyond that, it is perceived that advanced technology extends the possibilities of torture used by a state (Drexler, “Engines” 176).

However, states will become more powerful in other ways. With the increased means of production, nanotech could remove the need for many if not all people (Drexler, “Engines” 176). This opens new doors for totalitarian states. They would no longer require keeping anyone alive; individuals would not be enslaved, rather they would be killed (Drexler, “Engines” 176). It is perceived that all the benefits would remove all interdependence, and destroy the quality of life itself (Roland 34). In the end, nanotech could give a lifestyle never before imagined. On the other hand, it could destroy entire species.

The effects and potentials of nanotech are best summed up by its inventor, Drexler: “Nanotechnology and artificial intelligence could bring the ultimate tools of destruction, but they are not inherently destructive. With care, we can use them to build the ultimate tools of peace” (“Engines” 190). The question of how beneficial nanotech will prove to be can only be answered by time. Time will tell whether developments and progress in artificial intelligence and molecular technology will eventually produce true nanotechnology. And, if produced, whether this branch of science will give us immortality or total annihilation.

Comparison of 3 Stocks

All my stock market choices are technology based. nVIDIA is a producer of video card hardware, AMD a provider of PC processors, and Electronic Arts a videogame publisher. nVIDIA is an example of a decreasing-cost industry. While a relatively new entrant to the video card industry, nVIDIA was showing potential from the start. Major competition to nVIDIA’s foothold in the industry included 3dfx’s Voodoo technology and ATI’s Rage.

Although 3dfx’s foothold seemed unmovable, the next wave of technology to arrive brought about their eventual downfall in the market. 3dfx’s lack of support for their next-generation video cards (the Voodoo4 and Voodoo5) resulted in their being bought out by the nVIDIA Corporation. While nVIDIA released patches that more than doubled the performance of the GeForce2’s technology, 3dfx’s patches for the Voodoo4 and Voodoo5 were riddled with flaws, resulting in performance issues for all of their customers. After the buyout, nVIDIA was free to utilize the Voodoo technology and excel in the market.

Now ready to explode even bigger than before is the arrival of the GeForce3; boasting results over tenfold those of previous video cards, the GeForce3 will have unparalleled performance in the market. This is observed in the slow increase in the percent gain, which should rise dramatically with the release of the new board. AMD, Advanced Micro Devices Inc., was a company entering a seemingly unbreakable market. Processor technology, with its high initial costs, categorizes it as a decreasing-cost industry. AMD, now the most popular provider of processor technology, came in against the multi-billion dollar Intel corporation.

Intel’s Pentium processor held a firm foothold in the market; however, AMD’s cheaper K6 series (although a less powerful processor) provided an economical alternative to Intel’s more powerful Pentium II processor. With sales being lost to the more economical K6 series, Intel released the Celeron processor, which was widely regarded as a poor alternative to AMD’s K6. However, customer familiarity with the Intel brand name allowed Intel to recoup some of its losses in the field, but with AMD’s following rising, the K7 (Athlon) processor took a firm hold in the field against the Pentium III.

Furthermore, the Athlon Thunderbird (the successor to the K7) has now taken majority control of the market by outperforming the Pentium IV in most performance tests. By observing the percent gain indicated in the chart, a steady increase can be seen as AMD’s new Athlon Firebird board strikes the market. The brief drop can be attributed to the initially high cost of the board, where individuals will hold off on purchasing the board in hope of a lower price or a sale.

Furthermore, if someone were not to watch carefully, an individual might not realize the new technology is available, because AMD, unlike Intel, does not utilize many advertising techniques. EA, Electronic Arts, a long-time developer in the videogame industry, has only recently exploded to new levels in the field. Videogames provide a strong example of a decreasing-cost industry model. By purchasing small companies and partnering with other major companies (such as Squaresoft), EA has managed to reach all classes of the videogame industry, where before it had only a minor field of influence.

Furthermore, EA has achieved a startlingly high level of quality in all their products, gaining an unsurpassed level of credibility in multiple domains. Gaining such a foothold in the videogame industry guarantees profits for a company’s future. A strong footing in the gaming market gains initial interest from the consumer clientele in any future products, resulting in positive press and a general wave of interest in the company’s activities. An initial negativity in the percent gain is noted because EA decided on very close release dates for all their products.

Consequently, they gained strong recognition in the gaming community for releasing such a diverse range of entertainment titles, all of which held high sales numbers in their respective categories of action, sports, adventure, simulation and role-playing. At the root of success lies a great product. Hand in hand with a great product comes a clientele loyal to that product. It can be seen in everything, from music to sports. Having individuals willing to pledge their name to a product spreads popularity faster than any commercial ever could; not to mention, it is free.

Network In My House

For my independent study, I have created a network in my house. A network, by definition, is two or more computers linked together electronically via a protocol (a common language) so that the computers can communicate and share resources. This network improves day-to-day life by adding value and usefulness to the computers. The processes and ideas that I have learned through this experience can be applied directly to today’s rich electronic business environment. Identifying the needs of the user is the first step in building a well-designed network. A professional installation was needed to maintain the aesthetics of the rental house.

Most of the wires are run in the attic and then down plastic conduit attached to the wall. The conduit is run all the way to the wall boxes where the Ethernet ports are located. Every wire is clearly labeled and included in an easy-to-read schematic of the house. This way future tenants will have the ability to utilize the network. Next, every room needed to have access to the network. In order to minimize the overall use of wires, hubs were placed in strategic locations. An 8-port 10/100-megabit auto-sensing hub is located in the computer room and a 5-port 10-megabit hub in the sound room.

There needed to be docking stations so that laptop users or visiting computers could easily plug into the network and utilize the pre-existing monitor, keyboard, and mouse. These are the basic needs that have been put into the design of the network. Each computer setup is unique, with certain strengths and weaknesses. The network takes advantage of the strengths of each individual computer and makes them available to all users. A network essentially expands the capabilities of each computer by increasing functionality through resource sharing. In the house, there are a total of four computers and two laptops.

Processing speed and an abundance of RAM are not essential for a server with such low traffic, so the most antiquated computer was elected for this function. Between all the computers, we have several extra pieces of hardware such as a zip drive, CDRW, DVD ROM, scanner, and multiple printers. Each piece of hardware is dispersed among the computers. There were several immediate efficiencies that occurred when the network went operational. The zip drive is located on the server while the CDRW is located on one of the individual workstations.

Previously, if the need arose to burn some information stored on the zip disk to a CD, the individual computers were practically worthless for this task. However, with the network, one can map a network drive on the computer with the CDRW to the zip drive on the server. This allows information to be efficiently transferred from the zip drive to a CD. In addition, the server also has a scanner attached to it. The problem is that the server is too slow to handle sophisticated photo editing software. Now an image can be scanned on to the server and then a faster computer can be used to edit it.
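
In practice this is done by mapping a network drive in Windows Explorer; as a rough illustration, the same transfer can be scripted over UNC paths. The server name, share name, and file name below are made-up examples, not the actual machines on this network.

    # Copy a file from the zip drive shared on the server to a staging folder on the
    # workstation that has the CDRW, ready to be burnt to CD. Paths are hypothetical.
    import shutil

    source = r"\\SERVER\zip\term_paper.doc"   # file on the zip disk shared by the server
    staging = r"C:\burn\term_paper.doc"       # local folder the CD-burning software reads from

    shutil.copyfile(source, staging)
    print("Copied", source, "to", staging)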

There are 3 different printers, each varying in quality, speed, and maintenance costs. The most expensive one is reserved for making color photos only, and the other two are used for everyday printing, one of which is much faster and has more reliable paper feeding. A user can easily choose a printer depending on their needs. This network takes full advantage of each computer through resource sharing, which adds tremendous value for its users. In business, it is important for any network to be able to restrict access to individuals’ private files or directories.

Security would demand that not all users be allowed access to highly confidential information. There is other information that would be made available to other users on a read-only basis. The same is true of the users in my network. Microsoft developed NT to be very secure. Most of this security is devoted to protecting network resources and the file system (NTFS). The administrator decides who gets access to which resources by setting up users and user groups. Each person is asked to choose a user name and password. Then the administrator identifies the needs and privileges of each individual user.

Next the administrator grants users either full access, modify, change, read only, or no access at all to directories and resources on the network. In the house each roommate, trusted friend, and guest is given a user name and rights to the resources he/she needs. Roommates, as a profile group, have access to the Server’s C drive, which contains the core o/s. They are also given access to all directories on the D or storage drive except for the individual User and private directories. The User’s Folder has a directory for each user to store personal files on the Server.
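
A minimal sketch of the kind of access decision described above, written as a toy model rather than the way NT actually stores its ACLs; the group names, paths, and access levels are hypothetical examples.

    # Toy model of group-based access rights; not NT's real ACL mechanism.
    ACCESS = {
        ("Roommates", r"C:"): "read",            # core OS drive: read only (assumed level)
        ("Roommates", r"D:\Storage"): "change",  # shared storage: read and write
        ("Guests",    r"D:\Storage"): "read",    # guests may look but not modify
    }

    def can_write(group, path):
        """True when the group holds full/modify/change rights on the path."""
        return ACCESS.get((group, path)) in ("full", "modify", "change")

    print(can_write("Roommates", r"D:\Storage"))  # True
    print(can_write("Guests", r"D:\Storage"))     # False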

The read and write rights are given only to that user, so the data in that directory is secure. A Guest account is set up for anybody to use. This account is given minimal access to resources, with no ability to adjust system settings or cause adverse effects. There were four operating systems to deal with on this project. Two laptops and one of the PCs use Windows 98, another PC runs Windows 2000 Advanced Server, and the server uses NT 4.0 SP6 with a dual boot of Red Hat Linux 7.0. Microsoft developed Windows 98 for the home user and did not include adequate security with the FAT32 file system.

When a user logs onto a machine running the Windows 98 OS, they have access to all the information on that computer and have the ability to delete, change, or modify directories. In any event, the server still secures the rest of the network and only grants access to the pre-determined resources. The NT and 2000 machines can be set up to allow users different levels of access inside the machine, and also to restrict rights to others on the network. On these operating systems, a guest account would be denied most write privileges so they couldn’t accidentally delete important files.

It is a security flaw in the network that cannot be fixed without upgrading the operating systems on the machines that run Windows 98. Most businesses store their vital records in the form of digital data, and keeping this data secure is a key issue. Many problems may arise that can cause the loss or corruption of data. A virus attack, system crash, hardware failure, or a natural disaster are just a few potential problems that could cause loss of information and in turn devastate a company. It is imperative for a business to consider these possibilities and make sure they back up their data.

As college students living in the technology age, we too have lots of important data stored on our computers. This information ranges from term papers to financial records that would be devastating to lose. This is an even bigger worry with laptops, as they go through the daily rigors and abuse of being transported and connected to many different networks. It only takes one bad bump (over 14 to 17 G’s of force) to break a hard drive arm and/or the read heads and render the drive useless. A virus, accidentally transmitted through email, could corrupt the hard drive and render the OS useless.

For all the above reasons, we needed to put a system in place to back up all of our information. One of the benefits of the network is that backing up data is both fast and convenient. For example, the users in my network back up their data onto their user directory located on the server. Once a month, the user directory is burnt onto a CD. This backup is then stored in a fireproof lockbox, where it is guaranteed to be safe. Getting into the habit of such practices is imperative for today’s I.T. professional.
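
A minimal sketch of that monthly step: archive the user directory into a dated zip file that can then be burnt to CD. The directory path and archive name are hypothetical examples, not the actual layout on the server.

    # Bundle a user's directory into a dated archive ready to be burnt to CD.
    import datetime
    import shutil

    user_dir = r"D:\Users\mike"                      # hypothetical user directory on the server
    stamp = datetime.date.today().strftime("%Y-%m")  # e.g. "2001-05"
    archive = shutil.make_archive(f"backup-{stamp}", "zip", user_dir)
    print("Wrote", archive)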

Connecting a network to the Internet can bring tremendous improvements in productivity, but not without posing major security issues. A network always has to be on the defense, making sure the information and systems that lie within are protected. Anybody can hack right into an unprotected network with just a little bit of knowledge. Once inside, a hacker can access confidential information, read, write, install a virus, or delete whatever he/she sees fit. In order to prevent such attacks over the Internet, a firewall needs to be installed. A firewall is powerful defensive software that blocks unauthorized intruders from entering a network. There are many ways to configure a firewall.

Generally, a firewall locks down all ports except for ports being managed by secure communication programs, such as email. It also does not allow incoming requests from the Internet for network resources. Sygate Personal Firewall has been installed on the network to protect it from outside attacks. Once configured, I tested it at http://scan.sygatetech.com, which scans all the ports and tries some Trojans to ensure that the network is protected.
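
A minimal sketch of that default-deny policy, written as a toy rule check rather than the actual configuration of Sygate Personal Firewall; the allowed port numbers are hypothetical examples for outbound email.

    # Toy default-deny port filter: block unsolicited inbound traffic,
    # allow outbound traffic only on an explicit list of ports.
    ALLOWED_OUTBOUND_PORTS = {25, 110}   # e.g. SMTP and POP3 for email

    def allow(direction, port):
        """Return True if the connection should be permitted under this simple policy."""
        if direction == "inbound":
            return False                 # no incoming requests from the Internet
        return port in ALLOWED_OUTBOUND_PORTS

    print(allow("inbound", 80))      # False - unsolicited web request from outside
    print(allow("outbound", 25))     # True  - sending email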

Overall, I view the project as a success: the network is up and running, and all the users are able to be more productive. The ability to access all the different peripherals is a real money saver for the budget-conscious college student. Personally, I found setting up the security features and installing the software to be the most rewarding part of the experience. My next step in the ongoing process of improving the network is to install and configure Apache, which gives a unique opportunity to see firsthand how Unix manages a network as compared to Windows. I have learned marketable job skills that I intend to apply in the interview process. I am even now considering becoming a network specialist as a career.

The Role of Technology in Management Leadership

Over the last sixty years of business activity, there have been new ways and means of conducting business through something we call technology. Technology is the advancement and use of electronic devices and other high-tech equipment to produce and advance knowledge into the future. Advancements in technology have affected management leadership in many ways over the last sixty years. New technology has altered leaders’ consciousness, language, and the way they view their organization.

Technological advancements have made things easier for those in management leadership roles. But as with anything, technology has both positive and negative effects on leadership. Some of the positive aspects of technology are the availability and use of wireless networking, collaboration tools, digital video, handheld devices, and videoconferencing. On the other hand, the negative aspects of technology are that it offers less privacy, it allows for less interaction with others, and it runs a high risk of contact with viruses.

On the more positive side, wireless networking allows leaders to share resources with their team operating by means of wireless media, such as microwaves, cellular technology, and radio frequencies. Wireless networking is paving the way for technology integration around the world. The use of collaboration tools allows ongoing conversations among leaders, their subordinates, board members, and community members. Professional development is one area where collaboration can have an enormous impact on management leadership.

When leaders can casually share new approaches and practices with each other through a technology connection to their workplace, both leaders and their team will benefit. One way of doing this is to create an Internet mailing list where they can share questions, problems, solutions, successful techniques, and less successful techniques. The Internet is enabling digital video to achieve professional-quality and two-way interaction. This will be one of the rare cases where management leadership will be leading a technological shift in society as a whole.

Hand-held devices are high-tech gadgets, now more powerful than early Windows or Macintosh machines. Handheld devices offer more versatility than full-size computers and are much more portable than the alternative personal device, the laptop. For leaders especially, this take-along advantage lets them develop a feeling of ownership, as the device is ever-present and ready to take on the current task. In addition, many of the newest handheld models can be wirelessly networked, which means leaders can send and receive e-mail and surf the Web without having to “synch up” to a computer.

Video-conferencing is a three-dimensional, top-quality audio and video virtual reality telecommunication that will allow leaders to examine minute objects through remotely controlled microscopes. Videoconferencing technologies use a compressed video system to transmit information from one location to another either via the Internet or a telephone line. On a more negative note, when leaders are using some of the technological advantages as mentioned above, they run the risk of reducing the privacy of their organization.

Privacy is a privilege that we take for granted in this country, yet it is strongly threatened by advances in technology. The ability of political and economic institutions to discover private information about individuals and organizations is overwhelming. Some of the various ways that information about an organization’s activities can be collected without their knowledge or consent are: through cookies, browsers, search engines, electronic commerce, E-mail, and spam. The threat of spyware and other security threats are unlikely to be eradicated.

Hackers, criminals, and others with ill intent will always attempt to avoid the intentions and protections of users in an effort to exploit PCs and networks for vandalism or profit. Viruses are called viruses because they share some of the traits of biological viruses. A virus passes from computer to computer like a biological virus passes from person to person. Viruses show us how vulnerable we are. A properly engineered virus can have an amazing effect on the worldwide Internet. On the other hand, they show how sophisticated and interconnected human beings have become.

In top performing organizations, each area is strong and constantly improving. For example, in our technological age, leaders need to ensure that they’re constantly upgrading their technical expertise and technological tools. They can’t afford to fall behind. In many cases, the laptop computer can be a huge help with email, time management, storing and easily retrieving information, keeping contact and project records, maintaining databases, developing slides for presentations and workshops, and accessing a multitude of information and research through the Internet.

Without it, most leaders would be thirty to forty percent less productive and would need much more administrative help. If leaders’ understanding of their organization’s expectations is only partially accurate, expensive technology and reengineered processes will only deliver partial results. If leaders in our organizations cannot communicate face-to-face, electronic communications won’t improve communications very much. If management leadership has not established the discipline of setting priorities for their time or organizing themselves, a laptop computer or other wireless mobile device will not do it for them.

Systems and processes are also an extremely important area. An organization can be using the latest technologies and be highly people-focused, but if the methods and approaches used to structure and organize work are weak, performance will suffer badly. Leaders in organizations can be empowered, energized, and enlightened, but if systems, processes, and technologies don’t enable them to perform well, they won’t. Developing the discipline and using the most effective tools and techniques of management leadership, organization systems, and processes is a critical element of high performance.

The Performance Balance triangle has management leadership at its base, which is very deliberate. In well-balanced, high performing teams or organizations, technology, systems, and processes serve people. We have the opportunity to use technology for positive changes; however, ignoring the possible negative effects is not only naive but potentially dangerous. Moreover, leaders have to take actions to overcome the negative aspects of technology.

In order to tackle the risk of computer viruses, leaders should enlist the help of an information technology specialist, who can reassure users, explain the no-fault assumption, and assure them that any information they give will be treated confidentially. It is the duty of the IT specialist to confirm the presence of a virus, contain the initial infection, notify everyone within the department, and clean up the known infections. This includes installing the current anti-virus programs and the latest virus-signature file on the infected systems.

The IT specialist should find all infected diskettes and systems: check for possible server infections, check diskettes and PCs for viruses, and interview users to determine the source of the virus and where it may have spread. Preventive action must be taken by management leadership to keep the effects of viruses from destroying the infrastructure of an organization. These preventive actions should include knowledge, policy, awareness, and anti-virus software. Leaders must educate users about viruses and how a computer can become infected.

Train them in the whole process of virus infection, from initial infection to final eradication. Leaders should prepare a company virus-prevention policy, which should include procedures for dealing with an infected environment, for handling Internet e-mail that originates from outside the company, for using computer storage, and for sharing computer files and documents. Finally, the leaders should create a virus awareness program: if you are aware of a potential problem, you are more likely to prevent it from happening.

Create a computer bulletin board or billboard where users can exchange ideas and post known viruses to be on the lookout for. Finally, the leaders should have the IT specialist choose anti-virus software and install it on every computer system in the company. The virus software should be updated periodically to prevent infection from new virus strains. Along with the responsibility to protect against and prevent viruses, leaders have the role of privacy protection as well. Preventing attacks on communication in transit is less a legal problem than a technological one.

There is software that can provide privacy protection for the individual Internet user, and hardware exists that can prevent very sophisticated industrial spying. Protection of e-mail and of the organization's most trusted and valuable information is of greatest concern to management leadership. The rising use of technology can also challenge leaders in accomplishing their goals. Leaders provide order in chaos, but technology is always changing. A good leader is knowledgeable about the positive and negative aspects of technology and tolerates the uncertainty inherent in it.

Leaders set effective goals, but technology's future is unclear. A good strategy is to determine goals first and use technology second. Leaders utilize others' competencies, but technology has challenged us to develop new competencies and reevaluate old ones. An organization usually has many leadership positions; in light of emerging technology, it may be time to create some new ones, such as information technology specialist and group web-manager. Not only will such a strategy improve the functioning of the group or organization, it can also pull in other types of members who have not traditionally been seen as leaders.

Leaders are experts in communication, but technology both limits and enhances communication. Leaders should take advantage of technology that enhances communication. Leaders get and give effective feedback, but technology allows leaders to hide behind a screen. You can’t be a leader if you are in front of your computer screen instead of your members. Besides leading by example, you may purposefully want to find ways to get between your members and their computer screens. Leaders motivate others to get involved, but technology competes for attention.

Leaders are good time managers, but using technology is a new learning task that dominates time. Leaders should allow more time for technological solutions to common problems. Conversely, the Internet's ability to link your group with similar ones is a wonderful thing, especially if your group is a rather small student group with extremely focused goals. It is important to remember that the Internet is a public place, and you should never assume that something is secure.

Leaders should be the moral compass for groups, but technology has blurred some distinctions between what's right and wrong. Many institutions have been caught up in the discourse about websites such as Napster. Some people seem to want two sets of rules, one for the cold reality of the analog world and another for the magical digital world. It may take many years to establish equilibrium. In the meantime, be clear and consistent about expectations, after you've given full consideration to the implications. Choose your battles, but be prepared to say "let's wait and see".

Leaders appreciate differences, but technology threatens to marginalize others. As with any program goal, a leader should always ask "who gets excluded by this approach?" There is a misperception among many leaders that technology is naturally bias-free: the research suggests otherwise. Furthermore, individuals from some backgrounds do not "buy into" or choose to participate in the emerging technology culture, as should be their choice. Be careful when a technological solution becomes the only solution.

Technology Development In The Last Two Decades

Technology development in the last two decades has helped us come this far, and may take us even further, as human beings have the passion and a growing need for something more advanced than what we have already achieved. Developing technology is now not only a scientist's job; it has also become a major business in the world economy. One of the major technology businesses is computer technology, usually known as IT, or Information Technology.

The growth of computer technology, including the Internet, in the last decade has had a huge effect on most of the people on the face of the earth, from the United States to Africa. Asia, as one part of the world, has also been affected by this growth; the industry in Asian countries developed first on the production side, because of its cheap human resources. But the myth that Asian countries can only do what they are told, without any idea of what they are doing, has been proved wrong by the rapid development of computer and Information Technology businesses in Asia.

Acer Reputation Acer, which means "spirited" or "energetic" in Latin, is one of the most widely known companies based in Asia. Acer, a computer company based in Taiwan, has become one of the top three computer companies in the world, with sales of $5.9 billion. It has been the best-known brand in Asia for three years in a row, been included among the world's 100 best-managed companies, ranked top in overall leadership and second among the 50 most competitive companies in Asia, and been named one of Asia's most admired companies.

With all those achievements, Acer has become one of the top companies in Asia and one of the leading companies in Taiwan. This is a big achievement for a young Asian company, compared with other big Asian companies, most of which are Japanese companies based in Japan. Acer History Acer reached its success only recently, after 26 years of hard work by the founder and CEO of the company, Stan Shih. This Taiwan-based company was founded in 1976 under the name of Multitech, as an Original Equipment Manufacturer (OEM) for other computer makers such as the U.S.-based Unis Instrument.

Multitech began designing Taiwan's first mass-produced computer product for export in 1979. The name Multitech was used from the company's founding until 1987, when it was changed to Acer by Stan Shih, its founder and CEO, because it is short, easy to pronounce and, best of all, would appear near the top when companies are listed in a brochure. A trivial reason for choosing a company name, but, as mentioned before, the meaning of Acer suits the company well.

Acer grew very fast after it started cloning Western computer technology. As one of the first Asian computer makers, Acer had the courage to compete with the early starters in the business by offering better quality and cheaper products. The global market seemed to like the products offered by Acer, and they took the company to third place among the world's biggest computer makers. Acer Cultures As a company, Acer has its own corporate culture for its employees to apply.

This corporate culture is divided into four parts. 1. Human Nature Is Basically Good. Acer aims to create an efficient work environment based on mutual understanding. Acer's employees are encouraged to freely express their opinions, take risks and, most importantly, learn from their mistakes. In the early days, Acer was one of the companies in the world offering a stock option program, which encouraged entrepreneurship as the company grew, driving employees' ambition up the corporate ladder and giving more incentive for a job well done.

Acer employees are still proud to continue the tradition of "integrity, open-mindedness, and corporate ownership." Acer has created and delivered many human-centric benefits to its staff; in return, employees work with greater empowerment and efficiency. 2. Customer Is Number One. All Acer operations follow the successful "Acer 1-2-3" business philosophy, which places customers first, employees second and shareholders third. With this philosophy, the interests of all the human aspects of the company are brought together.

Consequently, to reach customer satisfaction, Acer has to keep maintaining its competitiveness in all aspects: speed, cost of production and delivery, and also performance and product quality. 3. Put the Knowledge to Work for the Company. Through proper empowerment in the workplace, employees are highly encouraged and rewarded to develop skills and "know-how" that help sustain Acer's long-term business growth. As a company motto goes, Acer employees will always "tackle difficulties, break through bottlenecks and create new opportunities that bring real value."

Acer's intangible assets, such as intellectual property, can be turned into tangible rewards and managed to foster corporate strength. Acer employees have always been encouraged and rewarded for entrepreneurial skills, which means they can be trusted to make sensible business decisions with minimal supervision. This culture enables new employees to quickly learn to think independently and act in the best interest of the company. 4. Be Pragmatic and Accountable. Everyone is encouraged to calculate risks and avoid taking risks that they cannot afford.

Flexibility is extremely important when running a business, and Acer believes it is more important to keep the business alive than to save face. Stan Shih, chairman and CEO of The Acer Group, states, "In a perpetually changing IT industry, what principles can Acer count on? First, we take care of all stakeholders; second, we continuously improve and create value. We believe these will be achieved through our long-term pursuit and effort." With those four corporate cultures, the working environment built at Acer is one of trust between the corporation and its employees.

By implementing these cultures, Acer has always been able to make improvements and maintain its performance. Acer Strategy From the cultures mentioned above, we might already draw conclusions about the reasons for Acer's success. But in fact, a good company culture alone rarely takes a company to the kind of success Acer has achieved. Acer also has its own corporate strategy for doing business: decentralized management as well as a decentralized structure designed by its CEO, Stan Shih, who intended the company to be a "CEO factory".

The corporate strategy is a three-in-one system: "the fast-food business model", "the client-server structure", and "global brand, local touch". The Fast-Food Business Model Acer implemented a fast-food business model like McDonald's: systematic operations and a unified brand name. Adopting McDonald's operating model, Acer built on its strength in motherboard manufacturing while preventing shortcomings. With the fast-food business model, Acer improved its inventory turnover rate by 100%, showing that efficient and systematic operation can improve a company's condition.

With branches all over the world, Acer has worked through the barriers it used to face simply by implementing a new company strategy. A Client-Server Structure The structure enables the company to be a worldwide business that takes a local-partnership approach. The center of decision making is the shareholder meeting, which is the only opportunity for headquarters to influence its branches' business decisions. The partnership of a client and a server, as it works in computer technology, is also applied in the company's strategy.

Partnerships between Acer and other computer-related companies have been established since the day Acer started cloning computer technology. One of the major partnerships was made in 1996, when Acer signed a reciprocal patent licensing agreement with IBM, Intel, and Texas Instruments for using each other's patented technology. The agreement allows all of the companies involved to freely improve on each other's technology and include it in their products without any patent barrier.

Global Brand, Local Touch Global brand, local touch means that Acer, as a global brand, gives local investors in its branch locations the opportunity to enter into partnership with the corporation. This strategy reduces the corporation's risk overseas. Not only that, it gives the corporation more opportunity to adapt to and understand the needs of the local customers for its products. Conclusion Acer, as one of the major players in the computer and IT business, has a great corporate strategy. The founder, Stan Shih, is the genius behind the success of Acer.

The success of Acer has made Stan Shih one of the top executives in the world. "Shih is considered to be a high-tech visionary in Taiwan". Given what he has done for the company in the 24 years since it was founded, the quote can be considered very reasonable. The decisions made by Stan Shih have made Acer one of the fastest-growing companies in the world over the last two decades. "Acer is perhaps the only PC vendor to have made significant progress in the consumer arena in the last ten years".

The consumer, their first priority, has helped them in their growth; their effort to meet customer satisfaction has succeeded. Implementing their "three in one" strategy has improved their sales and growth. The independence of each branch, and even of each employee, to make decisions with business sense for the good of the company has worked out well. Stan Shih, a very optimistic person with a great sense of business, has a very good and promising view for Acer based on the strategies the corporation has applied.

Three reasons make Stan Shih believe that the strategies and the structure will create an even more promising future. First, under the structure, each business unit takes over its current operating responsibility. Second, the vision he developed has to win everyone's consent; with consensus established, they have a greater chance of turning their vision into reality together. Third, a shared common interest in any visionary strategy will create more strength and motivate the full cooperation of every colleague. The future of Acer is still far ahead, and new visions of advanced technology are still needed.

"The Acer group is weathering out changes in the PC business with a strategy aimed at long-term diversification". Future vision can be built from experience and knowledge. The strategies they have applied have a high probability of being applicable in the future. The problem is that the market always changes, as happened in the tech slump at the end of 1998. Will the flexibility of their corporate strategy cover or meet the flexibility of the world market? And will they have the strength to face such challenges in the future and keep competing in the global market?

In my opinion, their corporate culture and corporate strategy have a high possibility of meeting those challenges. The strategy applied so far has proved that the corporation can be very flexible in facing barriers. They are ready to face the challenges, and the structure they built as a foundation is very strong. One more thing that personally impresses me about this company is the vision and strategy of Stan Shih in trusting Acer's branches, their executives, and even individual employees to make their own business decisions for the good of the company.

The decentralized strategy applied in the company has worked out very well and produced great results. Stan Shih will resign as CEO of the company in the near future, but his replacement has already been built up and prepared to take the company to perhaps an even greater future through its company culture and strategy. From this point of view, I can say that Acer is ready for everything and every possibility in the future.

It shows that the strategy they applied is aimed not only at sales, operating efficiency and better product quality, but also at good human resources. One impression I take from doing this report is that Stan Shih is a genius because of his vision in strategy, building every aspect of the company at one time, and that Acer is a great company because of its great pioneer and founder, for the present and for the future.

Technology And Stock Market

The purpose of this research paper is to prove that technology has been good for the stock market. Thanks to technology, there are now more traders than ever because of the ease of trading online with firms such as Auditrade and Ameritrade. There are also more stocks that are doing well because they are in the technology field. The New York Stock Exchange and NASDAQ have both benefitted from the recent technological movement. The NYSE says they “are dedicated to maintaining the most efficient and technologically advanced marketplace in the world.

The key to that leadership has been state-of-the-art technology and systems development. Technology serves to support and enhance human judgement at the point of sale. NASDAQ, the world's first fully electronic stock market, started trading on February 8th, 1971. Today, it is the fastest growing stock market in the United States. It also ranks second among the world's securities markets in terms of dollar value. By constantly evolving to meet the changing needs of investors and public companies, NASDAQ has achieved more than almost any other market, in a shorter period of time.

Technology has also helped investors buy stocks in other markets. Markets used to open at standard local times. This would cause an American trader to sleep through the majority of a Japanese trading day. With more online and afterhours trading, investors have more access to markets so that American traders can still trade Japanese stocks. This is also helped by an expansion of most market times. Afterhours trading is available from most online trading firms. For investing specialists, technology provides operational capability for handling more stocks and greatly increased volumes of trading.

Specialists can follow additional sources of market information, and multiple trading and post-trade functions, all on "one screen" at work or at home. They are also given interfaces to "upstairs" risk-management systems, and they have the flexibility to rearrange their physical workspaces, terminals and functional activities. Floor brokers are helped by support for an industry-wide effort to compare buy/sell contracts for accuracy shortly after the trade. They are also given flexibility in establishing working relationships using the new wireless voice headsets and hand-held data terminals.

The ability to provide new and enhanced information services to their trading desks and institutional customers is also provided, along with a comprehensive order-management system that systematizes and tracks all outstanding orders. Technology gives a market's member organizations flexibility in determining how to staff their trading floor operations, as well as flexibility in using that market's provided systems, networks and terminals or interfacing their own technology. They are given assurance that their market will have the systems capacity and trading floor operations to handle daily trading in billions of shares.

Member organizations get faster order handling and associated reports to their customers, along with speedier and enhanced market information. They also have a regulatory environment which assures member organizations that their customers, large and small, can trade with confidence. Technology also allows lower costs, despite increasing volumes and enhanced products. Companies listed on the NYSE are provided with an electronic link so they may analyze daily trading in their stock and compare market performance during various time periods.

The technology also supports the visibility of operations and information, and the regulated auction-market procedures, which listed companies expect from their "primary" market in support of their capital-raising activities and their shareholder services. Institutions get enhanced information flow from the trading floor, using new wireless technologies, as to pre-opening situations, depth of market, and indications of buy/sell interest by other large traders.

Also supported are the fair, orderly, and deeply liquid markets which institutions require in order to allocate the funds they have under management, whether placing orders in size for individual stocks (block orders) or executing programs (a series of up to 500 orders, usually related to an index). For institutional investors, technology gives information on timely trades and quotes and makes them available through member firms, market data services, cable broadcasts and news media.

They also are provided with a very effective way of handling "smaller" orders, giving them communications priority and full auction-market participation for "price improvement", yet turning the average market order around in 22 seconds. Price continuity and narrow quotation spreads, which are under constant market surveillance, and a regulatory environment which enforces trading rules designed to protect "small investors", are also supported. There are many different kinds of equipment used on the stock market.

One of these machines is SuperDot, an electronic order-routing system through which member firms of the NYSE transmit market and limit orders directly to the trading post where the stock is traded. After the order has been completed in the auction market, a report of execution is returned directly to the member-firm office over the same electronic circuit that brought the order to the trading floor. SuperDot can currently process about 2. billion shares per day. Another piece of machinery is the Broker Booth Support System.

The BBSS is a state-of-the-art order-management system that enables firms to quickly and efficiently process and manage their orders. BBSS allows firms to selectively route orders electronically to either the trading post or the booths on the trading floor. BBSS supports the following broker functions: receiving orders, entering orders, rerouting orders, issuing reports, research, and viewing other services via terminal "windows". The overhead "crowd" display is America's first commercial application of large-scale, high-definition, flat-screen plasma technology. It shows trades and quotes for each stock.

The display also shows competing national market system quotes. Clear, legible information is displayed at wide viewing angles, and full color and video capabilities are also provided. The "Hospital Arm" Monitor is suspended for convenient viewing by specialists. Multiple data sources that are displayed include point-of-sale books, overhead "crowd" displays, market montage and various vendor services, and the list of information sources is going to continue expanding. The Point-of-Sale Display Book is a tool that greatly increases the specialist's volume-handling and processing capabilities.

Using powerful workstation technology, this database system maintains the limit order book for which the specialist has agency responsibility, assists in the recording and dissemination of trades and quotation changes, and facilitates the research of orders. All of this serves to eliminate paperwork in the processing of orders. The Consolidated Tape System is an integrated, worldwide reporting system of price and volume data for trades in listed securities in all domestic markets in which the securities are traded.

The Hand-Held is a mobile, hand-held device that enables brokers to receive orders, disseminate reports, and send market "looks", in both data and image format, from anywhere on the trading floor. The Intermarket Trading System (ITS) is a display system installed in 1978 that links all major U.S. exchanges. ITS allows NYSE and NASDAQ specialists and brokers to compare the price of a security traded on multiple exchanges in order to get the best price for the investor. These are the machines that have helped greatly increase the buying and selling of stocks over the past few years.
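To make the kind of best-price comparison ITS performs a little more concrete, here is a minimal sketch in Python. The exchange names, quotes, and the best_quote() helper are hypothetical illustrations for this paper only, not part of any real market system.

```python
# Minimal sketch of ITS-style best-price selection across exchanges.
# All exchange names, quotes, and helpers here are hypothetical.

def best_quote(quotes, side):
    """Return the (exchange, price) pair that is best for the investor.

    quotes -- mapping of exchange name to quoted price for one security
    side   -- "buy" (lowest offer is best) or "sell" (highest bid is best)
    """
    if side == "buy":
        return min(quotes.items(), key=lambda item: item[1])
    return max(quotes.items(), key=lambda item: item[1])


if __name__ == "__main__":
    # Hypothetical quotes for one listed security on several exchanges.
    quotes = {"NYSE": 54.25, "NASDAQ": 54.20, "Pacific": 54.30}

    exchange, price = best_quote(quotes, side="buy")
    print(f"Route buy order to {exchange} at {price}")   # lowest offer

    exchange, price = best_quote(quotes, side="sell")
    print(f"Route sell order to {exchange} at {price}")  # highest bid
```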

There are great advantages to trading today over the situation that past traders faced. The biggest beneficiaries of this new technology are investors themselves: they have all day to trade instead of trading only during market hours, they have more stocks to choose from, and the markets are very high, so people are making a lot of money. In conclusion, the research I have done on this project has confirmed what I originally thought to be true: the stock market has greatly benefited from recent advances in technology.

Database Comparison of SQL Server 2000, Access, MySQL, DB2, and Oracle

This paper will compare and contrast five different database management systems on six criteria. The database management systems (DBMS) that will be discussed are SQL Server 2000, Access, MySQL, DB2, and Oracle. The criteria that will be compared are the systems' functionality, the requirements that must be met to run the DBMS, the expansion capabilities – whether it is able to expand to handle more data over time – the types of companies that typically use each one, the normal usage of the DBMS, and the costs associated with implementing the DBMS. System Functionality

Microsoft Access is a database engine and development environment in one package. It is typically workstation-based and designed to be easy to use, even for users with no experience, yet it also provides advanced functionality for experienced users. MySQL is the largest open-source RDBMS, and it is server-based, as are the rest of the DBMSs that will be discussed. According to the mysql.com website, it offers high reliability and performance, easy use and deployment, freedom from platform lock-in by providing ready access to source code, and cross-platform support.

SQL Server is an enterprise class RDBMS from Microsoft. It is part of the Back Office Suite of products. Although it is always server-based in production, it can be client-based in development. DB2 is also an enterprise-class DBMS, produced by IBM. It offers some object-oriented functionality, as well as cross-platform compatibility, and is server-based. Finally, Oracle offers much of the same functionality as DB2, with cross-platform capability, and some object-oriented features. It, as well, is server-based.
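Whatever the engine, application code typically reaches these server-based systems through a database driver, and in Python such drivers follow a common connect/cursor/execute pattern (pyodbc is commonly used for SQL Server, mysql.connector for MySQL, and so on). The sketch below is only an illustration of that pattern under stated assumptions: it uses the standard-library sqlite3 module as a stand-in engine so it runs anywhere, and the product table and query are made up for this paper.

```python
# A minimal DB-API-style round trip. sqlite3 is only a stand-in engine so
# the sketch is self-contained; with a server-based DBMS one would import
# the vendor driver (e.g. pyodbc or mysql.connector) and change the
# connect() call, while the cursor/execute/commit pattern stays the same.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for a server connection
cur = conn.cursor()

# Hypothetical table used only for this illustration.
cur.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
cur.execute("INSERT INTO product (name, price) VALUES (?, ?)", ("widget", 9.99))
conn.commit()

cur.execute("SELECT name, price FROM product WHERE price < ?", (20,))
for name, price in cur.fetchall():
    print(name, price)

conn.close()
```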

System Requirements There is a correlation between the complexity of the DBMS and the system requirements. For instance, Access can be installed on any Windows-based operating system from Windows 95 and above. SQL Server, in the widely used Standard and Enterprise editions, is also strictly Windows-based, but must be run on Windows NT or 2000 Servers. The personal and development editions of SQL Server may be run on Windows NT Workstation, and Windows 2000 and XP Professional, in addition to the server platforms.

MySQL runs on a wide variety of platforms, including the Windows platforms, Sun Solaris, FreeBSD, Mac OS X, and HP-UX, to list a few. DB2 will run on Windows NT 4 and higher, Sun Solaris, HP-UX and Linux. Oracle will run on all of the platforms supported by DB2, as well as AIX 4.3.3 or higher and Compaq Tru64 5.1. Expansion Capabilities Access is considered to be a small DBMS, with a maximum database size of 1 GB; therefore, it has very limited expansion capabilities.

MySQL does offer expansion, including clustering capability. MySQL also offers an enterprise-class DBMS through a joint venture with SAP. SQL Server, DB2, and Oracle, since they are all considered to be enterprise-class DBMS, are highly expandable, with maximum database size into the terabytes (TB). Truly, these databases are at a point where the limit is actually in the operating system, not the DBMS. Types of Companies There are different markets for the different classes of DBMS.

Access databases and applications will be used company-wide in very small companies. These databases can be found in different departments of larger companies, but would not be used at a company level. MySQL, according to its website http://www.mysql.com, has over 6 million installations, including companies like Yahoo and the Associated Press. I think MySQL would be a good fit for a mid-sized company that cannot afford the price of the higher-end DBMSs but needs more functionality, security, and robustness than is offered by Access.

Finally, the large DBMS systems like SQL Server, Oracle, and DB2 are typically only utilized in large companies, because of the investment required to install and maintain these databases. Database Use Each of the databases is suited to particular classes of use. Although Access can be used in a multi-user environment, it is not a good choice when there will be multiple concurrent users, because Access does not have robust transaction processing as the other DBMSs do. Typically, an Access application will be a single-user installation on a workstation.
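As the next paragraph notes, the server-class engines handle concurrent users through transaction processing and record locking. Below is a minimal sketch of that commit-and-rollback discipline; it again uses sqlite3 purely as a stand-in engine, and the accounts table and transfer() helper are invented for illustration, so it should not be read as how any of these five products is actually programmed.

```python
# Sketch of the commit/rollback discipline a multi-user DBMS provides.
# sqlite3 stands in for the server engine; the accounts table and the
# transfer() helper are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 25.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts; either both updates apply or neither does."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        # A server-class engine would also hold row locks here so that
        # concurrent users cannot see the half-finished transfer.
        conn.commit()
    except Exception:
        conn.rollback()   # undo both updates if anything failed
        raise

transfer(conn, "alice", "bob", 40.0)
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```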

All of the other DBMS are suited to handle multi-user concurrency and offer a lot of features around transaction processing and record locking to prevent issues from arising. These databases can be found in client/server applications, as well as applications that utilize internet or intranet pages as a front end. Cost The cost for the different DBMS varies widely (in fact, from nothing, to millions of dollars). A standalone version of Access (without an upgrade), costs about $339. It is also included with the Office XP Professional and Developer Editions.

MySQL is free – if the application you are developing is open-source. If the application is proprietary, then the cost will be $495 per database server, with no cost for client access licenses. The more database servers that are purchased, the lower the cost per server is, down to $175 per server if 250 or more are purchased. Now it gets a bit more complicated. DB2 Enterprise, in a server with a single processor, will cost $25,000. At the high end, it will cost $800,000 for a 32 processor version.

If the company wants OLAP and Data Mining, those are additional, with prices up to $2,016,000 for a 32-processor implementation. SQL Server is a bit more reasonable – and OLAP and Data Mining are included in the Enterprise Edition. On the low end, SQL Server Standard with one processor will be $4,999. At the high end, SQL Server Enterprise with 32 processors will cost $639,968 (not as bad as those 2 million dollars). Oracle is the most expensive: at $40,000 for Enterprise Edition on a single processor, and over $2.5 million for 32 processors with OLAP and Data Mining, it tops the list.
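Because the figures above come from several different pricing models, it can help to see them side by side. The snippet below simply collects the list prices quoted in this paper and sorts them; it makes no claim about current, complete, or per-seat pricing, which varies widely in practice.

```python
# The list prices quoted above, gathered in one place for comparison.
# These are only the figures cited in this paper; real licensing terms
# (editions, client licences, OLAP/Data Mining add-ons) vary widely.
QUOTED_PRICES = {
    "Access (standalone copy)":                 339,
    "MySQL (proprietary use, per server)":      495,
    "SQL Server Standard, 1 processor":       4_999,
    "DB2 Enterprise, 1 processor":           25_000,
    "Oracle Enterprise, 1 processor":        40_000,
    "SQL Server Enterprise, 32 processors": 639_968,
    "DB2 Enterprise, 32 processors":        800_000,
}

for product, price in sorted(QUOTED_PRICES.items(), key=lambda item: item[1]):
    print(f"{product:<40} ${price:>10,}")
```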

Negative Effects Of Technology

For a while now, science has been a mystery to man, leading him to want to discover more and more about it. In many respects this is dangerous to our society, because scientific developments in new fields have been advancing too quickly for our minds to comprehend. Things such as cloning, organ donation, and pesticides are things that the world may sometimes find useful, when in reality they only bring civilization down. Raising science to the status of godhood carries with it inherent risks that demand careful consideration.

Developmental experiments such as cloning have been successful, but they bring with them some very negative results. For example, in some early experiments in animal cloning, potential dangers were encountered: cloned cows developed faulty immune systems, cloned mice in other projects grew obese, and in most studies cloned animals seemed to grow old faster and die younger than the usual members of the species.

When adding on to the human race, not only are we increasing our huge population rate, but we are also adding humans and animals that have defects as well as short life spans. It would be a waste of our government's money to bring something to life that we will have to take extra care of, just to have it die in a matter of weeks, as quickly as a goldfish dies. When talking about organ donation, people usually think that it is a great discovery and that scientists have made a breakthrough in this portion of the medical field, without knowing how high the chances are that the procedure will be ineffective.

The immune system attacks anything that lacks histocompatibility antigens or has antigens different from those found in the rest of the body, such as those found on invading viruses, bacteria, or other microorganisms. This recognition system causes the immune system to attack transplanted tissues that have different antigens, because it has no way to tell the difference between harmful and helpful organisms, therefore causing the body to reject the organ, which causes infection in the person's body.

Also, donated organs go to the patient who is nearest death, even though a healthier patient might benefit more by living longer after the transplant. A drug called tacrolimus (FK-506) was found to be even more effective for kidney, liver, heart, and lung transplants. However, patients who take this drug still face some increased risk of infection and cancer, and the drug can cause kidney damage. This shows that when scientists try to play god, their plans are ineffective, and that no matter how much you try to perfect the human body, negative things will be the greater outcome.

Another improvement in our society is the creation of pesticides. When a child is growing up, they need to eat their fruits and veggies so their not-so-tough immune systems can grow stronger, but when such strong pesticides are being sprayed on crop fields, it becomes difficult to feed your kids these things, because children cannot convert these toxins into harmless chemicals as quickly as adults can. The largest hazard of all is the fact that pesticides can also cause many people to be susceptible to illness and disease.

Most pesticides are synthetic chemicals derived from petroleum. They were first developed as offshoots of nerve gas used during WWI. A National Cancer Institute study indicated that the likelihood of a child contracting leukemia was more than six times greater in households where herbicides were used for lawn care. According to the New York State Attorney General's office, the EPA considers 95% of the pesticides used on residential lawns to be probable causes of abnormal growth of tissue.

Pesticides have also been linked to a huge rise in the rate of breast cancer, and besides causing cancer, pesticides are also likely to cause infertility, birth defects, learning disorders, mental disorders, allergies, and multiple chemical sensitivities, along with other disorders of the immune system. Though scientists did try to eliminate bugs with the use of pesticides, they have created an even bigger problem.

Now, more than ever, people are susceptible to certain illnesses because of these breakthroughs. In the end, most people don't realize that this so-called industrial revolution in the medical as well as the chemical field hasn't done much for us to be excited about. It has only put us back, by having us research cures for the illnesses that these revolutions have brought us. In the end no one can create life, alter it, nor destroy it but god himself.

Technology and the Future of Work

Every society creates an idealised image of the future – a vision that serves as a beacon to direct the imagination and energy of its people. The Ancient Jewish nation prayed for deliverance to a promised land of milk and honey. Later, Christian clerics held out the promise of eternal salvation in the heavenly kingdom. In the modern age, the idea of a future technological utopia has served as the guiding light of industrial society.

For more than a century utopian dreamers and men and women of science and letters have looked for a future world where machines would replace human labour, creating a near workerless society of abundance and leisure (J. Rifkin 1995, p. 42). This paper will consider developments in technology, robotics, electronic miniaturisation, digitisation and information technology, with their social implications for human values and the future of work. It will argue that we have entered post-modernity or post-Fordism, a new-age technological revolution which profoundly affects social structure and values.

Some issues that will be addressed are: the elimination of work in the traditional sense, longevity, early retirement, the elimination of cash, the restructuring of education and industry, and a movement to global politics, economics and world government. In particular this paper will suggest that the Judeo-Christian work ethic, with society's goal of full employment in the traditional sense, is no longer appropriate, necessary or even possible in the near future, and that the definition of work needs to be far more liberal.

It argues that as a post-market era approaches, both government and society will need to recognise the effects of new technology on social structure and redistribute resources; there will need to be rapid development of policies to assist appropriate social adjustments if extreme social unrest, inequity, trauma and possible civil disruption are to be avoided. Yonedji Masuda (1983) suggests we are moving from an industrial society to an information society and maintains that a social revolution is taking place.

He suggests that we have two choices: 'Computopia' or an 'Automated State', a controlled society. He believes that if we choose the former, the door to a society filled with boundless possibilities will open; but if the latter, our future society will become a forbidding and horrible age. He optimistically predicts our new future society will be 'computopia', which he describes as exhibiting information values where individuals will develop their cognitive creative abilities and citizens and communities will participate voluntarily in shared goals and ideas.

Barry Jones (1990) says we are passing through a post-service revolution into a post-service society – which could be a golden age of leisure and personal development based on the cooperative use of resources. Jeremy Rifkin (1995) uses the term 'The Third Industrial Revolution', which he believes is now beginning to have a significant impact on the way society organises its economic activity.

He describes it as the third and final stage of a great shift in economic paradigm, and a transition to a near workless information society, marked by the transition from renewable to non-renewable sources of energy and from biological to mechanical sources of power. In contrast to Masuda, Jones and Rifkin, Rosenbrock et al. (1981) delved into the history of the British Industrial Revolution, and they concluded firmly that we are not witnessing a social revolution of equivalent magnitude, because the new information technology is not bringing about new ways of living.

They predicted that we are not entering an era when work becomes largely unnecessary and that there will be no break with the past; rather, we will be seeing the effect of new technology in the next 20 years as an intensification of existing tendencies, and their extension to new areas. I suggest that Rosenbrock might come to a different conclusion 15 years later, with the benefit of hindsight of changing lifestyles such as the persistent rise in unemployment and an aging society.

Population is aging, especially in developed countries, and will add significantly to a possible future lifestyle of leisure. Most nations will experience a further rapid increase in the proportion of their population aged 65 years and older by 2025. This is due to a combination of the post-war baby boom and the advances in medicine, health and hygiene technology, with the availability and spread of this information. Governments are encouraging delayed retirement, whereas businesses are seeking to reduce the size of their older workforce.

The participation rates of older men have declined rapidly over the past forty years with the development of national retirement programmes. In many developed countries the number of men 65 and older who remain in the workforce has fallen below ten percent. Due in part to technological advances there are more older people, and they are leaving the workforce earlier. Thus this body of people will contribute to the growing numbers of people with more leisure time (Clerk 1993).

Professor Nickolas Negroponte (1996) of the MIT Media Lab points out that, in percentage per capita, it is those people under seventeen years of age and over fifty-five who are the greatest users of the Internet, and that the Internet and other information technologies encourage democracy and global egalitarianism. Furthermore, he envisions a new generation of computers so human and intelligent that they are thought of more as companions and colleagues than as mechanical aids.

Jones (1990) points out a number of elements relating to the adoption of new technology that have no precedent in economic history and suggests that there is a compelling case for the rapid development of policies to assist appropriate social adjustments. He points out that manufacturing has declined as the dominant employer and that there has been a transition to a 'service' or post-industrial economy in which far more workers are employed in producing tangible and intangible services than in manufacturing goods.

The cost of technology has fallen dramatically relative to the cost of human labour. Miniaturisation has destroyed the historic relationship between the cost of labour and the cost of technology, allowing exponential growth with insignificant labour input, which is leading to the reduction of labour in all high volume process work. Sargent (1994) points out that in Australia during the last decade, the rich have become richer and the poor poorer: the top 20 per cent of households received 44 per cent of national incomes in 1982, and by 1990 this had risen to 47 per cent.

But the top 1 per cent received 11 per cent of incomes in 1982, and this rose to 21 per cent in 1990. Meanwhile unemployment continued to increase. Jones (1990) further points out that the new technology has far greater reliability, capacity and range than any which proceeded it. Microprocessors can be directed to do almost anything from planning a school syllabus and conducting psychotherapy to stamping out metal and cutting cloth.

It is cheaper to replace electronic modules than to repair them, and the new technology performs many functions at once, generates little heat or waste and will work twenty-four hours a day. The making and servicing of much precision equipment, which once required a large skilled labour force, has been replaced by electronic systems that require fewer workers. The relationship between telecommunications and computers multiplies the power of both; the power for instant, universal communications is unprecedented, and consequently the ability of any individual economy to control its own destiny is reduced.

All advanced capitalist nations and many third-world and communist blocs are now largely interdependent; this has led to an international division of labour and the growth of the multinational corporations. The global economy is rapidly taking over from individual nations. The adoption of each new generation of technology is increasing, and each is rapidly becoming cheaper than its predecessor. Technologies developed in the 1960s have seen rapid rates of development, adoption and dissemination.

Less developed countries can now acquire the new technologies due to the rapid decrease in cost, and the combination of their low wages and the latest technology makes them formidable competitors in the global market. Almost every area of information-based employment, tangible services and manufacturing is being profoundly influenced by new technology. Jones (1990) notes that few economists have addressed the many social implications that stem from the development of science and technology.

Most economists' thinking is shaped by the Industrial Revolution, and they are unable to consider the possibility of a radical change from the past; they give no hint that Australia has passed through a massive transition from a goods-based economy to a service base. Attempts to apply old remedies to new situations are simply futile. Jenkins (1985) disagrees with Jones and argues on behalf of the traditional economic model, suggesting that it will continue to work well in the new era and that the facts do not support any causal relationship between automation, higher productivity, and unemployment.

He claims that it cannot be emphasised too strongly that unemployment does not stem from the installation of new technology. He says it is the failure to automate that risks jobs and the introduction of new technology will increase the total number of jobs. Further, he suggests that the primary reason for introducing new technology such as computer controlled robots is to reduce costs and to improve product quality and that lower costs mean lower prices.

This results in increased demand for goods and services, which in turn generates higher output, employment and profits. He suggests that higher profits induce higher investment and research and development expenditure, while the domestic producers of robotics and microelectronics-based equipment increase output and employment. He sees the greatest problem simply in the need for occupational restructuring of employment, as the need for software experts, computer programmers, technicians and engineers is likely to rise sharply.

Rifkin (1995), like Jones, believes that the old economic models are inappropriate in the 'Third Industrial Revolution' and describes views similar to Jenkins's as "century old conventional economic wisdom" and "a logic leading to unprecedented levels of technical unemployment, a precipitous decline in purchasing power, and the spectre of a worldwide depression." It is questionable whether Jenkins's solution of re-training will be able to replace all displaced workers.

Educator Jonathon Kazol (1985) points out that education for all but a few domestic jobs starts at the ninth-grade level, and for those workers, the hope of being retrained or schooled for a new job in the elite knowledge sector is without doubt out of reach. Even if re-training and re-education on a mass scale were undertaken, the vast numbers of dislocated workers could not be absorbed, as there will not be enough high-tech jobs available in the automated economy of the twenty-first century. A British Government backed study by Brady and Liff (1983) clearly supported this view.

They concluded that jobs may be created through new technology, but it will be a very long time before the gains can offset the losses from traditional industries. Even the neo-classical economists continue to subscribe to traditional economic solutions, yet they have been met with stiff opposition over the years. In Das Kapital, Marx (McLelland 1977) predicted in 1867 that increasing the automation of production would eliminate the worker altogether, and believed the capitalists were digging their own graves, as there would be fewer and fewer consumers with the purchasing power to buy the products.

Many orthodox economists agreed with Marx's view in many respects but, unlike Marx, supported the notion of 'trickle-down economics'. They said that by 'releasing' workers, the capitalists were providing a cheap labour pool that could be taken up by new industries, which in turn would use the surplus labour to increase their profits; those profits would in turn be invested in new labour-saving technology, which would once again displace labour, creating an upward cycle of prosperity and economic growth.

Such a viewpoint may have some validity in the short term, but one must consider the longer-term effects of such a cycle; it is questionable whether it could be sustained. Another important question is whether consumerism will continue unabated, whether it is a normal human condition to see happiness and salvation in the acquisition of goods and services. The word "consumption" until the present century was steeped in violence. In its original form the term, which has both

French and English roots, meant to subdue, to destroy, to pillage. Compared with the mid-1940s, the average American is consuming twice as much now. The mass consumption phenomenon was not the inevitable result of an insatiable human nature or a phenomenon that occurred spontaneously; quite the contrary. Business leaders realised quite early that they needed to create the 'dissatisfied customer', and to make people 'want' things that they had not previously desired (Rifkin 1996).

Nations throughout the world are starting to understand the ill effects that production has on the 'natural' environment, and that the acquisition of goods and services has on the psyche. With more people having less money, and a trend towards a lifestyle that emphasises quality rather than quantity, it is questionable whether consumerism will continue, or whether it is desirable that it should.

Science and technology's profile grew to such an extent in the early part of this century in the United States that the supporters and proponents of technocracy were prepared to abandon democracy, favouring 'rule by science' rather than 'rule by humans', and advocated the establishment of a national body, a technate, that would be given the power to assemble the nation's resources and make decisions governing the production and distribution of goods and services.

The image of technology as the complete and invincible answer has become somewhat tarnished in recent years, with technological accidents such as those which occurred in the nuclear power stations at Chernobyl and Three Mile Island, and with threats of nuclear war and environmental degradation increasing and coming to the fore. Yet the dream that science and technology will free humanity from a life of drudgery continues to remain alive and vibrant, especially among the younger generation.

During the 1930s, government officials, trade unionists, economists and business leaders were concerned that the result of labour-saving devices, rising productivity and efficiency was a worsening of the economic plight of every industrial nation. Organised labour wished to share in the gains made by business, such as increased profits from fewer workers required. They joined together to combat unemployment by fighting to reduce the working week and improve wages, thus sharing the work and profits amongst the workers and providing more leisure time.

By employing more people at fewer hours, labour leaders hoped to reduce unemployment brought on by labour-saving technology, stimulate purchasing power and revive the economy. Clearly, unions saw the answer to the problems resulting from technological change as lying partly in increased leisure time (Rifkin 1996). Unemployment is steadily rising; global unemployment has now reached its highest level since the great depression of the 1930s. More than 800 million people are now underemployed or unemployed in the world, while the rich are becoming richer and the poor are getting poorer.

Unemployment rates among school leavers in South Australia are as high as twenty-five per cent, and nine per cent for the rest of the community, which leads one to question whether the traditional economic model is working. Trade unions have pursued their response to unemployment throughout the years, with wages and salaries growing and the working week reduced; for example, in the UK the working week has been reduced from eighty-four hours in 1820 down to thirty-eight hours in 1996 (Jones 1990).

The typical government response to unemployment has been to instigate public works programmes and to manipulate purchasing power through tax policies that stimulate the economy and lower tax on consumption. It can be seen in Australia that governments no longer see this as the answer; in fact there is an opposite approach, with a strong movement for a goods and services tax to redistribute wealth, as proposed by the defeated Liberal Party of Andrew Peacock in 1992 and now being re-introduced. Many job creation schemes and retraining programmes are being abandoned by the new Australian Liberal Government of John Howard.

However, the power of the workers and unions in 1996 is severely restricted. The unions have lost the support of workers, as reflected in their falling membership, and can no longer use the threat of direct action with jobs disappearing fast. The Liberal Government passed legislation to limit collective bargaining, and the unions' power of direct action has become even more eroded and ineffective because of global competition and the division of labour, while automation gave companies many alternatives. Unions have been left with no option but to support re-training, whether they believe it is the answer to unemployment or not.

Today, it seems far less likely that the public sector, the unions or the marketplace will once again be able to rescue the economy from increasing technological unemployment. The technological optimists continue to suggest that new services and products resulting from the technological revolution will generate additional employment. While this is true, the new products and services require fewer workers to produce and operate, and certainly will not counteract those made redundant through obsolete trades and professions.

Direct global marketing by way of the 'Superhighway', the 'Internet' and other forms of instant telecommunications is making thousands of middle marketing employees obsolete. For example, the SA bank introduced phone banking some while ago; it is now the first bank in South Australia to trade on the Internet (http://www.banksa.com.au), and many rural banks are closing. Also, it has just been announced by the electoral commission that voting by telephone will be trialed next year, with enormous potential job loss.

The widely publicised information superhighway brings a range of products, information and services direct to the consumer, bypassing traditional channels of distribution and transportation. The number of new technical jobs created will not compare with the millions whose jobs will become irrelevant and redundant in the retail sectors. Jones (1990) notes that there is a coy reticence from those who believe that social structure and economics will continue as in the past to identify the mysterious new labour-absorbing industry that will arise in the future to prevent massive unemployment.

Jones believes that 'industry X', if it does appear, will not be based on conventional economic wisdom but is likely to be in areas where technology will have little application; he suggests it may be in service-based areas such as education, home-based industry, leisure and tourism. Despite Barry Jones's predictions, most service industries are very much affected by new technology. Education is fast becoming resource-based, with students at primary, secondary, technical and tertiary levels expected to do their own research and projects independent of class teachers, with schools being networked and teaching delivered through video conferencing.

The conventional teacher is fast becoming obsolete, with the number of permanent teachers falling. There are numerous examples of workers in service industries being displaced by technology. Shop fronts such as banking, real estate, travel and many more are disappearing. Small retail food outlets continue to collapse with the growth of supermarkets and food chains organised around computer technology, and on-line shopping from home. Designers of all types are being superseded by CAD computer design software. Even completely automated home computerised services, such as a hardware and software package called "Jeeves", are now available.

Business management and company directors are finding voice-activated laptop computer secretaries far more reliable and efficient than the human form. The New Zealand Minister for Information and Technology, Hon. Maurice Williamson MP, wrote the foreword for the paper 'How Information Technology Will Change New Zealand': On the threshold of the twenty-first century we are entering a period of change as far reaching as any we have ever seen. Since the industrial revolution people have had to locate themselves in large centres where they could work with others, but now new technologies are rendering distance unimportant.

The skills that are needed in tomorrow's society will be those associated with information and knowledge rather than the industrial skills of the nineteenth and twentieth centuries. Changing technology will affect almost every aspect of our lives: how we do our jobs, how we educate our children, how we communicate with each other and how we are entertained. As Williamson points out, with the explosion of technologies it is easy to lose sight of the larger patterns that underlie them.

If we look at the fundamental ways people live, learn and work, we may gain insights about everyday life. These insights are the basis for new technologies and new products that are making an enormous difference in people's lives. Stepping back from the day-to-day research for new electronic devices, we can see life being fundamentally transformed. There is the development of a networked society: a pattern of digital connections that is global, unprecedented, vital and exciting in the way it opens opportunities for entirely new markets and leisure.

As people make digital technology an integral part of the way they live, learn, work and play, they are joining a global electronic network that has the potential to reshape many of our lives in the coming decade. In the future, technologies will play an even greater role in changing the way people live, learn, work and play, creating a global society where we live more comfortably: with cellular phones and other appliances that obey voice commands, and energy-efficient, economical and safe home environments monitored by digital sensors.

There will be "smart" appliances and vehicles that anticipate our needs and deliver service instantly. We are seeing portable communications devices that work without wires; software intelligent agents that sort and synthesise information in a personally tailored format; and new technologies that provide increased safety and protect our freedom, ranging from infra-red devices that illuminate the night to microwave devices that improve radar and communications.

People are also learning more efficiently, with interactive video classrooms that enable one-on-one attention and learning systems that remember each student's strengths and tailor lesson plans accordingly. There are laptop computers and desktop video clips that bring in-depth background on current events, with instant access to worldwide libraries and reference books with full motion pictures.

People are working more productively, with "virtual offices" made possible by portable communications technologies, and software that allows enterprise-wide business solutions at a fraction of the usual cost and in a shorter length of time, with massive memory available at the desktop and laptop levels. There are "intelligent" photocopiers that duplicate a document and route it to a file, and simultaneous desktop video-conferencing from multiple locations, sending voice and data over the same communications channel.

With the explosion of leisure activities available, people play more expansively. There are hundreds of movies available on demand at home, virtual-reality games, a growth in the number of channels delivered by direct satellite television, videophones that link faces with voices, interactive television for audience participation, instant access to worldwide entertainment and travel information and interactive telegaming with international partners (Texas Instruments 1996).

This paper has considered developments in electronic miniaturisation, robotics, digitisation and information technology, with their social implications for human values and the future of work. It has argued that we are entering a post-modern, post-market era in which life will no longer be structured around work in the traditional sense: there will be greater freedom and independent living, paid employment will be de-emphasised and our lifestyle will be leisure oriented.

I have argued that the social goal of full employment in the traditional sense is no longer appropriate, necessary or even possible, and that both government and society will need to recognise the effects of technology on social structure and re-organise resources to be distributed more equally if extreme social unrest, inequity, trauma and possible civil disruption are to be avoided. I foresee a scenario of a sustainable, integrated global community in which there will be some form of barter but cash will be largely eliminated; money will be 'virtual'.

A minimal number of people will be involved in, and enjoy, some forms of high-tech activity, while the vast majority will have a vocation that is essentially creative and enjoyable, perhaps involving the arts and music, with a spirituality that involves deep respect and care for the natural world and new forms of individual and group interaction. There will be minimal forms of world central democratic government. Vast forms of infrastructure will no longer be required, as citizens will largely be technologically independent.

Most communication and interaction will be instant and conducted from home, office or public terminal. There will be new forms and ways of living, and new family structures that may consist of larger and smaller groups. It will be a comfortable, pleasurable and leisure-based lifestyle in which all essentials and wants will be automatically provided through the processes of a largely self-sustaining and self-evolving technology.

Rifkin (1995) has a similar view, concluding that the road to a near-workerless economy is within sight and that it could lead to a safe haven or a terrible abyss; it all depends on how well civilisation prepares for the post-market era. He too is optimistic and suggests that the end of work could signal the beginning of a great social transformation, a rebirth of the human spirit.

Books and Technology: Is the Future of Printed Books in Jeopardy?

Technology has impacted our lives in innumerable ways. It is so embedded in our daily lives that we hardly give a thought to how easily we are living. Technology has changed our world significantly. But has the computer made life's activities too easy for us? Are we becoming a lazy nation, sitting at home and letting the computer think for us? I feel that this is the case in some situations. Although one can see the advantages of ordering certain products online, like clothing or hard-to-get items, must we resort to ordering everyday necessities, like groceries, over the computer?

I feel that a person who has to order online items that he or she can buy in the community must be extremely lazy. Computers are great for research, but one must be sure that the information being uncovered is credible. Computers may make activities like filing or organizing much simpler, yet people are not using their own minds to accomplish certain tasks. Our minds are not going to grow if we depend on a machine to think for us. Yes, there is a plethora of information available to us on the Internet, but is anyone applying it to everyday life?

Maybe so, but there is nothing like researching your interests through your own motivation. It is not hard to go to the library and read through books to gather information. The many people who read for leisure probably cannot imagine reading their favorite novel on the computer. One of the joys of reading is that you can be anywhere and still “lose” yourself in a book. You can sit in bed, or on your favorite chair, and thumb through a book. Books are imperative in the process of forming objectives. Books may not exist in the future because of technology.

Technology has brought forth many inventions, like audiobooks and the newly introduced electronic books, to simplify the process of reading. Is this really necessary? And even if it is, will printed books one day be extinct? Books have entertained people for a very long time. Oral tradition led to writing, and then movies and television came along. The computer is taking over all aspects of entertainment. Writer Connie Lauerman shares that Ruth Perry, a professor of literature at the Massachusetts Institute of Technology, believes people are 'too quick to jettison the old.'

She says that a young graduate student at another university recently called herself 'part of the last generation to learn from reading books. She said that other people after her…learned from reading on computers' (Lauerman 1). How can one enjoy reading a great novel on a computer? It seems there is something not quite right with that process. Perry states, "The experience of reading where you go back to look at another page, or compare… passages, that cannot be done on a screen….

I think there is some important way in which the sensory experience of reading a hand-held book feeds into thinking about it." (Lauerman). I feel that nothing can replace the experience of reading a book on a subject of interest. You can find information on the Internet, but this process is almost too easy, in the sense that you can read only specific areas of a subject without exploring all areas of that particular subject. It is hard to have an objective mindset if that is the way you research.

Although millions of people surf the Internet and gain information, they are not really processing it in their minds. An author states, “Inhabitants of digital culture watch text and graphics scrolling down and streaming across computer monitors. But they don’t always call it reading. On PC’s [personal computers], people search, surf, browse, log on, but seldom admit to reading” (Fortune 1). You can download any newspaper on the Web, but do you want to read your daily newspaper on your computer when you wake up in the morning? I would not think so.

Some companies have thought about this, and now they bring readers the electronic book: "…digital files of novels, magazines, and newspapers that can be downloaded into handheld gadgets for portable, paper-free reading" (Terrell). Here we go: you can still curl up in bed and read your favorite book. The device is "about the size and weight of a nice hardcover…. the screen is easy on your eyes and you can mark your pages" (Fortune 2). Are avid readers going to start buying into this new invention? The companies certainly hope so. The companies want to attract people who will pay for what they read on the Internet.

Each e-book will be able to download numerous books and newspapers. These companies expect consumers to fall in love with this newfangled device. But they still have a few issues to address, such as "high prices, limited reading selection, and uncertainty about which device will become the standard" (Terrell). Prices range from $300-$600 (Landers). Some of these devices will be cheaper because they will not be accompanied by a modem. Other problems include occasional glare on the screen and short battery life. Even with these shortcomings, companies are confident that their product will sell.

I still cannot imagine reading a book without literally turning the pages. It is going to be hard for these companies to convince consumers that they really need this product. But maybe it will catch on. Does this mean that printed books will cease to exist? Many scholars worry about the effect of the onslaught of technology on the book. Paul Mosher, a library director at the University of Pennsylvania, states, "The messianic leaders of the information-technology takeover are the same ones who told us the book was dead" (Belsie 2).

Mosher also makes the point that retail sales of printed books in the United States reached $23 billion last year (Belsie 2). But others still worry. Not only do the e-books pose a threat, but the multitudes of audiobooks and CD-ROM versions of books also assist in the decline of the book. Although audiobooks provide listeners with the added dimension of sound, writer Donna Seaman claims that “Listeners do miss out on the dreaminess that reading induces, and the subtle patterns you discover as your eye drifts back across a page or two” (Seaman 2).

She also mentions that when you read a book you can control the pace, whereas when listening to a tape you cannot control the speed. CD-ROM versions of books provide graphics and audio to enhance the joy of reading, but offer far less substantive text. Seaman states, "To read for pleasure is to transcend the physical as well as the body seeks the ideal balance between comfort and mental acuity" (Seaman 2). Many people sit in front of a computer all day long at their job. These people probably would not enjoy coming home and proceeding to read a novel on the computer.

There really cannot be any enjoyment in the process of clicking on icons when you are ready to turn the page. CD-ROMs are handy for research and other information, but as a tool for leisurely reading they do not compare to a printed book. The computer revolution is definitely changing the future of printed books. The printed book is endangered; I think that books will cease to exist in the far future. Libraries may become computer research centers or, if everyone owns a computer by then, libraries will cease to exist.

Hopefully, this will become a major issue in the future and will instigate some form of protest. Computers are very useful but do not provide much room for imagination. I think it is great that technology has produced innovations such as audiobooks, CD-ROMs, and electronic books. But as an avid reader and lover of printed books, I would like to state that technology should not step into the wondrous realm of reading. Books are the basis for many objectives, and we should not allow future generations to forget that.

UNIX vs NT

Building a good and stable network is extremely difficult. It takes a team of very knowledgeable engineers to put together a system that will provide the best service and will fulfill the needs of the company's users and clients. There are many issues to resolve and many choices to make. The toughest choice IT managers have to make is which server platform will be best for their environment. Many questions must be answered. Which server software offers complete functionality, with easy installation and management?

Which one provides the highest value for the cost? What kind of support and performance can be expected from this system? And, most important of all, which is more secure? In this paper, Microsoft Windows NT Server is compared to UNIX in the large commercial environment. The main focus of the comparison is on the areas of reliability, compatibility, administration, performance and security. Which system is worth the money? What can you expect from Windows NT Server out of the box and from UNIX out of the box?

NT can communicate with many different types of computers. So can UNIX. NT can secure sensitive data and keep unauthorized users off the network. So can UNIX. Essentially, both operating systems meet the minimum requirements for operating systems functioning in a networked environment. Put briefly, UNIX can do anything that NT can do, and more. Being over 25 years old, the UNIX design has been refined further than that of any other operating system deployed on a large scale. NT is fairly new, and some say it is a cheap rip-off of UNIX. But it is not cheap at all.

To purchase an NT server with 50 Client Access Licenses, one will spend $4,859.00. Not so bad. But it gets much more costly than this. This price is just for the software, and everyone knows that to build a network you need a lot more than this. E-mail has become an indispensable tool for communication. It is rapidly becoming the most popular form of communication. With Windows NT, you will have to buy a separate software package in order to set up an e-mail server. Many NT-based companies use Microsoft Exchange as their mail service.

It is a nice tool, but an expensive solution that has not seen great success in the enterprise environment. Microsoft Exchange Server Enterprise Edition with 25 Client Access Licenses costs $3,549.00. UNIX operating systems come with a program called Sendmail. There are other mail server software packages available for UNIX, but this one is the most widely used, and it is free. Some UNIX administrators feel that exim or qmail are better choices, since they are not as difficult to configure as sendmail.
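
Whichever of these mail servers is chosen, applications on the network do not need to care which one is running. As a rough sketch only, the Python snippet below hands a message to a local MTA (sendmail, exim or qmail) that is assumed to be listening on port 25 of the same machine; the addresses are placeholders, not taken from the paper.

    import smtplib
    from email.message import EmailMessage

    # Build a simple text message (the addresses are hypothetical).
    msg = EmailMessage()
    msg["From"] = "admin@example.com"
    msg["To"] = "user@example.com"
    msg["Subject"] = "Scheduled maintenance"
    msg.set_content("The file server will be rebooted at 22:00 tonight.")

    # Hand the message to whatever MTA (sendmail, exim, qmail) is
    # listening on the local SMTP port; the MTA handles delivery.
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.send_message(msg)

The point of the sketch is that the free UNIX mail servers all speak the same protocol, so switching from sendmail to exim or qmail does not change the client side at all.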

Both exim and qmail, like sendmail, are free; they are very stable but not very user friendly, and may not be the best choice for a company with a lot of users who are not computer oriented. So why do people choose NT? NT is often chosen because many customers are not willing to pay for the more expensive hardware required by most commercial flavors of UNIX. More important, however, is the overall cost of implementation, which includes system administration along with several other factors such as downtime, telephone support calls, and loss of data due to unreliability.

Unlike UNIX, Windows NT Server can handle only one task well, so more systems are needed to support users. What about manpower? What is it going to cost to support these systems? Because NT 4.0 lacks an enterprise directory on the scale of other systems, it requires more administrators to manage it in large enterprises. UNIX-based networks require much less manpower to maintain than NT. Both systems are able to run automated tasks, but running them is only useful when the scripts, tasks and executables can be run without human intervention.
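
As a hedged illustration of what "without human intervention" means in practice, the short Python script below could be scheduled by cron on a UNIX server: it checks disk usage on a few mount points and writes the result to a log file, never prompting anyone. The paths, threshold and cron schedule are assumptions for the example, not figures from the paper.

    #!/usr/bin/env python3
    # Unattended disk-space check, suitable for a cron entry such as
    # "0 * * * * /usr/local/bin/check_disk.py" (path is hypothetical).

    import logging
    import shutil

    THRESHOLD = 0.90                       # warn when a filesystem is over 90% full
    MOUNT_POINTS = ["/", "/home", "/var"]  # assumed mount points to watch

    logging.basicConfig(filename="/var/log/check_disk.log",
                        format="%(asctime)s %(levelname)s %(message)s",
                        level=logging.INFO)

    for mount in MOUNT_POINTS:
        usage = shutil.disk_usage(mount)   # total, used, free in bytes
        fraction = usage.used / usage.total
        if fraction > THRESHOLD:
            logging.warning("%s is %.0f%% full", mount, fraction * 100)
        else:
            logging.info("%s is %.0f%% full", mount, fraction * 100)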

So much of what runs on NT is GUI-based, and thus requires interaction with a human administrator, which rather defeats the purpose. NT servers lack remote control and scripting capabilities (these must be purchased from third-party vendors), and their instability requires rebooting once or twice per week. This means more monitoring and, most importantly, downtime. The estimated cost of setting up an NT network in a 1,000-user environment, including hardware, software and network management, would total about $900,000 for the first year.

The annual cost of management, maintenance and salaries for a Windows NT Server network would be around $670,000. Is there much difference in design? NT is often considered to be a "multi-user" operating system, but this is very misleading. An NT server will validate an authorized user, but once the user is logged on to the NT network, all he or she can do is access files and printers. The NT user cannot simply run any application on the NT server. When a user logs in to a UNIX server, he or she can then run any application they are authorized to run.

This takes a major load off the user's workstation. It also extends to graphics-based applications, since X server software is standard issue on all UNIX operating systems. Another big difference lies in disk-related design. A hallmark of the Microsoft suite of operating systems is its antiquated use of "drive letters", i.e. drive C:, drive D:, etc. This schema imposes hardware-specific limitations on system administrators and users alike. It is highly inappropriate for client/server environments, where network shares and file systems are supposed to represent hierarchies meaningful to humans.

UNIX allows shared network filesystems to be mounted at any point in a directory structure. A network share can also span multiple disk drives (or even different machines!) in UNIX, thus allowing administrators to maintain pre-existing directory structures that are well known to users, while still expanding the available disk space on the server and making such system changes transparent to users. This single difference between the UNIX and Windows operating systems further underscores the original intentions of their respective designers.
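
A small sketch of this difference, assuming a conventional UNIX directory layout: the Python function below walks upward from any path to find the mount point behind it, showing that users and programs only ever see the logical path, while the administrator is free to change which disk or NFS server backs it. The shared directory named here is hypothetical.

    import os

    def mount_point(path: str) -> str:
        """Walk up the directory tree until the filesystem's mount point is reached."""
        path = os.path.abspath(path)
        while not os.path.ismount(path):
            path = os.path.dirname(path)
        return path

    # A hypothetical shared directory: applications refer to this path whether
    # it lives on a local disk, a second drive, or an NFS share on another machine.
    print(mount_point("/home/shared/reports"))   # e.g. "/home" or "/"

Under Windows drive letters, by contrast, moving the same data to a bigger disk typically changes the path itself (say, from D: to E:), which is exactly the hardware-specific limitation described above.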

The Human Experience With Technology

The world is full of technology; almost everything you see is the result of technology. Our houses, cars, buildings, streets, lights, even simple things like spoons, pencils, and nail clippers are all examples of technology. We use it every day without even thinking about how it affects us. We don't think about how much a part of our society it has become, or what life would be like without it. We don't question our technology once we become accustomed to it, but maybe we should. We should think about what technology has brought us, and what it enables us to do.

Is there anything dangerous about our dependence on technology? Of course there is. Being dependent on anything can be dangerous. The good and bad consequences of our dependence have to be measured constantly. If the lights go out, if the car won't start, if my computer crashes, these illustrate the weaknesses in technology that result in dangerous consequences. Yet those possibilities and realities do not deter us from using technology, or from designing new technologies. A technological failure can be a threat, but the worse threat to us is ourselves.

In a recent interview with Simonetta Rodriguez (an engineer and recent graduate of the Massachusetts Institute of Technology in the field of civil and environmental engineering), I asked if she felt that there were any dangers in humans' dependence on technology. She said, "If the computers go down no work gets done. But before computers we got a lot less work done all the time. The greater danger is how we use technology." We must use care when making decisions regarding technology.

If we do not pay close attention to the later consequences of decisions, we could find ourselves facing much greater threats than that of the lights going out. Consider the possibility in the near future of human cloning. What rights will a human clone have? Whose responsibility will they be? We face questions and fears such as these when we use technology. These fears will, no doubt, not stop our efforts to make our lives easier and more comfortable, because technology brings us added comfort.

Whether it is comfort in our homes or the comfort of knowing that we will not die if we break a bone, we design most things with comfort as our ultimate goal. Regarding this issue Ms. Rodriguez noted, "Why would we develop something that makes our lives harder?" She is right; why would we? Where is the survival advantage? Human nature drives us to find ways to make our lives easier; that is part of what makes us human. However, some argue that the technology we design has a serious downside: stress. Has technology really added stress to our lives? I believe that it has changed the nature of the stress.

Instead of worrying about cave lions and wolves, we worry about traffic on the beltway and how much money we have. For the most part, life-threatening situations do not cause most of our stress. Instead we have lower-level stresses that last longer. Stress in both forms leads to serious health problems. Let's say that you are walking in the woods. Suddenly you spot a very large bear. Your pulse quickens, your pupils dilate, you start to sweat, your body releases adrenaline, and your digestion stops: this is stress. Your body prepares you to fight or to run.

Then you see that the bear is walking away from you. You relax a little, but your body remains active: your immune system is suppressed, fat is converted to fuel, and cortisol is released. Cortisol regulates your metabolism and immunity, but it is toxic over time. All this is the normal bodily response to something stressful. Now let's say that you are driving to work and you get stuck in a traffic jam. This is another stressful situation, and your body reacts much the same way as if you had just seen a bear. Today, traffic jams occur more often than bears in the woods, so traffic results in ongoing stress.

But is this to say that we should stop using technology? Absolutely not. We must balance the stress technology causes with the good things that it offers us. I think that most people balance the stresses, or they would not continue using technology. People will not use something that adds too much stress to their lives. Everything that we build, everything that we invent, is only successful if the advantages are greater than the disadvantages. If we developed any technology that put more stress on us than it was worth, we would discard the technology and develop something else.

Some people feel that technology is controlling their lives and the world. I say that technology only controls the lives of people who allow it to control their lives. Something can only control your life if you let it. Technology itself cannot control anything except what it was made to control. It is human beings who decide what the technology should and will control. We are the masters and makers of technology. If we did not invent it, there would not be any. Technology is and does what we want it to do. It has no mind of its own; it cannot force anyone to use it, only humans can do that.

If a person feels he or she depends on the computer too much, he or she should cut down on its use, or stop using it altogether. Humans survived without computers, and we could do it again should we choose to do so. Few would choose to do this. Technology does not have a mind of its own. It cannot decide for itself to launch nuclear missiles at the U.S.; humans must decide that. We built this destructive technology; we must control it. Technology is not dangerous to humans, humans are dangerous to humans.

Evolution of Technology

David Suzuki and Holly Dressel's book From Naked Ape to Superspecies provides an intriguing and shocking view into technology and culture in today's society. Their opinions, which are based on various experiences and observations made over the years, suggest that human beings will eventually bring about the destruction of the natural world. "Human beings and the natural world are on a collision course. Many of our current practices put at serious risk the future for human society and may so alter the living world that it will be unable to sustain life in the manner we know.

Fundamental changes are urgent. No more than one or a few years remain before the chance to avert the threats we now confront will be lost and the prospects for humanity immeasurably diminished." *1 For millions of years, the earth has maintained a life-supporting biosphere in which all organisms coexist. Life is created and shaped by other life forms in a continuous, interlocking pattern that all creatures rely on for survival.

Even the smallest organisms have their place in the great scheme of things, and even the top carnivores need them to stay alive. Human interference, such as pollution, has devastated the world's natural balance to such a point that "The fate of every ecosystem on the planet is now determined by human activity." *2 The failure of the Biosphere project (which was intended to prove that we have learned enough about the world to recreate it) provided powerful evidence of how little we actually understand the natural systems that sustain us.